Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Japanese OCR via NDLOCR-Lite (National Diet Library). Trigger on 'OCR this image', '日文OCR', 'recognize Japanese text', or any request to extract text from Ja...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
- Fresh install: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
- Upgrade: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Local Japanese OCR powered by NDLOCR-Lite from Japan's National Diet Library. Runs on CPU (Apple Silicon / x86), no GPU or API key required.
| Target | Quality |
| --- | --- |
| Printed Japanese (活字) | Excellent |
| Vertical text (縦書き) | Excellent |
| English text | Good |
| Handwritten Japanese (手書き) | Experimental |
Run scripts/ocr-cli.sh from the skill root directory:

```bash
<SKILL_ROOT>/scripts/ocr-cli.sh <image_path>          # → plain text to stdout
<SKILL_ROOT>/scripts/ocr-cli.sh <image_path> --json   # → JSON with bounding boxes
<SKILL_ROOT>/scripts/ocr-cli.sh <image_path> --viz    # → also saves visualization
<SKILL_ROOT>/scripts/ocr-cli.sh <dir_path>            # → batch all images in dir
```
text (default): one line per detected text region.

json:

```
{
  "contents": [[
    {
      "boundingBox": [[x1,y1],[x1,y2],[x2,y1],[x2,y2]],
      "text": "recognized text",
      "confidence": 0.95,
      "isVertical": "true"
    }
  ]],
  "imginfo": { "img_width": 1920, "img_height": 1080 }
}
```

viz: saves a viz_<filename> bounding-box overlay image to the output directory.
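For scripted use, the JSON mode is the easiest to consume. The sketch below is a minimal example, not part of the skill itself: it shells out to ocr-cli.sh --json, assumes the JSON documented above is printed to stdout, and flattens the nested contents structure into a list of regions. The SKILL_ROOT path is a placeholder you would point at your own install.

```python
#!/usr/bin/env python3
"""Minimal sketch: call ocr-cli.sh --json and summarize the detected regions."""
import json
import subprocess
import sys

SKILL_ROOT = "/path/to/skill"  # placeholder: adjust to your extracted skill folder


def ocr_regions(image_path: str) -> list[dict]:
    """Run the CLI in JSON mode and flatten the per-page region lists."""
    result = subprocess.run(
        [f"{SKILL_ROOT}/scripts/ocr-cli.sh", image_path, "--json"],
        capture_output=True, text=True, check=True,
    )
    payload = json.loads(result.stdout)
    # "contents" is a list of region lists; flatten it into one list of dicts.
    return [region for page in payload["contents"] for region in page]


if __name__ == "__main__":
    for r in ocr_regions(sys.argv[1]):
        print(f'{r["confidence"]:.2f}  vertical={r["isVertical"]}  {r["text"]}')
```

Invoked as `python summarize_ocr.py page.jpg`, it prints one line per region with its confidence, orientation flag, and recognized text.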
- Speed: ~2-3 seconds per image on Apple Silicon (CPU)
- Formats: JPG, PNG, TIFF, JP2, BMP
- Charset: ~7000 characters (JIS kanji + kana + ASCII + Greek)
- Layout detection: DEIMv2 (ONNX)
- Text recognition: PARSeq cascade (30/50/100 char models, ONNX)
- Reading order: xy-cut algorithm
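The reading-order step refers to the classic xy-cut layout algorithm. The sketch below is an illustrative reimplementation, not the skill's actual code: it recursively splits region bounding boxes along the widest empty gap on either axis and assumes horizontal, top-to-bottom reading order (vertical Japanese would order columns right-to-left instead).

```python
"""Illustrative xy-cut: recursively order bounding boxes into reading order.

Boxes are (x1, y1, x2, y2) tuples. This is a sketch of the general technique,
not NDLOCR-Lite's implementation.
"""


def xy_cut(boxes):
    if len(boxes) <= 1:
        return list(boxes)

    def largest_gap(axis):
        # Project boxes onto one axis and find the widest empty gap in the union
        # of their intervals; return (gap_width, cut_position) or None.
        spans = sorted((b[axis], b[axis + 2]) for b in boxes)
        best, cut_at, reach = 0.0, None, spans[0][1]
        for lo, hi in spans[1:]:
            if lo > reach and lo - reach > best:
                best, cut_at = lo - reach, (reach + lo) / 2
            reach = max(reach, hi)
        return (best, cut_at) if cut_at is not None else None

    x_gap, y_gap = largest_gap(0), largest_gap(1)
    candidates = [(g[0], axis, g[1]) for axis, g in ((0, x_gap), (1, y_gap)) if g]
    if not candidates:
        # No clean whitespace gap: fall back to top-to-bottom, left-to-right order.
        return sorted(boxes, key=lambda b: (b[1], b[0]))

    # Split along the axis with the widest gap, then recurse into each half.
    _, axis, cut = max(candidates)
    first = [b for b in boxes if b[axis + 2] <= cut]
    second = [b for b in boxes if b[axis + 2] > cut]
    return xy_cut(first) + xy_cut(second)
```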