Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Intelligent PDF and image to Markdown converter using Ollama GLM-OCR with smart content detection (text/table/figure)
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Uses the Ollama GLM-OCR model to intelligently recognize text, tables, and figures in PDF pages, applying the most appropriate prompts for OCR processing and outputting structured Markdown documents.
- Smart Content Detection: automatically identifies page content type (text/table/figure)
- Mixed Mode: splits pages into multiple regions to process different content types
- Multiple Processing Modes: supports text, table, figure, mixed, and auto modes
- PDF Page-by-Page Processing: converts the PDF to images and processes each page
- Image OCR: supports OCR for single images
- Custom Prompts: adjustable OCR prompts based on requirements
- Flexible Configuration: customizable Ollama host, port, and model
- uv Package Management: uses uv for Python dependency management
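Under the hood, each page image is sent to a locally running Ollama instance. A minimal sketch of building such a request (the endpoint and payload shape follow Ollama's public `/api/generate` API, which accepts base64-encoded images for multimodal models; the helper name is illustrative, not from this tool's source):

```python
import base64

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_ocr_request(image_path: str, prompt: str, model: str = "glm-ocr:q8_0") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    Images are passed as base64 strings in the "images" field;
    "stream": False asks for one complete (non-streamed) response.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "prompt": prompt,
        "images": [image_b64],
        "stream": False,
    }

# Sending it would then be a single POST, e.g.:
#   requests.post(OLLAMA_URL, json=build_ocr_request("page.png", "Extract all text"))
```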
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull glm-ocr:q8_0

# Install poppler-utils (for PDF to image conversion)
sudo apt install poppler-utils  # Debian/Ubuntu
brew install poppler            # macOS

# Install uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```bash
cd skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow
```
```bash
npx clawhub install pdf-ocr-tool
```
```bash
# Clone or download skill
git clone <repo> ~/.openclaw/workspace/skills/pdf-ocr-tool

# Create virtual environment and install dependencies
cd ~/.openclaw/workspace/skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow

# Run post-install script
bash hooks/post-install.sh
```
```bash
# Auto-detect content type (recommended)
python ocr_tool.py --input document.pdf --output result.md

# Specify processing mode
python ocr_tool.py --input document.pdf --output result.md --mode text
python ocr_tool.py --input document.pdf --output result.md --mode table
python ocr_tool.py --input document.pdf --output result.md --mode figure

# Mixed mode: split page into regions
python ocr_tool.py --input document.pdf --output result.md --granularity region

# Process a single image
python ocr_tool.py --input image.png --output result.md --mode mixed
```
```bash
# Specify Ollama host and port
python ocr_tool.py --input document.pdf --output result.md \
    --host localhost --port 11434

# Use a different model
python ocr_tool.py --input document.pdf --output result.md \
    --model glm-ocr:q8_0

# Custom prompt
python ocr_tool.py --input image.png --output result.md \
    --prompt "Convert this table to Markdown format, keeping rows and columns aligned"

# Save figure region images
python ocr_tool.py --input document.pdf --output result.md --save-images
```
```bash
# Set default configuration
export OLLAMA_HOST="localhost"
export OLLAMA_PORT="11434"
export OCR_MODEL="glm-ocr:q8_0"

# Run
python ocr_tool.py --input document.pdf --output result.md
```
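Resolving these environment variables with sensible fallbacks is straightforward; a sketch of such a loader (the actual `ocr_tool.py` may resolve defaults differently, and CLI flags like `--host`/`--port` would typically take precedence):

```python
import os

def load_config() -> dict:
    """Resolve Ollama connection settings from the environment.

    Falls back to the tool's documented defaults when a
    variable is unset.
    """
    return {
        "host": os.environ.get("OLLAMA_HOST", "localhost"),
        "port": int(os.environ.get("OLLAMA_PORT", "11434")),
        "model": os.environ.get("OCR_MODEL", "glm-ocr:q8_0"),
    }
```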
| Mode | Description | Use Case |
|------|-------------|----------|
| auto | Auto-detect content type | General use (default) |
| text | Pure text recognition | Academic papers, articles, reports |
| table | Table recognition | Data tables, financial reports |
| figure | Chart/figure recognition | Statistical charts, flowcharts, diagrams |
| mixed | Mixed mode | Pages with multiple content types |
When using --granularity region:
- The page is split vertically into multiple regions (default: 3)
- Each region is independently analyzed for content type
- The corresponding prompt is used for OCR
- Final results are combined into a complete Markdown document
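The vertical split amounts to simple bounding-box arithmetic. A sketch assuming equal-height regions (the function name is illustrative, not from the tool's source):

```python
def region_boxes(page_width: int, page_height: int, n_regions: int = 3):
    """Split a page vertically into (left, top, right, bottom) boxes.

    The last box absorbs any rounding remainder so the regions
    tile the full page height exactly.
    """
    step = page_height // n_regions
    boxes = []
    for i in range(n_regions):
        top = i * step
        bottom = page_height if i == n_regions - 1 else (i + 1) * step
        boxes.append((0, top, page_width, bottom))
    return boxes

# Each box can then be passed to PIL's Image.crop() before OCR.
```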
```markdown
# PDF to Markdown Result

**Total Pages**: 15
**Model**: glm-ocr:q8_0
**Mode**: auto
**Generated**: 2026-02-27T01:00:00+08:00

---

## Page 1

*Type: mixed*

### Region 1 (text)

[OCR recognized text content]

### Region 2 (table)

<table>
<tr><th>Column 1</th><th>Column 2</th></tr>
<tr><td>Data 1</td><td>Data 2</td></tr>
</table>

### Region 3 (figure)

[Chart description]

---
```
```markdown
# image.png OCR Result

Model: glm-ocr:q8_0
Mode: table

---

[OCR recognized result]
```
The tool includes four built-in prompt templates in the prompts/ directory:
```
Analyze the chart or image in this region:
1. Chart type (bar, line, pie, flowchart, etc.)
2. Titles and axis labels
3. Data trends and key observations
4. Important values and anomalies
Describe in Markdown format.
```
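Selecting a template by detected content type can be a simple lookup against the prompts/ directory. A sketch (the template file names here are assumptions; check the actual contents of prompts/):

```python
from pathlib import Path

# Assumed template file names; adjust to the real files under prompts/.
PROMPT_FILES = {
    "text": "text.txt",
    "table": "table.txt",
    "figure": "figure.txt",
    "mixed": "mixed.txt",
}

def load_prompt(content_type: str, prompts_dir: str = "prompts") -> str:
    """Return the prompt template for a detected content type.

    Unknown types fall back to the plain-text prompt.
    """
    name = PROMPT_FILES.get(content_type, PROMPT_FILES["text"])
    return Path(prompts_dir, name).read_text(encoding="utf-8")
```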
```python
import subprocess

# Process PDF (auto mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/document.pdf",
    "--output", "/tmp/result.md",
    "--mode", "auto",
])

# Read result
with open("/tmp/result.md", "r") as f:
    markdown_content = f.read()

# Process single image (table mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/table.png",
    "--output", "/tmp/table.md",
    "--mode", "table",
])

# Mixed mode for complex PDF
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/mixed.pdf",
    "--output", "/tmp/mixed.md",
    "--granularity", "region",  # Split into regions
    "--save-images",            # Save figure images
])
```
```bash
ollama pull glm-ocr:q8_0
```
```bash
ollama serve
```
```bash
sudo apt install poppler-utils  # Debian/Ubuntu
brew install poppler            # macOS
```
- Try different modes: --mode text or --mode mixed
- Use custom prompts: --prompt "your prompt here"
- Check image quality (resolution, clarity)
- Try mixed mode: --granularity region
```bash
cd skills/pdf-ocr-tool
source .venv/bin/activate
uv sync  # Reinstall all dependencies
```
- Ollama API Documentation
- GLM-OCR Model Page
- poppler-utils
- uv Package Manager
- v1.2.0 - English prompts, install-deps.sh, fixed .gitignore
- v1.1.0 - Added mixed mode, region splitting, pyproject.toml
- v1.0.0 - Initial version with basic OCR functionality
This tool is developed and maintained by the OpenClaw community.
MIT License