
PDF OCR Tool

Intelligent PDF and image to Markdown converter using Ollama GLM-OCR with smart content detection (text/table/figure)



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md, _meta.json, analyzer.py, hooks/install-deps.sh, hooks/post-install.sh

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of stepping through the install manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.3.0

Documentation

Primary doc: SKILL.md (27 sections)

PDF OCR Tool - Intelligent PDF to Markdown Converter

Uses the Ollama GLM-OCR model to intelligently recognize text, tables, and figures in PDF pages, applying the most appropriate prompts for OCR processing and outputting structured Markdown documents.

Features

βœ… Smart Content Detection: Automatically identifies page content type (text/table/figure) βœ… Mixed Mode: Splits pages into multiple regions for processing different content types βœ… Multiple Processing Modes: Supports text, table, figure, mixed, and auto modes βœ… PDF Page-by-Page Processing: Converts PDF to images and processes each page βœ… Image OCR: Supports OCR for single images βœ… Custom Prompts: Adjustable OCR prompts based on requirements βœ… Flexible Configuration: Customizable Ollama host, port, and model βœ… uv Package Management: Uses uv for Python dependency management

1. Prerequisites

```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull glm-ocr:q8_0

# Install poppler-utils (for PDF to image conversion)
sudo apt install poppler-utils   # Debian/Ubuntu
brew install poppler             # macOS

# Install uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Install with uv (Recommended)

```shell
cd skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow
```

3. Install via ClawHub

```shell
npx clawhub install pdf-ocr-tool
```

4. Manual Installation

```shell
# Clone or download skill
git clone <repo> ~/.openclaw/workspace/skills/pdf-ocr-tool

# Create virtual environment and install dependencies
cd ~/.openclaw/workspace/skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow

# Run post-install script
bash hooks/post-install.sh
```

Basic Usage

```shell
# Auto-detect content type (recommended)
python ocr_tool.py --input document.pdf --output result.md

# Specify processing mode
python ocr_tool.py --input document.pdf --output result.md --mode text
python ocr_tool.py --input document.pdf --output result.md --mode table
python ocr_tool.py --input document.pdf --output result.md --mode figure

# Mixed mode: split page into regions
python ocr_tool.py --input document.pdf --output result.md --granularity region

# Process a single image
python ocr_tool.py --input image.png --output result.md --mode mixed
```

Advanced Configuration

```shell
# Specify Ollama host and port
python ocr_tool.py --input document.pdf --output result.md \
  --host localhost --port 11434

# Use a different model
python ocr_tool.py --input document.pdf --output result.md \
  --model glm-ocr:q8_0

# Custom prompt
python ocr_tool.py --input image.png --output result.md \
  --prompt "Convert this table to Markdown format, keeping rows and columns aligned"

# Save figure region images
python ocr_tool.py --input document.pdf --output result.md --save-images
```

Environment Configuration

```shell
# Set default configuration
export OLLAMA_HOST="localhost"
export OLLAMA_PORT="11434"
export OCR_MODEL="glm-ocr:q8_0"

# Run
python ocr_tool.py --input document.pdf --output result.md
```
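The usual precedence for options like these is: explicit CLI flag, then environment variable, then built-in default. The sketch below illustrates that lookup order; `resolve` is a hypothetical helper for illustration, not part of the tool's code.

```python
import os

# Hypothetical sketch of config precedence: CLI flag > environment
# variable > built-in default. Names mirror the env vars above.
DEFAULTS = {"host": "localhost", "port": "11434", "model": "glm-ocr:q8_0"}
ENV_KEYS = {"host": "OLLAMA_HOST", "port": "OLLAMA_PORT", "model": "OCR_MODEL"}

def resolve(option, cli_value=None):
    """Return the effective value for one config option."""
    if cli_value is not None:  # an explicit --host/--port/--model wins
        return cli_value
    return os.environ.get(ENV_KEYS[option], DEFAULTS[option])
```

With this order, `export OLLAMA_PORT=9999` changes the default, but `--port 8080` still overrides it for a single run.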

Processing Modes

| Mode | Description | Use Case |
|------|-------------|----------|
| auto | Auto-detect content type | General use (default) |
| text | Pure text recognition | Academic papers, articles, reports |
| table | Table recognition | Data tables, financial reports |
| figure | Chart/figure recognition | Statistical charts, flowcharts, diagrams |
| mixed | Mixed mode | Pages with multiple content types |
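A single-content mode essentially selects one prompt, while `auto` and `mixed` defer to per-region detection. A rough sketch of that dispatch (the `PROMPTS` dict and `prompt_for` helper are illustrative, not the tool's actual implementation):

```python
# Illustrative mode-to-prompt dispatch; prompt texts are shortened
# versions of the templates described in this README.
PROMPTS = {
    "text":   "Convert the text in this region to Markdown format.",
    "table":  "Convert the table in this region to Markdown table format.",
    "figure": "Analyze the chart or image in this region.",
}

def prompt_for(mode):
    """Return the fixed prompt for a mode, or None for auto/mixed."""
    if mode not in ("auto", "mixed") and mode not in PROMPTS:
        raise ValueError(f"unknown mode: {mode}")
    # auto/mixed pick prompts per region, so no single prompt applies
    return PROMPTS.get(mode)
```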

Mixed Mode (Granularity)

When using `--granularity region`:

  • The page is split vertically into multiple regions (default: 3)
  • Each region is independently analyzed for content type
  • The corresponding prompt is used for OCR on each region
  • Final results are combined into a complete Markdown document
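The default three-way vertical split comes down to simple box arithmetic over the rendered page image. `region_boxes` below is a hypothetical helper showing the geometry, not the tool's code:

```python
def region_boxes(width, height, granularity=3):
    """Split a page vertically into `granularity` stacked regions.

    Returns (left, top, right, bottom) boxes; the last box absorbs
    any remainder so the regions cover the full page height.
    """
    step = height // granularity
    boxes = []
    for i in range(granularity):
        top = i * step
        bottom = height if i == granularity - 1 else (i + 1) * step
        boxes.append((0, top, width, bottom))
    return boxes
```

Each box would then be cropped out of the page image and sent through content-type detection and OCR independently.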

PDF Output Example

```markdown
# PDF to Markdown Result

**Total Pages**: 15
**Model**: glm-ocr:q8_0
**Mode**: auto
**Generated**: 2026-02-27T01:00:00+08:00

---

## Page 1

*Type: mixed*

### Region 1 (text)

[OCR recognized text content]

### Region 2 (table)

<table>
<tr><th>Column 1</th><th>Column 2</th></tr>
<tr><td>Data 1</td><td>Data 2</td></tr>
</table>

### Region 3 (figure)

[Chart description]

![Chart](./images/page_1_region_3.png)

---
```

Image Output Example

```markdown
# image.png OCR Result

Model: glm-ocr:q8_0
Mode: table

---

[OCR recognized result]
```

Prompt Templates

The tool includes four built-in prompt templates in the prompts/ directory:

Text Mode (prompts/text.md)

Convert the text in this region to Markdown format.

  • Preserve paragraph structure and heading levels
  • Handle lists correctly
  • Preserve mathematical formulas
  • Maintain citations and references

Table Mode (prompts/table.md)

Convert the table in this region to Markdown table format.

  • Maintain row and column alignment
  • Preserve all data and values
  • Handle merged cells
  • Preserve headers and units

Figure Mode (prompts/figure.md)

Analyze the chart or image in this region:

  1. Chart type (bar, line, pie, flowchart, etc.)
  2. Titles and axis labels
  3. Data trends and key observations
  4. Important values and anomalies

Describe in Markdown format.

Using in OpenClaw

```python
import subprocess

# Process PDF (auto mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/document.pdf",
    "--output", "/tmp/result.md",
    "--mode", "auto",
])

# Read result
with open("/tmp/result.md", "r") as f:
    markdown_content = f.read()

# Process single image (table mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/table.png",
    "--output", "/tmp/table.md",
    "--mode", "table",
])

# Mixed mode for complex PDF
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/mixed.pdf",
    "--output", "/tmp/mixed.md",
    "--granularity", "region",  # Split into regions
    "--save-images",            # Save figure images
])
```

Troubleshooting

Model Not Installed

```shell
ollama pull glm-ocr:q8_0
```

Service Not Running

```shell
ollama serve
```

Missing pdftoppm

```shell
sudo apt install poppler-utils   # Debian/Ubuntu
brew install poppler             # macOS
```

Poor OCR Results

  • Try different modes: `--mode text` or `--mode mixed`
  • Use custom prompts: `--prompt "your prompt here"`
  • Check image quality (resolution, clarity)
  • Try mixed mode: `--granularity region`

Dependency Issues

```shell
cd skills/pdf-ocr-tool
source .venv/bin/activate
uv sync  # Reinstall all dependencies
```

Related Resources

  • Ollama API Documentation
  • GLM-OCR Model Page
  • poppler-utils
  • uv Package Manager

Version History

  • v1.2.0 - English prompts, install-deps.sh, fixed .gitignore
  • v1.1.0 - Added mixed mode, region splitting, pyproject.toml
  • v1.0.0 - Initial version with basic OCR functionality

Credits

This tool is developed and maintained by the OpenClaw community.

License

MIT License

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 Scripts · 2 Docs · 1 Config
  • SKILL.md Primary doc
  • README.md Docs
  • analyzer.py Scripts
  • hooks/install-deps.sh Scripts
  • hooks/post-install.sh Scripts
  • _meta.json Config