
pdf-ocr-layout

A multimodal deep document-parsing tool built on Zhipu GLM-OCR, GLM-4.7, and GLM-4.6V. Use when you need to:
  • extract tables from documents (PDF/images) with high precision and convert them to Markdown;
  • automatically crop illustrations and charts out of document pages into standalone files;
  • run deep semantic understanding of extracted charts (GLM-4.6V visual analysis);
  • run logical analysis of extracted table data (GLM-4.7 text analysis).
Core architecture: 1. Visual extraction: GLM-OCR. 2. Semantic understanding: GLM-4.7 (plain text/tables) + GLM-4.6V (multimodal/images).


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md, SKILL_zh.md, script/glm_ocr_pipeline.py, script/glm_understanding.py, script/glm_ocr_extract.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than walking through the setup manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.0.2

Documentation

Primary doc: SKILL.md (11 sections)

GLM-OCR Multimodal Deep Analysis

This tool builds a high-precision document parsing pipeline: GLM-OCR extracts layout elements, GLM-4.7 performs logical interpretation of table data, and GLM-4.6V performs multimodal visual interpretation of images and charts.

Pipeline Implementation Architecture

This Skill consists of two core script stages, orchestrated through glm_ocr_pipeline.py:

1. Extraction Stage (scripts/glm_ocr_extract.py)

  • Core model: GLM-OCR
  • Function: physical layout analysis of the document
  • Output: table HTML extracted and cleaned to Markdown; chart images automatically cropped to standalone files from their Bbox coordinates; an intermediate JSON capturing the full-page reading order
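The table-cleaning step in the extraction stage can be sketched as an HTML-table-to-Markdown conversion. A minimal stdlib-only illustration follows (the real script lists beautifulsoup4 as a dependency and likely uses it instead); the function name `html_table_to_markdown` is hypothetical.

```python
# Minimal HTML-table -> Markdown cleaner, sketching the extraction stage's
# table handling. Uses only the stdlib html.parser; assumes a simple table
# whose first row is the header.
from html.parser import HTMLParser

class _TableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

def html_table_to_markdown(html):
    parser = _TableParser()
    parser.feed(html)
    header, *body = parser.rows
    lines = ["| " + " | ".join(header) + " |",
             "|" + "---|" * len(header)]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)
```

This covers only well-formed `<tr>`/`<th>`/`<td>` markup; merged cells (`rowspan`/`colspan`) would need extra handling.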

2. Understanding Stage (scripts/glm_understanding.py)

  • Core models: GLM-4.7 (text) / GLM-4.6V (visual)
  • Function: deep semantic reasoning over the extracted content
  • Logic:
      • Tables: GLM-4.7 analyzes the business meaning of the Markdown table data against the full-text context
      • Charts: GLM-4.6V performs multimodal visual analysis on the cropped image together with the full-text context
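The table/chart routing described above might look like the following sketch. The function name `build_understanding_request` is hypothetical, the model identifiers are taken verbatim from this document, and the message shape follows the common chat-completions style; the actual zhipuai request format may differ.

```python
# Hypothetical routing logic for the understanding stage: tables go to the
# text model as Markdown, images go to the vision model as base64 content
# parts alongside the document context.
import base64

TEXT_MODEL, VISION_MODEL = "glm-4.7", "glm-4.6v"  # names as given in this doc

def build_understanding_request(element, doc_context):
    if element["type"] == "table":
        prompt = ("Full document context:\n" + doc_context +
                  "\n\nAnalyze the business meaning of this table:\n" +
                  element["content_info"])
        return {"model": TEXT_MODEL,
                "messages": [{"role": "user", "content": prompt}]}
    # image: content_info holds the path of the cropped chart file
    with open(element["content_info"], "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"model": VISION_MODEL,
            "messages": [{"role": "user", "content": [
                {"type": "image_url", "image_url": {"url": b64}},
                {"type": "text",
                 "text": "Context:\n" + doc_context + "\n\nExplain this figure."}]}]}
```

A real call would hand the returned dict to the zhipuai client; error handling and caption injection are omitted.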

Command Line Invocation

```shell
# Run the complete pipeline: extraction -> cropping -> understanding.
# Supports .pdf, .jpg, .png and other input formats.
python scripts/glm_ocr_pipeline.py \
  --file_path "/data/report_page.jpg" \
  --output_dir "/data/output"
```

API Parameter Description

| Parameter | Type | Required | Description |
|---|---|---|---|
| file_path | string | ✅ | Absolute path to input file (supports .pdf, .png, .jpg) |
| output_dir | string | ✅ | Result output directory (used to save cropped images and JSON reports) |
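The CLI surface implied by these two parameters can be sketched with argparse; `build_arg_parser` is a hypothetical name, not necessarily what glm_ocr_pipeline.py defines internally.

```python
# Sketch of the pipeline's command-line interface: two required named
# arguments matching the documented parameters.
import argparse

def build_arg_parser():
    p = argparse.ArgumentParser(description="GLM-OCR document parsing pipeline")
    p.add_argument("--file_path", required=True,
                   help="Absolute path to input file (.pdf, .png, .jpg)")
    p.add_argument("--output_dir", required=True,
                   help="Directory for cropped images and JSON reports")
    return p

args = build_arg_parser().parse_args(
    ["--file_path", "/data/report_page.jpg", "--output_dir", "/data/output"])
print(args.file_path)  # → /data/report_page.jpg
```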

Return Result Structure (JSON)

The tool returns a list containing layout elements and their deep understanding:

```json
[
  {
    "type": "table",
    "bbox": [100, 200, 500, 600],
    "content_info": "| Revenue | Q1 |\n|---|---|\n| 100M | ... |",
    "deep_understanding": "(Generated by GLM-4.7) This table shows Q1 2024 revenue data. Combined with the 'market expansion strategy' mentioned in paragraph 3 of the body text, it can be seen that..."
  },
  {
    "type": "image",
    "bbox": [100, 700, 500, 900],
    "content_info": "/data/output/images/report_page_img_2.png",
    "deep_understanding": "(Generated by GLM-4.6V) This is a system architecture diagram. Visually, it shows the flow of clients connecting to servers through a Load Balancer. Combined with the title 'Fig 3' and context, this diagram is mainly used to illustrate..."
  }
]
```
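Downstream code can consume this result list directly. A small sketch, using the field names from the structure above (`summarize_results` is a hypothetical helper, not part of the package):

```python
# Group pipeline results by element type and pair each table/figure bbox
# with its generated analysis text.
import json

def summarize_results(results):
    summary = {"table": [], "image": []}
    for el in results:
        summary.setdefault(el["type"], []).append(
            {"bbox": el["bbox"], "analysis": el.get("deep_understanding", "")})
    return summary

results = json.loads("""[
  {"type": "table", "bbox": [100, 200, 500, 600],
   "content_info": "| Revenue | Q1 |", "deep_understanding": "Q1 revenue table"},
  {"type": "image", "bbox": [100, 700, 500, 900],
   "content_info": "/data/output/images/img_2.png",
   "deep_understanding": "Architecture diagram"}
]""")
summary = summarize_results(results)
print(len(summary["table"]), len(summary["image"]))  # → 1 1
```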

Environment Requirements

  • Environment variable ZHIPU_API_KEY must be configured
  • Python 3.8+
  • Dependencies: zhipuai, pillow, beautifulsoup4

1. Model Routing Strategy

  • Table: the Markdown content is passed to GLM-4.7, combined with the full-text Markdown context for logical reasoning
  • Image: the image is Base64-encoded and passed to GLM-4.6V, combined with OCR-extracted captions and the full-text context for multimodal understanding

2. Context Association

All understanding is based on the complete layout logic of the document (Markdown Context), not isolated fragment analysis.
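One way to realize this context association is to flatten the page's elements, in reading order, into a single Markdown context string that every per-element analysis call receives. The field names below mirror the result structure in this document; `build_page_context` is a hypothetical helper.

```python
# Assemble a full-page Markdown context from elements in reading order, so
# each table/chart is analyzed against the whole document, not in isolation.
def build_page_context(elements):
    parts = []
    for el in elements:
        if el["type"] == "table":
            parts.append(el["content_info"])                  # Markdown table text
        elif el["type"] == "image":
            parts.append("[figure: " + el["content_info"] + "]")  # placeholder
        else:
            parts.append(el.get("content_info", ""))          # plain text block
    return "\n\n".join(parts)

ctx = build_page_context([
    {"type": "text", "content_info": "Q1 results overview."},
    {"type": "table", "content_info": "| Revenue | Q1 |"},
])
```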

3. PDF Processing

Multi-page PDFs are processed first-page-only by default. For batch processing, extend the loop logic at the script level.
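Extending the loop could look like the sketch below, assuming a per-page entry point `process_page(file_path, page_number, output_dir)` exists or is added to the pipeline script (a hypothetical name; adapt it to the actual function in glm_ocr_pipeline.py).

```python
# Iterate over all pages of a document, writing each page's results to its
# own subdirectory; process_page is injected so the loop stays testable.
import os

def process_all_pages(file_path, page_count, output_dir, process_page):
    results = []
    for page in range(1, page_count + 1):
        page_dir = os.path.join(output_dir, "page_%d" % page)
        results.append(process_page(file_path, page, page_dir))
    return results

# usage with a stub standing in for the real per-page call:
report = process_all_pages("/data/doc.pdf", 3, "/data/out",
                           lambda f, p, d: {"page": p, "output": d})
print(len(report))  # → 3
```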


Package contents

Included in package
3 scripts, 2 docs
  • SKILL.md (primary doc)
  • SKILL_zh.md (docs)
  • script/glm_ocr_extract.py (script)
  • script/glm_ocr_pipeline.py (script)
  • script/glm_understanding.py (script)