Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Read Calibre catalog data via calibredb against a Content server, and run a one-book analysis workflow that writes an HTML analysis block back to the book's comments while caching analysis state in SQLite. Use it for list/search/id lookups and for the AI reading pipeline on a selected book.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Use this skill for:
- Read-only catalog lookup (list/search/id)
- One-book AI reading workflow (export -> analyze -> cache -> comments HTML apply)
- calibredb available on PATH in the runtime where scripts are executed.
- ebook-convert available for text extraction.
- subagent-spawn-command-builder installed (for spawn payload generation).
- A reachable Calibre Content server URL in --with-library format: http://HOST:PORT/#LIBRARY_ID. Do not assume localhost/127.0.0.1; always pass an explicit, reachable HOST:PORT.

If auth is enabled:
- Preferred: set in /home/altair/.openclaw/.env:
  CALIBRE_USERNAME=<user>
  CALIBRE_PASSWORD=<password>
  Then pass only --password-env CALIBRE_PASSWORD (the username auto-loads from env).
- You can still override with an explicit --username <user>.
- Optional auth cache file: ~/.config/calibre-catalog-read/auth.json
- Avoid --save-plain-password unless explicitly requested.
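The auth resolution order above can be sketched as follows. This is a minimal illustration of the precedence rules, not the actual option handling in calibredb_read.mjs; the function name is hypothetical.

```python
import os

def resolve_auth(username_flag=None, password_env="CALIBRE_PASSWORD"):
    """Illustrative auth resolution: an explicit --username wins,
    otherwise CALIBRE_USERNAME is read from the environment; the
    password is always read from the variable named by --password-env,
    so it never appears in plain text on the command line."""
    username = username_flag or os.environ.get("CALIBRE_USERNAME")
    password = os.environ.get(password_env)
    if password is None:
        raise RuntimeError(f"environment variable {password_env} is not set")
    return username, password
```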
List books (JSON):

```sh
node skills/calibre-catalog-read/scripts/calibredb_read.mjs list \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --limit 50
```

Search books (JSON):

```sh
node skills/calibre-catalog-read/scripts/calibredb_read.mjs search \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --query 'series:"中公文庫"'
```

Get one book by id (JSON):

```sh
node skills/calibre-catalog-read/scripts/calibredb_read.mjs id \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --book-id 3
```

Run the one-book pipeline (analyze + comments HTML apply + cache):

```sh
uv run python skills/calibre-catalog-read/scripts/run_analysis_pipeline.py \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --book-id 3 --lang ja
```
Initialize the DB schema:

```sh
uv run python skills/calibre-catalog-read/scripts/analysis_db.py init \
  --db skills/calibre-catalog-read/state/calibre_analysis.sqlite
```

Check the current hash state:

```sh
uv run python skills/calibre-catalog-read/scripts/analysis_db.py status \
  --db skills/calibre-catalog-read/state/calibre_analysis.sqlite \
  --book-id 3 --format EPUB
```
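The hash-based skip that the status command supports can be sketched like this. The table layout and function names are illustrative assumptions; the real schema is whatever analysis_db.py init creates.

```python
import hashlib
import sqlite3

def ensure_schema(conn):
    # Hypothetical schema: one row per (book_id, fmt) holding the
    # SHA-256 of the extracted text from the last completed analysis.
    conn.execute("""CREATE TABLE IF NOT EXISTS analysis_state (
        book_id INTEGER, fmt TEXT, text_sha256 TEXT,
        PRIMARY KEY (book_id, fmt))""")

def needs_analysis(conn, book_id, fmt, text):
    """True when no cached hash exists or the extracted text changed."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    row = conn.execute(
        "SELECT text_sha256 FROM analysis_state WHERE book_id=? AND fmt=?",
        (book_id, fmt)).fetchone()
    return row is None or row[0] != digest

def record_analysis(conn, book_id, fmt, text):
    """Upsert the hash after a successful analyze + apply."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    conn.execute("INSERT OR REPLACE INTO analysis_state VALUES (?,?,?)",
                 (book_id, fmt, digest))
```

With this shape, re-running the pipeline on an unchanged book is a cheap no-op, while a re-converted or edited source triggers a fresh analysis.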
Use this split to avoid long blocking turns on chat listeners.
1. Validate user intent and the target book_id.
2. Confirm the subagent runtime knobs: model, thinking, runTimeoutSeconds.
3. Start the subagent and return a short progress reply quickly.
4. After the subagent result arrives, run the DB upsert + Calibre apply.
5. Report the final result to the user.
- Read the extracted source payload.
- Generate analysis JSON strictly by schema.
- Do not run metadata apply or user-facing channel actions.
Long-form content analysis generation. Multi-step heavy reasoning over full excerpts.
- One book per run.
- Prefer the asynchronous flow: quick ack first, final result after analysis.
- If analysis is unavailable, either ask the user or use the fallback only when explicitly acceptable.
Before the first subagent run in a session, confirm once:
- model
- thinking (low/medium/high)
- runTimeoutSeconds

Do not ask on every run. Reuse the confirmed settings for subsequent books in the same session unless the user asks to change them.
Book-reading analysis is a heavy task. Use a subagent with a lightweight model for analysis generation, then return results to the main agent for the cache/apply steps.

- Prompt template: references/subagent-analysis.prompt.md
- Input schema: references/subagent-input.schema.json
- Output schema: references/subagent-analysis.schema.json
- Input preparation helper: scripts/prepare_subagent_input.mjs (splits extracted text into multiple files to avoid read-tool single-line size issues)

Rules:
- Use the subagent only for heavy analysis generation; keep the main agent lightweight and non-blocking.
- In this environment, Python commands must use uv run python.
- Use the strict prompt template (references/subagent-analysis.prompt.md) as the mandatory base; do not send ad-hoc relaxed read instructions.
- Keep the final DB upsert and Calibre metadata apply in the main agent.
- Process one book per run.
- Confirm model/thinking/timeout once per session, then reuse; do not hardcode provider-specific model IDs in the skill.
- Configure callback/announce behavior and rate-limit fallbacks via the OpenClaw default model/subagent/fallback settings (not hardcoded in this skill).
- Exclude manga/comic-centric books from this text pipeline (skip when the title or tags indicate manga/comic).
- If the extracted text is too short, stop and ask the user for confirmation before continuing. The pipeline returns reason: low_text_requires_confirmation with prompt_en text.
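The two guard rules above (manga/comic exclusion and the short-text confirmation stop) can be sketched as a single pre-flight check. The keyword list and the length threshold are illustrative assumptions, not values taken from run_analysis_pipeline.py.

```python
# MANGA_MARKERS and MIN_CHARS are assumed values for illustration only.
MANGA_MARKERS = ("manga", "comic", "コミック", "漫画")
MIN_CHARS = 2000

def check_book(title, tags, extracted_text):
    """Pre-flight guard run before spawning the analysis subagent."""
    haystack = (title + " " + " ".join(tags)).lower()
    if any(marker in haystack for marker in MANGA_MARKERS):
        # Manga/comic-centric books are excluded from the text pipeline.
        return {"ok": False, "reason": "manga_comic_skipped"}
    if len(extracted_text) < MIN_CHARS:
        # Too little text: stop and ask the user before continuing.
        return {"ok": False, "reason": "low_text_requires_confirmation"}
    return {"ok": True, "reason": None}
```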
Do not hardcode user-language prose in the pipeline scripts. Generate user-visible analysis text from the subagent output, with the language controlled by user-selected settings and the lang input. The fallback local analysis in the scripts is generic and minimal; the preferred path is subagent output that follows the prompt template.
run_analysis_pipeline.py is a local script and does not call OpenClaw tools by itself. Subagent execution must be orchestrated by the agent layer using sessions_spawn.

Required runtime sequence:
1. The main agent prepares subagent_input.json + chunked source_files from the extracted text:

```sh
node skills/calibre-catalog-read/scripts/prepare_subagent_input.mjs \
  --book-id <id> --title "<title>" --lang ja \
  --text-path /tmp/book_<id>.txt --out-dir /tmp/calibre_subagent_<id>
```

2. The main agent uses the shared builder skill subagent-spawn-command-builder to generate the sessions_spawn payload, then calls sessions_spawn. Build with profile calibre-read and run-specific analysis task text. Use the generated JSON as-is (or merge minimal run-specific fields such as label/task text).
3. The subagent reads all source_files and returns schema-conformant analysis JSON.
4. The main agent passes that file via --analysis-json to run_analysis_pipeline.py for the DB/apply step.

If step 2 is skipped, the pipeline falls back to local minimal analysis (for emergency/testing only).
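What prepare_subagent_input.mjs does conceptually in step 1 can be sketched in a few lines of Python. The chunk size is a hypothetical value; the script's actual splitting rules (and its JS implementation) may differ.

```python
def split_text(text, max_chars=50_000):
    """Split extracted book text into read-tool-friendly chunks,
    breaking on newlines so chunk boundaries never cut a line."""
    chunks, buf, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > max_chars and buf:
            chunks.append("".join(buf))
            buf, size = [], 0
        buf.append(line)
        size += len(line)
    if buf:
        chunks.append("".join(buf))
    return chunks
```

Each chunk would then be written to its own file under the --out-dir so the subagent can read them one by one.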
For Discord/chat, always run as two separate turns.
1. Select one target book.
2. Build the spawn payload with subagent-spawn-command-builder (--profile calibre-read + run-specific --task).
3. Call sessions_spawn using that payload.
4. Record the run state (runId) via run_state.mjs upsert.
5. Reply to the user with the selected title + "running in background".
6. Stop the turn here.
Trigger: the completion announce/event for that run.

Run one command only, the completion handler: scripts/handle_completion.mjs (get -> apply -> remove, and fail on error). If the runId is missing, the handler returns stale_or_duplicate and does nothing. Send the completion/failure reply from the handler result.

Hard rule: never poll, wait, or apply in Turn A. Never keep a chat listener turn open waiting for subagent completion.
For one-book-at-a-time operation, keep a single JSON state file: skills/calibre-catalog-read/state/runs.json. Use runId (the subagent execution id) as the primary key.

Lifecycle:
- On spawn acceptance, upsert one record: runId, book_id, title, status: "running", started_at. Do not wait/poll inside the same chat turn.
- On the completion announce, load the record by runId and run apply.
- On a successful apply, delete that record immediately.
- On failure, set status: "failed" + error and keep the record for retry/debug.

Rules:
- Keep this file small and operational (active/failed records only).
- Ignore duplicate completion events when the record is already removed.
- If the record is missing at completion time, report it as a stale/unknown run and do not apply blindly.

Use the helper scripts (avoids ad-hoc env var mistakes):

```sh
# Turn A: register running task
node skills/calibre-catalog-read/scripts/run_state.mjs upsert \
  --state skills/calibre-catalog-read/state/runs.json \
  --run-id <RUN_ID> --book-id <BOOK_ID> --title "<TITLE>"

# Turn B: completion handler (preferred)
node skills/calibre-catalog-read/scripts/handle_completion.mjs \
  --state skills/calibre-catalog-read/state/runs.json \
  --run-id <RUN_ID> \
  --analysis-json /tmp/calibre_<BOOK_ID>/analysis.json \
  --with-library "http://HOST:PORT/#LIBRARY_ID" \
  --password-env CALIBRE_PASSWORD --lang ja
```
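The runs.json lifecycle above can be modeled as plain dict operations keyed by runId. This is a simplified sketch of what run_state.mjs and handle_completion.mjs maintain, not their actual implementation; the function names are hypothetical.

```python
import json
from pathlib import Path

def upsert_run(state_path, run_id, book_id, title):
    """Turn A: register the spawned run as 'running'."""
    path = Path(state_path)
    state = json.loads(path.read_text()) if path.exists() else {}
    state[run_id] = {"book_id": book_id, "title": title, "status": "running"}
    path.write_text(json.dumps(state, ensure_ascii=False, indent=2))

def handle_completion(state_path, run_id, apply_fn):
    """Turn B: apply results, then drop the record; unknown runIds are stale."""
    path = Path(state_path)
    state = json.loads(path.read_text()) if path.exists() else {}
    record = state.get(run_id)
    if record is None:
        # Already removed or never registered: ignore, do not apply blindly.
        return "stale_or_duplicate"
    try:
        apply_fn(record)
        del state[run_id]            # success: remove immediately
        result = "applied"
    except Exception as exc:
        record["status"] = "failed"  # failure: keep the record for retry/debug
        record["error"] = str(exc)
        result = "failed"
    path.write_text(json.dumps(state, ensure_ascii=False, indent=2))
    return result
```

Deleting on success is what makes duplicate completion events harmless: the second event finds no record and resolves to stale_or_duplicate.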