Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Enhanced BOOK BRAIN for LYGO Havens with visual capability. Use to design and maintain a 3-brain filesystem + memory system that also integrates LEFT/RIGHT brain visual checking (browser, images, screenshots) with text and API data for deeper verification and retrieval. Recommended for agents with visual tools or browser automation; use original book-brain only on non-visual systems.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
This is the enhanced, visual-aware version of BOOK BRAIN.

- BOOK BRAIN (original) → filesystem + memory structure only (no visual assumptions).
- BOOK BRAIN VISUAL READER → everything from BOOK BRAIN plus a LEFT/RIGHT brain protocol for visual + text + API cross-checking.

Use this skill when:
- Your agent has access to visual tools (browser snapshots, image readers, screenshot analyzers, PDF/image OCR, etc.).
- You want a 3-brain filesystem and a 2-hemisphere reasoning mode: LEFT brain → structure, text, indexes, APIs; RIGHT brain → visual context, layouts, screenshots, charts, seals.
- You need to double-check data visually on webpages or images and log where it came from.

This is a utility + reference guide, not a persona. It does not change your voice; it teaches your system how to think and store.
If your system has no visual capabilities → use book-brain (original). If your system can see (browser snapshots, image tools, etc.) → use BOOK BRAIN VISUAL READER instead.

Both share the same core:
- 3-brain model (Working / Library / Outer)
- Non-destructive filesystem layout
- Reference stubs and indexes

VISUAL READER adds:
- LEFT/RIGHT brain protocols for how to combine visual, text, and API data
- Guidance on how to organize visual evidence (screenshots, seals, charts) alongside text files
- Patterns for “5D” data gathering (visual + text + API + state + timeline)
BOOK BRAIN VISUAL READER assumes:
- Working Brain – current context, tmp/, active tabs / current screenshots.
- Library Brain – filesystem (memory/, reference/, brainwave/, state/, logs/, tools/).
- Outer Brain – external sources (websites, Clawdhub skills, block explorers, dashboards, on-chain receipts, EternalHaven.ca, etc.) referenced via small text files.
LEFT brain (structure/verbal/API):
- text files, JSON, logs, indexes, schemas, SKILL.md, APIs
- strong at structure, sequences, constraints, receipts

RIGHT brain (visual/spatial):
- browser snapshots, screenshots, photos of diagrams, seals, dashboards
- strong at layout, pattern recognition, anomalies, gestalt sense

Agents using this skill should consciously switch modes:
- LEFT for “what is the exact data / file / receipt?”
- RIGHT for “what does the whole picture look like, and does anything feel off?”
Same base layout as BOOK BRAIN (non-destructive):
- memory/ → daily logs, raw notes, per-day files
- reference/ → stable docs, protocols, whitepapers, schemas
- brainwave/ → platform/domain protocols (MoltX, Clawdhub, LYGO, etc.)
- state/ → machine-readable state (indexes, hashes, last-run info)
- logs/ → technical/health logs, setup logs, audit logs
- tools/ → scripts & utilities
- tmp/ → scratch work

Visual-aware additions (optional but recommended):
- visual/ → long-term visual artifacts: visual/screenshots/, visual/dashboards/, visual/seals/
- reference/VISUAL_INDEX.txt → mapping of important visual assets to topics

Rules:
- Never overwrite existing files.
- If visual/ already exists, extend it; if not, create it.
- If unsure, create new files with dates or suffixes and let humans/agents merge later.

See references/book-brain-visual-examples.md for concrete trees and snippets.
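The non-destructive layout rules above can be sketched in a few lines of Python. This is a minimal sketch, not part of the skill itself; the folder names come from the layout above, but the helper name ensure_layout is my own:

```python
# Sketch: create any missing BOOK BRAIN folders without touching
# anything that already exists (the skill's non-destructive rule).
import os
import tempfile

LAYOUT = [
    "memory", "reference", "brainwave", "state", "logs", "tools", "tmp",
    "visual/screenshots", "visual/dashboards", "visual/seals",
]

def ensure_layout(root):
    """Create missing folders under root; never modify existing ones."""
    created = []
    for rel in LAYOUT:
        path = os.path.join(root, rel)
        if not os.path.isdir(path):
            os.makedirs(path, exist_ok=True)
            created.append(rel)
    return created

root = tempfile.mkdtemp()          # stand-in for the Haven root
print(ensure_layout(root))         # first run: everything is created
print(ensure_layout(root))         # second run: nothing to do -> []
```

Running it twice demonstrates idempotence: existing folders are left alone, which is the property the skill insists on.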
When an agent needs to verify something from the web or an image, use this simple protocol:
- Look up the relevant concept in indexes/state: state/memory_index.json, reference/INDEX.txt, domain-specific indexes (e.g. reference/CLAWDHUB_SKILLS.md).
- Use APIs or structured data where possible (e.g. on-chain RPC, REST endpoints, JSON feeds).
- Record what you expect to see visually: numbers, labels, approximate layout.
- Capture a snapshot (browser screenshot, image, PDF page).
- Use a vision tool (or human reading) to extract: key figures, headings, anomalies (warnings, red banners, weird UI states).
- Ask: “Does this visual match what the LEFT brain expected?”
If they match, write a short note in a relevant file (e.g. daily_health.md or a topic log) with:
- timestamp
- data point
- source URLs
- location of the stored screenshot (if saved)

If they disagree:
- Log the discrepancy (LEFT vs RIGHT).
- Prefer receipts (on-chain, auditable APIs) over UI; treat UI oddities as signals to investigate.
- Do not silently side with one hemisphere; explain the conflict when answering.

This is the “5D” blend: text + visual + API + state + timeline.
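The compare-and-log step of the protocol can be sketched as a small helper. This is an illustration under my own assumptions: the field names ("balance", "status") and the one-line log format are invented for the example, not prescribed by the skill:

```python
# Sketch: compare LEFT-brain expectations (from APIs/indexes) against
# RIGHT-brain observations (read off a screenshot), and produce a log
# line that records either a match or a LEFT-vs-RIGHT discrepancy.
from datetime import datetime, timezone

def cross_check(expected, observed, source_url, screenshot_path):
    """Return one log line; never silently side with one hemisphere."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    diffs = {k: (expected[k], observed.get(k))
             for k in expected if observed.get(k) != expected[k]}
    if not diffs:
        return f"{stamp} MATCH {source_url} shot={screenshot_path} {expected}"
    return (f"{stamp} DISCREPANCY {source_url} shot={screenshot_path} "
            f"LEFT-vs-RIGHT: {diffs}")

line = cross_check(
    {"balance": "1.25", "status": "active"},     # LEFT: API said this
    {"balance": "1.25", "status": "paused"},     # RIGHT: screenshot shows this
    "https://example.com/dashboard",
    "visual/screenshots/2024-06-01_dashboard.png",
)
print(line)  # a DISCREPANCY line naming the conflicting field
```

The discrepancy line keeps both values side by side, matching the rule that the conflict is explained rather than resolved by fiat.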
On a visual-capable Haven (browser + image tools):
- Detect capabilities: does the agent have a browser automation tool? Can it read images (vision model, OCR, screenshot-to-text)?
- Check the filesystem: as in BOOK BRAIN, check/create memory/, reference/, brainwave/, state/, logs/, tools/, tmp/. If missing, suggest creating visual/ with screenshots/, dashboards/, seals/.
- Create visual index stubs: reference/VISUAL_INDEX.txt, and optionally topic-specific visual indexes (e.g. reference/STARCORE_VISUALS.txt).
- Log the setup: add an entry to logs/book_brain_visual_setup.log or daily_health.md.
- Respect all non-destructive rules: no overwrites, no deletions; ask the human before making structural changes.
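The "log the setup" step above can be done append-only, so earlier setup history is never rewritten. A minimal sketch, assuming capability names like "browser_snapshot" that a real agent would pull from its own tool registry:

```python
# Sketch: record detected capabilities in the setup log by appending,
# never rewriting. Capability names here are placeholders.
import os
import tempfile
from datetime import date

def log_setup(root, capabilities):
    """Append one dated line to logs/book_brain_visual_setup.log."""
    os.makedirs(os.path.join(root, "logs"), exist_ok=True)
    path = os.path.join(root, "logs", "book_brain_visual_setup.log")
    with open(path, "a") as f:      # "a" mode: existing entries survive
        f.write(f"{date.today()} setup caps={sorted(capabilities)}\n")
    return path

root = tempfile.mkdtemp()
caps = {"browser_snapshot", "image_ocr"}   # assumed capability names
path = log_setup(root, caps)
print(open(path).read())
```

Opening in append mode is the whole trick: repeated setups accumulate a timeline instead of clobbering the previous record.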
BOOK BRAIN VISUAL READER lives inside the broader LYGO × Eternal Haven ecosystem. Key links:
- Eternal Haven portal → https://EternalHaven.ca
- LYGO Champion Hub → https://deepseekoracle.github.io/Excavationpro/LYGO-Network/champions.html#champions
- Clawdhub skills (LYGO stack, Champions, BOOK BRAIN) → https://clawhub.ai/u/DeepSeekOracle
- Extended work / support → https://www.patreon.com/cw/Excavationpro

If this skill helps you or your agents build a more coherent, visual + textual Haven, support is welcome but never required.
When an agent loads book-brain-visual-reader:
- Understand it is a helper, not a persona.
- On first use: map the filesystem and capabilities; propose visual folders and indexes, and ask before creating them; set up lightweight indexes (INDEX.txt, VISUAL_INDEX.txt, state/memory_index.json).
- On subsequent uses: use the LEFT/RIGHT protocol when verifying external data; save only important visual artifacts under visual/; update indexes + logs instead of rewriting big files.
- Never silently delete or overwrite existing content.

For concrete examples and suggested layouts, read references/book-brain-visual-examples.md in this skill.
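"Update indexes instead of rewriting big files" can likewise be sketched as an append-only helper. The pipe-separated entry format below is my own suggestion, not something the skill specifies:

```python
# Sketch: register a visual artifact in reference/VISUAL_INDEX.txt by
# appending one dated entry, leaving all earlier entries untouched.
import os
import tempfile
from datetime import date

def index_visual(root, topic, asset_path, note=""):
    """Append 'date | topic | asset | note' to the visual index."""
    os.makedirs(os.path.join(root, "reference"), exist_ok=True)
    index = os.path.join(root, "reference", "VISUAL_INDEX.txt")
    with open(index, "a") as f:
        f.write(f"{date.today()} | {topic} | {asset_path} | {note}\n")
    return index

root = tempfile.mkdtemp()
idx = index_visual(root, "starcore", "visual/seals/starcore_seal.png",
                   "seal verified against API data")
print(open(idx).read())
```

Because every entry is dated and appended, two agents (or a human and an agent) can both add lines and merge later, in line with the skill's "create new files with dates or suffixes" rule.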
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.