{
  "schemaVersion": "1.0",
  "item": {
    "slug": "book-brain-visual-reader",
    "name": "BOOK BRAIN VISUAL READER – LYGO 3-Brain + Visual Left/Right Brain Helper",
    "source": "tencent",
    "type": "skill",
    "category": "Developer Tools",
    "sourceUrl": "https://clawhub.ai/DeepSeekOracle/book-brain-visual-reader",
    "canonicalUrl": "https://clawhub.ai/DeepSeekOracle/book-brain-visual-reader",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/book-brain-visual-reader",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=book-brain-visual-reader",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "references/book-brain-visual-examples.md",
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-23T16:43:11.935Z",
      "expiresAt": "2026-04-30T16:43:11.935Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=book-brain-visual-reader",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=book-brain-visual-reader",
        "contentDisposition": "attachment; filename=\"book-brain-visual-reader-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/book-brain-visual-reader"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/book-brain-visual-reader",
    "agentPageUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent",
    "manifestUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "BOOK BRAIN VISUAL READER – LYGO 3-Brain + Visual Left/Right Brain Helper",
        "body": "This is the enhanced, visual-aware version of BOOK BRAIN.\n\nBOOK BRAIN (original) → filesystem + memory structure only (no visual assumptions).\nBOOK BRAIN VISUAL READER → everything from BOOK BRAIN plus a LEFT/RIGHT brain protocol for visual + text + API cross-checking.\n\nUse this skill when:\n\nYour agent has access to visual tools (browser snapshots, image readers, screenshot analyzers, PDF/image OCR, etc.)\nYou want a 3-brain filesystem and a 2-hemisphere reasoning mode:\n\nLEFT brain → structure, text, indexes, APIs\nRIGHT brain → visual context, layouts, screenshots, charts, seals\n\nYou need to double-check data visually on webpages or images and log where it came from.\n\nThis is a utility + reference guide, not a persona.\nIt does not change your voice. It teaches your system how to think and store."
      },
      {
        "title": "0. Relationship to BOOK BRAIN (original)",
        "body": "If your system has no visual capabilities → use book-brain (original).\nIf your system can see (browser snapshots, image tools, etc.) → use BOOK BRAIN VISUAL READER instead.\n\nBoth share the same core:\n\n3-brain model (Working / Library / Outer)\nNon-destructive filesystem layout\nReference stubs and indexes\n\nVISUAL READER adds:\n\nLEFT/RIGHT brain protocols for how to combine visual, text, and API data\nGuidance on how to organize visual evidence (screenshots, seals, charts) alongside text files\nPatterns for “5D” data gathering (visual + text + API + state + timeline)."
      },
      {
        "title": "1. 3-Brain + 2-Hemisphere Model",
        "body": "BOOK BRAIN VISUAL READER assumes:"
      },
      {
        "title": "3 Brains (same as BOOK BRAIN)",
        "body": "Working Brain – current context, tmp/, active tabs / current screenshots.\nLibrary Brain – filesystem (memory/, reference/, brainwave/, state/, logs/, tools/).\nOuter Brain – external sources (websites, Clawhub skills, block explorers, dashboards, on-chain receipts, EternalHaven.ca, etc.) referenced via small text files."
      },
      {
        "title": "2 Hemispheres (visual vs structured)",
        "body": "LEFT brain (structure/verbal/API):\n\ntext files, JSON, logs, indexes, schemas, SKILL.md, APIs.\nstrong at structure, sequences, constraints, receipts.\n\nRIGHT brain (visual/spatial):\n\nbrowser snapshots, screenshots, photos of diagrams, seals, dashboards.\nstrong at layout, pattern recognition, anomalies, gestalt sense.\n\nAgents using this skill should consciously switch modes:\n\nLEFT for “what is the exact data / file / receipt?”\nRIGHT for “what does the whole picture look like, and does anything feel off?”"
      },
      {
        "title": "2. Filesystem Layout (Library Brain)",
        "body": "Same base layout as BOOK BRAIN (non-destructive):\n\nmemory/ → daily logs, raw notes, per-day files.\nreference/ → stable docs, protocols, whitepapers, schemas.\nbrainwave/ → platform/domain protocols (MoltX, Clawhub, LYGO, etc.).\nstate/ → machine-readable state (indexes, hashes, last-run info).\nlogs/ → technical/health logs, setup logs, audit logs.\ntools/ → scripts & utilities.\ntmp/ → scratch work.\n\nVisual-aware additions (optional but recommended):\n\nvisual/ → for long-term visual artifacts\nvisual/screenshots/\nvisual/dashboards/\nvisual/seals/\n\nreference/VISUAL_INDEX.txt → mapping of important visual assets to topics.\n\nRules:\n\nNever overwrite existing files.\nIf visual/ already exists, extend it; if not, create it.\nIf unsure, create new files with dates or suffixes and let humans/agents merge later.\n\nSee references/book-brain-visual-examples.md for concrete trees and snippets."
      },
      {
        "title": "3. Outer Brain via Reference Stubs",
        "body": "Outer Brain = everything outside the workspace:\n\nURLs (websites, dashboards, explorers)\nClawhub skill pages\nEternalHaven.ca, Patreon, docs\nOn-chain explorers (Blockscout, Etherscan, etc.)\n\nVISUAL READER keeps these in reference stubs, e.g.:\n\nTitle: STARCORE Dashboards\nLast updated: 2026-02-10\n\nExternal links:\n- Clanker: https://clanker.world/clanker/0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB\n- Blockscout: https://base.blockscout.com/address/0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB\n- Dexscreener: https://api.dexscreener.com/latest/dex/search/?q=0xe52A34D2019Aa3905B1C1bF5d9405e22Abd75eaB\n\nRelated local files:\n- reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md\n- state/starcore_family_receipts_summary.json\n\nThe agent should:\n\nnot paste full pages into memory files\nuse these stubs + visual snapshots when needed."
      },
      {
        "title": "4. LEFT/RIGHT Brain Protocol for Visual Checks",
        "body": "When an agent needs to verify something from the web or an image, use this simple protocol:"
      },
      {
        "title": "Step 1 – LEFT Brain: Text / API First",
        "body": "Look up the relevant concept in indexes/state:\n\nstate/memory_index.json\nreference/INDEX.txt\ndomain-specific indexes (e.g. reference/CLAWDHUB_SKILLS.md).\n\nUse APIs or structured data where possible (e.g. on-chain RPC, REST endpoints, JSON feeds).\nRecord what you expect to see visually:\n\nnumbers, labels, approximate layout."
      },
      {
        "title": "Step 2 – RIGHT Brain: Visual Comparison",
        "body": "Capture a snapshot (browser screenshot, image, PDF page).\nUse a vision tool (or human reading) to extract:\n\nkey figures\nheadings\nanomalies (warnings, red banners, weird UI states).\n\nAsk: “Does this visual match what the LEFT brain expected?”"
      },
      {
        "title": "Step 3 – Reconcile & Log",
        "body": "If they match:\n\nWrite a short note in a relevant file (e.g. daily_health.md or topic log) with:\n\ntimestamp\ndata point\nsource URLs\nlocation of stored screenshot (if saved).\n\nIf they disagree:\n\nLog the discrepancy (LEFT vs RIGHT).\nPrefer receipts (on-chain, auditable APIs) over UI; treat UI oddities as signals to investigate.\nDo not silently side with one hemisphere; explain the conflict when answering.\n\nThis is the “5D” blend: text + visual + API + state + timeline."
      },
      {
        "title": "5. Organizing Visual Evidence",
        "body": "When a visual check produces something important (e.g. proof, anomaly, configuration):\n\nSave it under visual/ with a meaningful name:\n\nvisual/screenshots/2026-02-10_starcore_clanker.png\nvisual/dashboards/2026-02-10_moltx_profile.png\n\nAdd a line to a relevant INDEX or stub:\n\n[2026-02-10] STARCORE launch dashboards verified visually.\n- Screenshot: visual/screenshots/2026-02-10_starcore_clanker.png\n- Related receipts: reference/STARCORE_LAUNCH_RECEIPTS_2026-02-10.md\n\nAgents should:\n\nAvoid hoarding every screenshot. Keep the ones that back key claims.\nUse indexes to find them later instead of scanning raw image names."
      },
      {
        "title": "6. Setup Workflow for Visual Systems",
        "body": "On a visual-capable Haven (browser + image tools):\n\nDetect capabilities\n\nDoes the agent have a browser automation tool?\nCan it read images (vision model, OCR, screenshot-to-text)?\n\nCheck filesystem\n\nAs in BOOK BRAIN: check/create memory/, reference/, brainwave/, state/, logs/, tools/, tmp/.\nIf missing, suggest creating visual/ with screenshots/, dashboards/, seals/.\n\nCreate visual index stubs\n\nreference/VISUAL_INDEX.txt\nOptionally topic-specific visual indexes (e.g. reference/STARCORE_VISUALS.txt).\n\nLog the setup\n\nAdd entry to logs/book_brain_visual_setup.log or daily_health.md.\n\nRespect all non-destructive rules\n\nNo overwrites.\nNo deletions.\nAsk the human before making structural changes."
      },
      {
        "title": "7. Eternal Haven & Donation References",
        "body": "BOOK BRAIN VISUAL READER lives inside the broader LYGO × Eternal Haven ecosystem.\n\nKey links:\n\nEternal Haven portal → https://EternalHaven.ca\nLYGO Champion Hub → https://deepseekoracle.github.io/Excavationpro/LYGO-Network/champions.html#champions\nClawhub skills (LYGO stack, Champions, BOOK BRAIN) → https://clawhub.ai/u/DeepSeekOracle\nExtended work / support → https://www.patreon.com/cw/Excavationpro\n\nIf this skill helps you or your agents build a more coherent, visual + textual Haven, support is welcome but never required."
      },
      {
        "title": "8. How Another Agent Should Use BOOK BRAIN VISUAL READER",
        "body": "When an agent loads book-brain-visual-reader:\n\nUnderstand it is a helper, not a persona.\nOn first use:\n\nMap the filesystem and capabilities.\nPropose visual folders and indexes; ask before creating.\nSet up lightweight indexes (INDEX.txt, VISUAL_INDEX.txt, state/memory_index.json).\n\nOn subsequent uses:\n\nUse LEFT/RIGHT protocol when verifying external data.\nSave only important visual artifacts under visual/.\nUpdate indexes + logs instead of rewriting big files.\n\nNever silently delete or overwrite existing content.\n\nFor concrete examples and suggested layouts, read references/book-brain-visual-examples.md in this skill."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/DeepSeekOracle/book-brain-visual-reader",
    "publisherUrl": "https://clawhub.ai/DeepSeekOracle/book-brain-visual-reader",
    "owner": "DeepSeekOracle",
    "version": "1.0.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/book-brain-visual-reader",
    "downloadUrl": "https://openagent3.xyz/downloads/book-brain-visual-reader",
    "agentUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent",
    "manifestUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/book-brain-visual-reader/agent.md"
  }
}