{
  "schemaVersion": "1.0",
  "item": {
    "slug": "fabrik-codek",
    "name": "Fabrik Codek",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/ikchain/fabrik-codek",
    "canonicalUrl": "https://clawhub.ai/ikchain/fabrik-codek",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadMode": "redirect",
    "downloadUrl": "/downloads/fabrik-codek",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=fabrik-codek",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "installMethod": "Manual import",
    "extraction": "Extract archive",
    "prerequisites": [
      "OpenClaw"
    ],
    "packageFormat": "ZIP package",
    "includedAssets": [
      "SKILL.md"
    ],
    "primaryDoc": "SKILL.md",
    "quickSetup": [
      "Download the package from Yavira.",
      "Extract the archive and review SKILL.md first.",
      "Import or place the package into your OpenClaw setup."
    ],
    "agentAssist": {
      "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
      "steps": [
        "Download the package from Yavira.",
        "Extract it into a folder your agent can access.",
        "Paste one of the prompts below and point your agent at the extracted folder."
      ],
      "prompts": [
        {
          "label": "New install",
          "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
        },
        {
          "label": "Upgrade existing",
          "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
        }
      ]
    },
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/fabrik-codek"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    },
    "downloadPageUrl": "https://openagent3.xyz/downloads/fabrik-codek",
    "agentPageUrl": "https://openagent3.xyz/skills/fabrik-codek/agent",
    "manifestUrl": "https://openagent3.xyz/skills/fabrik-codek/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/fabrik-codek/agent.md"
  },
  "agentAssist": {
    "summary": "Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.",
    "steps": [
      "Download the package from Yavira.",
      "Extract it into a folder your agent can access.",
      "Paste one of the prompts below and point your agent at the extracted folder."
    ],
    "prompts": [
      {
        "label": "New install",
        "body": "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
      },
      {
        "label": "Upgrade existing",
        "body": "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
      }
    ]
  },
  "documentation": {
    "source": "clawhub",
    "primaryDoc": "SKILL.md",
    "sections": [
      {
        "title": "Fabrik-Codek",
        "body": "A 7B model that knows you is worth more than a 400B that doesn't.\n\nFabrik-Codek is a personal cognitive architecture that runs locally with any Ollama model. It doesn't just retrieve documents — it builds a knowledge graph from how you work, measures your expertise per topic, routes tasks to the right model with the right retrieval strategy, observes whether its responses actually helped, and refines itself over time."
      },
      {
        "title": "How It Works",
        "body": "You work — Fabrik-Codek captures code changes, session transcripts, decisions, and learnings in a local datalake\nKnowledge extraction — An 11-step pipeline extracts entities and relationships into a knowledge graph alongside a vector DB\nPersonal profiling — Analyzes your datalake to learn your domain, stack, patterns, and tooling preferences\nCompetence scoring — Measures how deep your knowledge is per topic (Expert / Competent / Novice / Unknown)\nAdaptive routing — Classifies each query by task type and topic, selects the right model, adapts retrieval depth, and builds a 3-layer system prompt\nOutcome tracking — Infers whether responses were useful from conversational patterns (zero friction, no manual feedback)\nSelf-correction — Adjusts retrieval parameters for underperforming task/topic combinations\n\nEvery interaction feeds back into the system. Fabrik-Codek itself makes zero outbound network requests — it only connects to Ollama and optionally Meilisearch on localhost. Model downloads are handled by Ollama's own CLI (ollama pull), not by Fabrik-Codek."
      },
      {
        "title": "Setup",
        "body": "Configure as an MCP server in your openclaw.json or ~/.claude/settings.json:\n\n{\n  \"mcpServers\": {\n    \"fabrik-codek\": {\n      \"command\": \"fabrik\",\n      \"args\": [\"mcp\"]\n    }\n  }\n}\n\nFor network access (SSE transport):\n\n{\n  \"mcpServers\": {\n    \"fabrik-codek\": {\n      \"command\": \"fabrik\",\n      \"args\": [\"mcp\", \"--transport\", \"sse\", \"--port\", \"8421\"]\n    }\n  }\n}"
      },
      {
        "title": "First Run",
        "body": "After installing, initialize and build the knowledge base:\n\nfabrik init                              # Set up config, download models\nfabrik graph build --include-transcripts  # Build knowledge graph from sessions\nfabrik rag index                         # Index datalake into vector DB\nfabrik profile build                     # Build your personal profile\nfabrik competence build                  # Build competence map"
      },
      {
        "title": "fabrik_ask",
        "body": "Ask a question to the local LLM with optional context from the knowledge base. The Task Router automatically classifies your query, selects the right model based on your competence, adapts retrieval strategy, and builds a personalized system prompt.\n\nuse_rag=true — vector search context\nuse_graph=true — hybrid context (vector + graph + full-text)\n\nExample: \"How should I handle database connection pooling?\""
      },
      {
        "title": "fabrik_search",
        "body": "Semantic vector search across your accumulated knowledge. Returns the most relevant documents, patterns, and examples by meaning — not just keywords.\n\nExample: \"Find examples of retry logic with exponential backoff\""
      },
      {
        "title": "fabrik_graph_search",
        "body": "Traverse the knowledge graph to find entities (technologies, patterns, strategies) and their relationships. Useful for understanding how concepts connect in your experience.\n\ndepth — how many hops to traverse (default: 2)\n\nExample: \"What technologies are related to FastAPI in my knowledge graph?\""
      },
      {
        "title": "fabrik_fulltext_search",
        "body": "Full-text keyword search via Meilisearch. Use this for exact keyword or phrase matching when you know the specific terms. Optional — the system works without Meilisearch installed.\n\nExample: \"Search for 'EXPLAIN ANALYZE' in my knowledge base\""
      },
      {
        "title": "fabrik_graph_stats",
        "body": "Knowledge graph statistics: entity count, edge count, connected components, type breakdown, and relation types."
      },
      {
        "title": "fabrik_status",
        "body": "System health check: Ollama availability, RAG engine, knowledge graph, full-text search, and datalake status."
      },
      {
        "title": "Available MCP Resources",
        "body": "URIDescriptionfabrik://statusSystem component statusfabrik://graph/statsKnowledge graph statisticsfabrik://configCurrent configuration (sanitized)"
      },
      {
        "title": "When to Use Each Tool",
        "body": "ScenarioToolWhyCoding question needing contextfabrik_ask with use_graph=trueGets hybrid retrieval + personalized promptFind similar patterns or examplesfabrik_searchSemantic similarity across all knowledgeUnderstand how concepts relatefabrik_graph_searchGraph traversal shows entity relationshipsFind exact terms or phrasesfabrik_fulltext_searchBM25 keyword matchingCheck if knowledge base is healthyfabrik_statusComponent health checkUnderstand knowledge distributionfabrik_graph_statsEntity/edge counts and types"
      },
      {
        "title": "The Cognitive Loop",
        "body": "The system gets smarter the more you use it:\n\nYou work → Flywheel captures it → Pipeline extracts knowledge\n    ↑                                        ↓\nStrategy Optimizer ← Outcome Tracker ← LLM responds with context\n    ↓                                        ↑\n    └──── adjusts retrieval ──→ Task Router ─┘\n                                    ↓\n                  Profile + Competence + task-specific prompt\n\nPersonal Profile learns your domain, stack, and preferences from your datalake\nCompetence Model scores expertise per topic using 4 signals (entry count, graph density, recency, outcome rate)\nTask Router classifies queries into 7 task types, detects topic, selects model, adapts retrieval\nOutcome Tracker infers response quality from conversational patterns (topic change = accepted, reformulation = rejected)\nStrategy Optimizer adjusts retrieval parameters for weak spots\nGraph Temporal Decay fades stale knowledge, reinforces recent activity\nSemantic Drift Detection alerts when an entity's context shifts between graph builds\nContext Gate decides whether to inject RAG context at all (skips for generic queries where context would be noise)\nRelevance Filter drops retrieved chunks with low query-text token overlap, preventing domain-specific knowledge from contaminating generic answers"
      },
      {
        "title": "Requirements",
        "body": "Fabrik-Codek installed from source (git clone + pip install -e \".[dev]\")\nOllama running locally with any model (e.g., ollama pull qwen2.5-coder:7b)\nOptional: Meilisearch for full-text search (system works without it)\n\nNote on installation: Fabrik-Codek is an instruction-only skill — there is no automated installer. You install it manually from the GitHub repository via git clone + pip install -e \".[dev]\". This lets you audit the full source code before installing. The skill itself contains documentation and MCP server configuration, not executable code."
      },
      {
        "title": "No external network calls",
        "body": "Fabrik-Codek makes zero outbound network requests. It connects only to services running on your own machine:\n\nOllama at localhost:11434 — your locally running LLM server (for inference and embeddings)\nMeilisearch at localhost:7700 (optional) — your locally running search engine\n\nNo telemetry, no analytics, no phone-home. Verify in the source: grep -r \"requests\\.\\|httpx\\.\\|urllib\" src/ — all HTTP calls target localhost only. The only network activity that occurs during setup is ollama pull, which is Ollama's own CLI downloading models from ollama.ai/library — Fabrik-Codek does not initiate or control these downloads."
      },
      {
        "title": "What fabrik init does",
        "body": "fabrik init performs these local-only operations:\n\nChecks Python version (>= 3.11)\nDetects if Ollama is running at localhost:11434\nCreates a .env config file in the current directory\nCreates local data directories (./data/embeddings/, ./data/graphdb/, ./data/profile/)\nPulls Ollama models via ollama pull — models are downloaded by Ollama itself from ollama.ai/library, not by Fabrik-Codek\n\nFabrik-Codek does not download any files from any server. Model downloads are handled entirely by Ollama's own CLI."
      },
      {
        "title": "Data access scope",
        "body": "Reads (all local, all opt-in, never automatic):\n\nPathWhatWhenWhy~/.claude/projects/*/Session transcript JSONL files (already on disk from Claude Code)Only when you explicitly run fabrik learn process or fabrik graph build --include-transcriptsExtracts entities and reasoning patterns to build the knowledge graph. This path is NOT in configPaths because Fabrik-Codek does not write to it — it is read-only and user-initiated../data/ or FABRIK_DATALAKE_PATHYour datalake (training pairs, captures, metadata)During graph build, rag index, profile build, competence buildSource data for building the knowledge base and personal profile\n\nWrites (all local):\n\nPathWhat./data/embeddings/LanceDB vector index./data/graphdb/NetworkX knowledge graph (JSON)./data/profile/Personal profile, competence map, strategy overrides (JSON)./data/01-raw/outcomes/Outcome tracking records (JSONL)\n\nAll paths are declared in the skill metadata configPaths. The skill never writes outside these directories."
      },
      {
        "title": "Network transport",
        "body": "Default: stdio — no network listener, no ports opened, no exposure\nOptional: sse — starts an HTTP server bound to 127.0.0.1:8421 by default (localhost only, not reachable from other machines)\nIf you change the SSE bind address to 0.0.0.0, your indexed data would be accessible over the network. Do not do this without proper firewall/ACL rules"
      },
      {
        "title": "Session transcript privacy",
        "body": "The fabrik learn command reads Claude Code session transcripts, which may contain sensitive data (code, credentials, conversation history). This command is opt-in — you must run it manually. It does not run in the background or on a schedule unless you explicitly configure fabrik learn watch. Review what's in your ~/.claude/projects/ before indexing."
      },
      {
        "title": "Source verification",
        "body": "Fully open source at github.com/ikchain/Fabrik-Codek (MIT license). Clone the repo and audit before installing."
      }
    ]
  },
  "trust": {
    "sourceLabel": "tencent",
    "provenanceUrl": "https://clawhub.ai/ikchain/fabrik-codek",
    "publisherUrl": "https://clawhub.ai/ikchain/fabrik-codek",
    "owner": "ikchain",
    "version": "1.10.0",
    "license": null,
    "verificationStatus": "Indexed source record"
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/fabrik-codek",
    "downloadUrl": "https://openagent3.xyz/downloads/fabrik-codek",
    "agentUrl": "https://openagent3.xyz/skills/fabrik-codek/agent",
    "manifestUrl": "https://openagent3.xyz/skills/fabrik-codek/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/fabrik-codek/agent.md"
  }
}