Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Set up and operate a personal knowledge system using Supabase (pgvector) and OpenRouter. Five structured tables — thoughts (inbox log), people, projects, ide...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
Install brief: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."

Upgrade brief: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
When intelligence is abundant, context becomes the scarce resource. This skill is context architecture — a persistent, searchable knowledge layer that turns your agent into a personal knowledge manager. Two opinionated primitives:

- Supabase — your database, and so much more. PostgreSQL + pgvector. Stores thoughts, people, projects, ideas, and tasks as structured data with vector embeddings. REST API built in. Your data, your infrastructure. Models come and go; your context persists. And once you have a Supabase project, you've unlocked the foundation for everything else you'll want to build — the Second Brain is just the beginning.
- OpenRouter — your AI gateway. One API key, every model. Embeddings and LLM calls for classification and routing. Swap models by changing a string. Future-proof by design.

Everything else — how you capture thoughts, how you retrieve them, what you build on top — is application layer. The skill covers the foundation. If the tables don't exist yet, see {baseDir}/references/setup.md
These are the operational concepts behind the system. Understanding them helps you operate correctly.

| Block | What It Does | Implementation |
| --- | --- | --- |
| Drop Box | One frictionless capture point | Everything goes to thoughts first |
| Sorter | AI classification + routing | LLM classifies type, then routes to structured table |
| Form | Consistent data contracts | Each table has a defined schema |
| Filing Cabinet | Source of truth per category | people, projects, ideas, admin tables |
| Bouncer | Confidence threshold | confidence < 0.6 = don't route, stay in inbox |
| Receipt | Audit trail | thoughts row logs what came in, where it went |
| Tap on the Shoulder | Proactive surfacing | Daily digest queries (application layer) |
| Fix Button | Agent-mediated corrections | Move records between tables on user request |

Full conceptual framework: {baseDir}/references/concepts.md
| Table | Role | Key Fields |
| --- | --- | --- |
| thoughts | Inbox Log / audit trail | content, embedding, metadata (type, topics, people, confidence, routed_to) |
| people | Relationship tracking | name (unique), context, follow_ups, tags, embedding |
| projects | Work tracking | name, status, next_action, notes, tags, embedding |
| ideas | Insight capture | title, summary, elaboration, topics, embedding |
| admin | Task management | name, due_date, status, notes, embedding |

Every table has semantic search via its own match_* function. Cross-table search via search_all.
When a thought is classified:

| Type | Route | Action |
| --- | --- | --- |
| person_note | people | Upsert: create person or append to existing context |
| task | admin | Insert new task (status=pending) |
| idea | ideas | Insert new idea |
| observation | none | Stays in thoughts only |
| reference | none | Stays in thoughts only |

If confidence < 0.6, don't route. Leave in thoughts, tell user.
```shell
# 1. Embed
EMBEDDING=$(curl -s -X POST "https://openrouter.ai/api/v1/embeddings" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/text-embedding-3-small", "input": "Sarah mentioned she is thinking about leaving her job to start consulting"}' \
  | jq -c '.data[0].embedding')

# 2. Classify (run in parallel with step 1)
METADATA=$(curl -s -X POST "https://openrouter.ai/api/v1/chat/completions" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "response_format": {"type": "json_object"}, "messages": [{"role": "system", "content": "Extract metadata from the captured thought. Return JSON with: type (observation/task/idea/reference/person_note), topics (1-3 tags), people (array), action_items (array), dates_mentioned (array), confidence (0-1), suggested_route (people/projects/ideas/admin/null), extracted_fields (structured data for destination table)."}, {"role": "user", "content": "Sarah mentioned she is thinking about leaving her job to start consulting"}]}' \
  | jq -r '.choices[0].message.content')

# 3. Store in thoughts (the Receipt)
curl -s -X POST "$SUPABASE_URL/rest/v1/thoughts" \
  -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Content-Type: application/json" \
  -H "Prefer: return=representation" \
  -d "[{\"content\": \"Sarah mentioned she is thinking about leaving her job to start consulting\", \"embedding\": $EMBEDDING, \"metadata\": $METADATA}]"

# 4. Route based on classification (if confidence >= 0.6)
```

Full pipeline with routing logic: {baseDir}/references/ingest.md
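Step 4 can be sketched as a local decision, assuming METADATA has the shape the classifier prompt requests. The route_table helper is hypothetical (not part of the skill's reference files), and the actual POST is left as a comment so the sketch runs without a live Supabase project:

```shell
# Stand-in for the METADATA produced in step 2 (illustrative values only)
METADATA='{"type": "person_note", "confidence": 0.9, "suggested_route": "people"}'

# Hypothetical helper: map a classified type to its destination table
route_table() {
  case "$1" in
    person_note) echo "people" ;;
    task) echo "admin" ;;
    idea) echo "ideas" ;;
    *) echo "" ;; # observation / reference stay in thoughts
  esac
}

TYPE=$(echo "$METADATA" | jq -r '.type')
PASSES=$(echo "$METADATA" | jq '.confidence >= 0.6') # the Bouncer
TABLE=$(route_table "$TYPE")

if [ "$PASSES" = "true" ] && [ -n "$TABLE" ]; then
  echo "route to: $TABLE"
  # curl -s -X POST "$SUPABASE_URL/rest/v1/$TABLE" ... (same auth headers as step 3)
else
  echo "no route: stays in thoughts"
fi
```

With the sample payload this prints `route to: people`; a low-confidence or observation/reference payload falls through to the else branch.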
```shell
QUERY_EMBEDDING=$(curl -s -X POST "https://openrouter.ai/api/v1/embeddings" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/text-embedding-3-small", "input": "career changes"}' \
  | jq -c '.data[0].embedding')

curl -s -X POST "$SUPABASE_URL/rest/v1/rpc/match_thoughts" \
  -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"query_embedding\": $QUERY_EMBEDDING, \"match_threshold\": 0.5, \"match_count\": 10, \"filter\": {}}"
```
```shell
curl -s -X POST "$SUPABASE_URL/rest/v1/rpc/search_all" \
  -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"query_embedding\": $QUERY_EMBEDDING, \"match_threshold\": 0.5, \"match_count\": 20}"
```

Returns table_name, record_id, label, detail, similarity, created_at from all tables.
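Those hits can be rendered into a readable summary with jq. The RESULTS payload below is illustrative sample data, but the field names follow the documented return shape:

```shell
# Illustrative search_all response (two hits), matching the documented fields
RESULTS='[
  {"table_name": "thoughts", "record_id": "a1", "label": "Sarah consulting note", "similarity": 0.83},
  {"table_name": "people", "record_id": "b2", "label": "Sarah", "similarity": 0.79}
]'

# One line per hit: similarity, source table, label
SUMMARY=$(echo "$RESULTS" | jq -r '.[] | "\(.similarity)  [\(.table_name)]  \(.label)"')
echo "$SUMMARY"
```

With the sample payload, the first line printed is `0.83  [thoughts]  Sarah consulting note`.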
```shell
curl -s "$SUPABASE_URL/rest/v1/projects?status=eq.active&select=name,next_action,notes&order=updated_at.desc" \
  -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY"
```
```shell
curl -s "$SUPABASE_URL/rest/v1/admin?status=eq.pending&select=name,due_date,notes&order=due_date.asc" \
  -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY"
```
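These queries return JSON arrays. A digest-style rendering (the "Tap on the Shoulder" idea) can be sketched with jq; the TASKS payload below is illustrative sample data, not real output:

```shell
# Illustrative response from the pending-tasks query above (sample data)
TASKS='[
  {"name": "Renew passport", "due_date": "2025-07-01", "notes": null},
  {"name": "File taxes", "due_date": "2025-07-15", "notes": "use new accountant"}
]'

# One digest line per task, in the order returned (due_date ascending)
DIGEST=$(echo "$TASKS" | jq -r '.[] | "- \(.due_date): \(.name)"')
echo "$DIGEST"
```

With the sample payload, the first line printed is `- 2025-07-01: Renew passport`.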
When content arrives from any source:

1. Embed the text via OpenRouter (1536-dim vector)
2. Classify via OpenRouter LLM (type, topics, people, confidence, suggested route)
3. Log in thoughts (the Receipt — always, regardless of routing)
4. Bounce check — if confidence < 0.6, stop here
5. Route to structured table based on type (the Sorter)
6. Confirm to the user what was captured and where it was filed

Full pipeline details: {baseDir}/references/ingest.md
Every thought gets classified with:

| Field | Type | Values |
| --- | --- | --- |
| type | string | observation, task, idea, reference, person_note |
| topics | string[] | 1-3 short topic tags (always at least one) |
| people | string[] | People mentioned (empty if none) |
| action_items | string[] | Implied to-dos (empty if none) |
| dates_mentioned | string[] | Dates in YYYY-MM-DD format (empty if none) |
| source | string | Where it came from: slack, signal, cli, manual, etc. |
| confidence | float | LLM classification confidence (0-1). The Bouncer uses this. |
| routed_to | string | Which table the thought was filed into (null if unrouted) |
| routed_id | string | UUID of the record in the destination table (null if unrouted) |
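One way to sanity-check a classifier payload against this contract before storing it, sketched with jq. The REQUIRED list is taken from the table above; the validation step itself is an optional addition, not part of the skill:

```shell
# Illustrative classifier payload to validate
METADATA='{"type": "idea", "topics": ["consulting"], "people": [], "action_items": [], "dates_mentioned": [], "source": "cli", "confidence": 0.7, "routed_to": null, "routed_id": null}'

# Keys the contract requires at classification time (routed_to/routed_id are set later)
REQUIRED='["type","topics","people","action_items","dates_mentioned","source","confidence"]'

# Array subtraction: any required keys absent from the payload (empty string = valid)
MISSING=$(echo "$METADATA" | jq -r --argjson req "$REQUIRED" '$req - keys | join(",")')
if [ -z "$MISSING" ]; then
  echo "metadata ok"
else
  echo "missing fields: $MISSING"
fi
```

Running this against a payload that drops, say, confidence would print `missing fields: confidence` instead.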
- Conceptual framework: {baseDir}/references/concepts.md
- First-time setup: {baseDir}/references/setup.md
- Database schema (SQL): {baseDir}/references/schema.md
- Ingest pipeline details: {baseDir}/references/ingest.md
- Retrieval operations: {baseDir}/references/retrieval.md
- OpenRouter API patterns: {baseDir}/references/openrouter.md
| Variable | Purpose |
| --- | --- |
| SUPABASE_URL | Supabase project REST base URL |
| SUPABASE_SERVICE_ROLE_KEY | Supabase auth (full access) |
| OPENROUTER_API_KEY | OpenRouter API key |
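A preflight sketch that verifies all three variables are set before any API call. The fallback values are placeholders so the sketch runs standalone; in real use you would omit them and let the check fail loudly:

```shell
# Placeholder fallbacks for illustration only; use your real values
SUPABASE_URL="${SUPABASE_URL:-https://example.supabase.co}"
SUPABASE_SERVICE_ROLE_KEY="${SUPABASE_SERVICE_ROLE_KEY:-placeholder}"
OPENROUTER_API_KEY="${OPENROUTER_API_KEY:-placeholder}"

# Collect any variables that are still empty
MISSING=""
[ -n "$SUPABASE_URL" ] || MISSING="$MISSING SUPABASE_URL"
[ -n "$SUPABASE_SERVICE_ROLE_KEY" ] || MISSING="$MISSING SUPABASE_SERVICE_ROLE_KEY"
[ -n "$OPENROUTER_API_KEY" ] || MISSING="$MISSING OPENROUTER_API_KEY"

if [ -z "$MISSING" ]; then
  echo "environment ok"
else
  echo "missing:$MISSING" >&2
fi
```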
Why service_role key? Supabase provides two keys: anon (public, respects RLS) and service_role (full access, bypasses RLS). This skill uses service_role because:

- This is a single-user personal knowledge base, not a multi-tenant app
- Your agent IS the trusted server-side component
- The RLS policy restricts access to service_role only — the most restrictive option
- Using the anon key would require loosening RLS to allow anonymous access to your thoughts, which is worse

Data sent to OpenRouter: All captured text (thoughts, names, action items) is sent to OpenRouter for embedding and classification. This is inherent to the design — you need AI to understand meaning. Don't capture highly sensitive information unless you accept OpenRouter's data handling policies.

Key handling: Store SUPABASE_SERVICE_ROLE_KEY and OPENROUTER_API_KEY securely. Never commit them to public repos. Rotate periodically. In OpenClaw, store them in openclaw.json under skills.entries or as environment variables.

Built by Limited Edition Jonathan • natebjones.com