Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Store and query structured data without planning schemas upfront. Use when you need to remember information, track entities across conversations, build knowledge bases, ingest API data, store user preferences, create CRM systems, or maintain any persistent state. Automatically evolves data structure as you discover new fields. No migrations, no schema design - just store data and query it.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Use KameleonDB when you need to:
- Remember information across conversations - Store contacts, companies, tasks, notes
- Track entities and relationships - Connect customers to orders, projects to tasks
- Build knowledge bases - Accumulate facts, documents, insights over time
- Ingest external data - Store API responses, CSV imports, web scraping results
- Create custom CRM/PM systems - Track leads, deals, projects, tickets
- Store user preferences - Remember settings, history, favorites
- Maintain session state - Persist data between agent runs
- Query historical data - Search, filter, analyze stored information
- No Upfront Planning: Start storing data immediately. Don't know all fields yet? Add them later when you discover them.
- Learn as You Go: Found a new field in your data? Just add it. Old records won't break.
- Query Like a Database: Generate SQL queries to search, filter, join, and analyze your data.
- Self-Optimizing: Get hints when queries slow down, with exact commands to speed them up.
- Track Your Reasoning: Every schema change records why you made it, building an audit trail.
```bash
# Install
pip install kameleondb

# Set database location (SQLite - no server needed)
export KAMELEONDB_URL="sqlite:///./kameleondb.db"

# Initialize
kameleondb admin init

# Done! Start using it.
```

Note: For production, use PostgreSQL instead of SQLite. Set `KAMELEONDB_URL="postgresql://user:pass@localhost/dbname"` and run `pip install kameleondb[postgresql]`.
To get the most out of KameleonDB as your persistent layer, consider noting your database location and preferred usage patterns in your memory files. This helps you leverage it consistently across sessions. Example notes you might keep:

- Database URL: sqlite:///path/to/your-memory.db
- Use for: contacts, tasks, knowledge bases, entity tracking
- Key commands: schema list, data insert, data list, query run
```bash
# Check what exists
kameleondb --json schema list
# Returns: {"entities": []}

# Create Contact tracking
kameleondb --json schema create Contact \
  --field "name:string:required" \
  --field "email:string:unique"

# Store someone you met
kameleondb --json data insert Contact '{"name":"Alice Johnson","email":"alice@acme.com"}'

# Later: found their LinkedIn!
kameleondb --json schema alter Contact --add "linkedin_url:string" \
  --reason "Found LinkedIn profiles for contacts"

# Update Alice's record
kameleondb --json data update Contact <id> '{"linkedin_url":"https://linkedin.com/in/alice"}'
```
```bash
# Store facts you learn
kameleondb --json schema create Fact \
  --field "content:string:required" \
  --field "source:string" \
  --field "confidence:float"

# Add facts
kameleondb --json data insert Fact '{"content":"Python 3.11 released Oct 2022","source":"python.org","confidence":1.0}'

# Search facts (get SQL context first)
kameleondb --json schema context --entity Fact
# Use context to generate: SELECT * FROM kdb_records WHERE data->>'content' LIKE '%Python%'

# Query
kameleondb --json query run "SELECT data->>'content', data->>'source' FROM kdb_records WHERE entity_id='...' LIMIT 10"
```
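The search step above leans on SQL JSON extraction (`data->>'content'`). As a minimal illustration of that pattern, here is a Python/sqlite3 sketch against a stand-in table; the `kdb_records` layout (an `entity_id` column plus a JSON `data` blob) is inferred from the queries above, not taken from KameleonDB's actual storage:

```python
import json
import sqlite3

# In-memory stand-in for a kdb_records-style table: one JSON blob per row.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kdb_records (id INTEGER PRIMARY KEY, entity_id TEXT, data TEXT)"
)
facts = [
    {"content": "Python 3.11 released Oct 2022", "source": "python.org", "confidence": 1.0},
    {"content": "SQLite ships with Python", "source": "docs.python.org", "confidence": 1.0},
]
conn.executemany(
    "INSERT INTO kdb_records (entity_id, data) VALUES ('Fact', ?)",
    [(json.dumps(f),) for f in facts],
)

# json_extract pulls a field out of the JSON blob, mirroring data->>'content'.
rows = conn.execute(
    "SELECT json_extract(data, '$.content'), json_extract(data, '$.source') "
    "FROM kdb_records WHERE entity_id = 'Fact' "
    "AND json_extract(data, '$.content') LIKE '%Python%'"
).fetchall()
print(rows)
```

The same idea carries over to PostgreSQL, where `data->>'content'` is the native operator.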
```bash
# Create task tracker
kameleondb --json schema create Task \
  --field "title:string:required" \
  --field "status:string" \
  --field "priority:string"

# Add tasks
kameleondb --json data insert Task '{"title":"Research OpenClaw","status":"todo","priority":"high"}'

# Mark complete
kameleondb --json data update Task <id> '{"status":"done"}'

# Get all incomplete
kameleondb --json query run \
  "SELECT data->>'title', data->>'priority' FROM kdb_records WHERE entity_id='...' AND data->>'status' != 'done'"
```
```bash
# Store API responses
kameleondb --json schema create GitHubRepo \
  --field "name:string:required" \
  --field "stars:int" \
  --field "url:string"

# Batch import from JSONL
kameleondb --json data insert GitHubRepo --from-file repos.jsonl --batch

# Query top repos
kameleondb --json query run \
  "SELECT data->>'name', (data->>'stars')::int as stars FROM kdb_records WHERE entity_id='...' ORDER BY stars DESC LIMIT 10"
```
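The `--from-file repos.jsonl --batch` flag above takes JSONL input: one JSON object per line. A short Python sketch of staging API data in that format; the repo values here are made up for illustration:

```python
import json

# Hypothetical API results to stage for `data insert ... --from-file ... --batch`.
repos = [
    {"name": "alpha", "stars": 120, "url": "https://example.com/alpha"},
    {"name": "beta", "stars": 45, "url": "https://example.com/beta"},
]

# JSONL: serialize each record on its own line.
with open("repos.jsonl", "w", encoding="utf-8") as fh:
    for repo in repos:
        fh.write(json.dumps(repo) + "\n")

lines = open("repos.jsonl", encoding="utf-8").read().splitlines()
print(len(lines))
```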
Don't know all fields upfront? No problem. Add, drop, or rename them when you discover patterns:

```bash
# Add a new field
kameleondb --json schema alter Contact --add "twitter_handle:string" \
  --reason "Found Twitter profiles for 30% of contacts"

# Drop obsolete fields
kameleondb --json schema alter Contact --drop "legacy_field" --force

# Do multiple operations at once
kameleondb --json schema alter Contact --add "linkedin:string" --drop "old_social" --reason "Consolidating social fields"
```

Old records won't break - they just show null for new fields, and dropped fields are soft-deleted.
Queries tell you when they're slow and how to fix it:

```json
{
  "rows": [...],
  "suggestions": [{
    "priority": "high",
    "reason": "Query took 450ms with 5000 records",
    "action": "kameleondb storage materialize Contact"
  }]
}
```

Run that command and future queries will be faster.
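An agent consuming `--json` output can act on these hints mechanically. A hedged Python sketch that pulls high-priority actions out of a payload shaped like the example above (the field names come from that example, not from a formal spec):

```python
import json

# Sample payload mirroring the suggestion structure shown in the doc.
payload = json.loads("""
{
  "rows": [],
  "suggestions": [
    {"priority": "high",
     "reason": "Query took 450ms with 5000 records",
     "action": "kameleondb storage materialize Contact"}
  ]
}
""")

# Collect the follow-up commands worth running next.
actions = [
    s["action"]
    for s in payload.get("suggestions", [])
    if s.get("priority") == "high"
]
print(actions)
```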
Every schema change records why you made it:

```bash
kameleondb --json admin changelog
# See: who added what field, when, and why
```
Get schema context, generate SQL, execute it:

```bash
# Get schema to understand structure
kameleondb --json schema context --entity Contact

# Generate SQL based on structure
# Execute with built-in validation
kameleondb --json query run "SELECT ... FROM ..."
```
Add `--json` to any command for machine-readable output.

- Schema: list, create, describe, alter, drop, info, context
- Data: insert, get, update, delete, list, link, unlink, get-linked, info
- Query: run
- Storage: status, materialize, dematerialize
- Admin: init, info, changelog
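Because every command can emit JSON, a thin wrapper makes the CLI scriptable. A Python sketch, assuming `kameleondb` is on PATH; the sample output string mirrors the `schema list` example earlier:

```python
import json
import subprocess

def kdb(*args):
    """Run a kameleondb subcommand with --json and return the parsed result.

    Assumes the `kameleondb` binary is installed and on PATH; raises on failure.
    """
    out = subprocess.run(
        ["kameleondb", "--json", *args],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

# The parsing step works the same on captured output, e.g. from `schema list`:
sample = '{"entities": []}'
entities = json.loads(sample)["entities"]
print(entities)
```

For example, `kdb("schema", "list")` would return the same structure as the sample above on a fresh database.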
Instead of separate add-field and drop-field commands, use the unified alter:

```bash
# Add a field
kameleondb --json schema alter Contact --add "phone:string:indexed"

# Drop a field
kameleondb --json schema alter Contact --drop legacy_field --force

# Rename a field
kameleondb --json schema alter Contact --rename "old_name:new_name"

# Multiple operations at once
kameleondb --json schema alter Contact --add "new:string" --drop old --reason "Cleanup"
```
For many-to-many relationships:

```bash
# Link a product to tags
kameleondb --json data link Product abc123 tags tag-1
kameleondb --json data link Product abc123 tags -t tag-1 -t tag-2 -t tag-3

# Unlink
kameleondb --json data unlink Product abc123 tags tag-1
kameleondb --json data unlink Product abc123 tags --all

# Get linked records
kameleondb --json data get-linked Product abc123 tags
```

Run `kameleondb --help` or `kameleondb <command> --help` for details.
```bash
# Start simple
kameleondb --json schema create Person --field "name:string:required"
kameleondb --json data insert Person '{"name":"Alice"}'

# Learn more over time
kameleondb --json schema alter Person --add "email:string"
kameleondb --json schema alter Person --add "company:string"
kameleondb --json schema alter Person --add "last_contacted:datetime"

# Update as you learn
kameleondb --json data update Person <id> '{"email":"alice@example.com","last_contacted":"2026-02-07"}'
```
```bash
# Create generic entity
kameleondb --json schema create ScrapedData --field "source:string" --field "raw:json"

# Store everything
kameleondb --json data insert ScrapedData '{"source":"website.com","raw":{"title":"...","data":{...}}}'

# Discover patterns, then structure it
kameleondb --json schema alter ScrapedData --add "title:string"
kameleondb --json schema alter ScrapedData --add "price:float"

# Migrate data progressively as you normalize it
```
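Progressive structuring means promoting fields out of the raw blob once patterns emerge. A Python sketch of that promotion step; the record shape is hypothetical, and missing fields stay `None`, mirroring how old records show null for newly added schema fields:

```python
# Hypothetical scraped rows stored under a generic "raw" field.
records = [
    {"source": "website.com", "raw": {"title": "Widget", "price": "19.99"}},
    {"source": "website.com", "raw": {"title": "Gadget"}},
]

# Promote discovered fields to the top level; keep the raw blob for later passes.
structured = [
    {
        "source": r["source"],
        "title": r["raw"].get("title"),
        "price": float(r["raw"]["price"]) if "price" in r["raw"] else None,
        "raw": r["raw"],
    }
    for r in records
]
print(structured)
```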
```bash
# Start minimal
kameleondb --json schema create Task --field "title:string:required"

# Add status tracking
kameleondb --json schema alter Task --add "status:string"

# Add priority later
kameleondb --json schema alter Task --add "priority:string"

# Add assignee when team grows
kameleondb --json schema alter Task --add "assigned_to:string"

# Add tags for categorization
kameleondb --json schema alter Task --add "tags:json"

# Schema grows with your needs - no migrations!
```
```bash
# Create related entities
kameleondb --json schema create Project --field "name:string"
kameleondb --json schema create Task \
  --field "title:string" \
  --field "project_id:string"

# Get schema context for SQL generation
kameleondb --json schema context --entity Project --entity Task
# Returns: detailed schema with SQL patterns for JOIN

# Generate and execute JOIN query
kameleondb --json query run \
  "SELECT p.data->>'name' as project, t.data->>'title' as task FROM kdb_records p JOIN kdb_records t ON t.data->>'project_id' = p.id::text WHERE p.entity_id='...' AND t.entity_id='...'"
```
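The JOIN above threads a JSON foreign key through to the parent's row id. The same pattern can be sketched in plain Python/sqlite3, with `json_extract` standing in for `->>`; the two-row table here is a stand-in, not KameleonDB's storage:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kdb_records (id INTEGER PRIMARY KEY, entity_id TEXT, data TEXT)"
)
conn.execute(
    "INSERT INTO kdb_records (id, entity_id, data) VALUES (1, 'Project', ?)",
    (json.dumps({"name": "Apollo"}),),
)
conn.execute(
    "INSERT INTO kdb_records (id, entity_id, data) VALUES (2, 'Task', ?)",
    (json.dumps({"title": "Design", "project_id": "1"}),),
)

# Join child to parent through the JSON foreign key, as in the query above.
rows = conn.execute(
    "SELECT json_extract(p.data, '$.name'), json_extract(t.data, '$.title') "
    "FROM kdb_records p JOIN kdb_records t "
    "ON json_extract(t.data, '$.project_id') = CAST(p.id AS TEXT) "
    "WHERE p.entity_id = 'Project' AND t.entity_id = 'Task'"
).fetchall()
print(rows)
```

Note the cast: the JSON field is text, so the integer row id must be cast to compare, just as `p.id::text` does in the PostgreSQL-flavored query above.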
```bash
# Install
pip install kameleondb

# Configure (SQLite for testing - no server needed)
export KAMELEONDB_URL="sqlite:///./kameleondb.db"

# Initialize
kameleondb admin init

# You're ready!
```
```bash
# List all entities
kameleondb --json schema list

# See entity details
kameleondb --json schema describe <entity-name>

# View changelog
kameleondb --json admin changelog
```
```bash
# Create entity
kameleondb --json schema create <Entity> --field "name:type"

# Add field
kameleondb --json schema alter <Entity> --add "field:type"

# Insert data
kameleondb --json data insert <Entity> '{"field":"value"}'

# Get by ID
kameleondb --json data get <Entity> <id>

# Update
kameleondb --json data update <Entity> <id> '{"field":"new_value"}'

# Query with SQL
kameleondb --json query run "SELECT ... FROM kdb_records WHERE ..."
```
- Common types: string, int, float, bool, datetime, json
- Modifiers: required, unique, indexed
- Examples: "email:string:unique", "score:int:indexed", "tags:json"
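A field spec packs name, type, and modifiers into one colon-separated string. A hypothetical Python parser, written only to make that grammar concrete (KameleonDB's own validation may differ):

```python
# Types and modifiers as listed in the doc above.
TYPES = {"string", "int", "float", "bool", "datetime", "json"}
MODIFIERS = {"required", "unique", "indexed"}

def parse_field(spec):
    """Split a name:type[:modifier...] spec into its parts.

    Illustrative only - the real CLI's parsing rules are not documented here.
    """
    name, type_, *mods = spec.split(":")
    if type_ not in TYPES:
        raise ValueError(f"unknown type: {type_}")
    unknown = set(mods) - MODIFIERS
    if unknown:
        raise ValueError(f"unknown modifiers: {unknown}")
    return {"name": name, "type": type_, "modifiers": mods}

print(parse_field("email:string:unique"))
```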
Try it:

```bash
kameleondb admin init
kameleondb --json schema create Test --field "note:string"
kameleondb --json data insert Test '{"note":"my first record"}'
```

Then:
- Real use case: Think about what you need to track (contacts, tasks, facts, etc.) and create an entity for it
- Evolve it: As you discover new fields, add them with schema alter
- Query it: Use schema context to understand structure, then query with SQL
- Optimize it: If queries slow down, follow the hints in query results
- GitHub: https://github.com/marcosnataqs/kameleondb
- Examples: See examples/workflow.md in this skill directory
- Design Philosophy: Why it's built for agents - FIRST-PRINCIPLES.md