
Lite SQLite

Fast lightweight local SQLite database for OpenClaw agents with minimal RAM and storage usage. Use when creating or managing SQLite databases for storing age...

Skill · OpenClaw · ClawHub · Free
0 downloads · 0 stars · 0 installs · 0 score · High Signal · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform: OpenClaw
Install method: Manual import
Extraction: Extract archive
Prerequisites: OpenClaw
Primary doc: SKILL.md

Package facts

Download mode: Yavira redirect
Package format: ZIP package
Source platform: Tencent SkillHub
What's included: scripts/sqlite_cli.py, scripts/sqlite_connector.py, SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source: Tencent SkillHub
Verification: Indexed source record
Version: 1.0.0

Documentation

Primary doc: SKILL.md (33 sections), hosted on ClawHub.

Lite SQLite - Lightweight Local Database

Ultra-lightweight SQLite database management optimized for OpenClaw agents with minimal RAM (~2-5MB) and storage overhead.

Why SQLite?

✅ Zero setup: no server, no configuration, file-based
✅ Minimal RAM: 2-5MB typical usage
✅ Fast: millions of queries per second
✅ Portable: single .db file
✅ Reliable: ACID compliant, crash-proof
✅ Cross-platform: works everywhere Python works
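
The zero-setup point is easy to verify with nothing but the standard library (a minimal sketch; demo.db is an arbitrary filename):

import sqlite3

# One call creates the file if it does not exist -- no server, no config
conn = sqlite3.connect("demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hello"))
conn.commit()
print(conn.execute("SELECT value FROM kv WHERE key = ?", ("greeting",)).fetchone())
conn.close()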

Core Features

  • In-memory mode for temporary data (even faster)
  • WAL mode for concurrent access
  • Connection pooling
  • Automatic schema migration
  • Built-in backup/restore
  • Query optimization hints

Basic Database Operations

from sqlite_connector import SQLiteDB

# Create database (WAL mode enabled automatically)
db = SQLiteDB("agent_data.db")

# Create table
db.create_table("memos", {
    "id": "INTEGER PRIMARY KEY AUTOINCREMENT",
    "title": "TEXT NOT NULL",
    "content": "TEXT",
    "created_at": "TEXT DEFAULT CURRENT_TIMESTAMP",
    "tags": "TEXT"
})

# Insert data
db.insert("memos", {"title": "First memo", "content": "Hello world", "tags": "test"})

# Query data
results = db.query("SELECT * FROM memos WHERE tags = ?", ("test",))

# Update data
db.update("memos", "id = ?", {"content": "Updated content"}, (1,))

# Delete data
db.delete("memos", "id = ?", (1,))

# Close connection
db.close()

In-Memory Database (Fastest)

# Fastest mode: RAM only, no disk I/O
db = SQLiteDB(":memory:")

# Perfect for temporary operations
db.create_table("temp", {...})

# Data persists only for the session; use for caching, computations, temporary storage

Essential Settings

import sqlite3

conn = sqlite3.connect("agent_data.db")

# WAL mode (Write-Ahead Logging): 3-4x faster for mixed read/write workloads
conn.execute("PRAGMA journal_mode=WAL")

# Relaxed sync (NORMAL): faster writes, still crash-safe when paired with WAL
conn.execute("PRAGMA synchronous=NORMAL")

# Memory optimization
conn.execute("PRAGMA cache_size=-64000")  # 64MB cache (negative value means KiB)
conn.execute("PRAGMA page_size=4096")

# Keep temporary tables and indexes in RAM
conn.execute("PRAGMA temp_store=MEMORY")

Query Optimization

# Use indexes for frequent queries
db.create_index("memos", "tags")
db.create_index("memos", "created_at")

# Use prepared statements (automatic in the bundled wrapper)
db.query("SELECT * FROM memos WHERE id = ?", (memo_id,))

# Batch inserts for large datasets
db.batch_insert("memos", rows_data)

Agent Memo Schema (Memory Store)

db.create_table("agent_memos", { "id": "INTEGER PRIMARY KEY AUTOINCREMENT", "agent_id": "TEXT NOT NULL", # Which agent created it "key": "TEXT NOT NULL", # Lookup key "value": "TEXT", # Stored value "priority": "INTEGER DEFAULT 0", # For retrieval ordering "created_at": "TEXT DEFAULT CURRENT_TIMESTAMP", "expires_at": "TEXT" # Optional TTL }) # Create indexes db.create_index("agent_memos", "agent_id") db.create_index("agent_memos", "key") db.create_index("agent_memos", "expires_at")

Session Log Schema

db.create_table("session_logs", { "id": "INTEGER PRIMARY KEY AUTOINCREMENT", "session_id": "TEXT NOT NULL", "agent": "TEXT NOT NULL", "message": "TEXT", "metadata": "TEXT", # JSON "created_at": "TEXT DEFAULT CURRENT_TIMESTAMP" }) db.create_index("session_logs", "session_id") db.create_index("session_logs", "created_at")

Cache Schema (TTL-based)

db.create_table("cache", { "id": "INTEGER PRIMARY KEY AUTOINCREMENT", "key": "TEXT UNIQUE NOT NULL", "value": "BLOB", # Supports binary data "created_at": "TEXT DEFAULT CURRENT_TIMESTAMP", "expires_at": "TEXT NOT NULL" }) # Auto-cleanup expired entries db.query("DELETE FROM cache WHERE expires_at < ?", (datetime.now().isoformat(),)) db.create_index("cache", "key") db.create_index("cache", "expires_at")

Connection Pooling

from sqlite_connector import ConnectionPool

# Pool of connections for concurrent access
pool = ConnectionPool("agent_data.db", max_connections=5)

# Get a connection, use it, then return it to the pool
conn = pool.get_connection()
# ... use conn ...
pool.release_connection(conn)
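
ConnectionPool above is the bundled wrapper's class. If you only have the standard library, the same idea can be sketched with queue.Queue (illustrative only, not the bundled implementation; SimplePool is a made-up name):

import sqlite3
import queue

class SimplePool:
    """Minimal fixed-size pool; hypothetical stand-in for the bundled class."""
    def __init__(self, path, max_connections=5):
        self._pool = queue.Queue(maxsize=max_connections)
        for _ in range(max_connections):
            # check_same_thread=False lets a connection be handed between threads
            conn = sqlite3.connect(path, check_same_thread=False)
            conn.execute("PRAGMA journal_mode=WAL")  # safe for concurrent readers
            self._pool.put(conn)

    def get_connection(self, timeout=5.0):
        return self._pool.get(timeout=timeout)

    def release_connection(self, conn):
        self._pool.put(conn)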

Automatic Backup

# Backup database
db.backup("agent_data_backup.db")

# Automatic daily backup
db.auto_backup("backups/", "daily")
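
backup() and auto_backup() are wrapper conveniences; plain sqlite3 also exposes an online backup API directly on the connection, which works even while the source database is in use (a stdlib sketch):

import sqlite3

src = sqlite3.connect("agent_data.db")
dst = sqlite3.connect("agent_data_backup.db")
src.backup(dst)  # stdlib online backup: copies the whole database safely
dst.close()
src.close()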

Schema Migration

# Add column if it does not exist
db.add_column("memos", "updated_at", "TEXT DEFAULT CURRENT_TIMESTAMP")

# Migrate data
db.migrate("memos", {"old_column": "new_column"})
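
If you would rather not rely on the wrapper's add_column(), an idempotent column add can be done with PRAGMA table_info plus ALTER TABLE (a plain-sqlite3 sketch; note that SQLite forbids a CURRENT_TIMESTAMP default in ALTER TABLE ADD COLUMN, so the backfill is explicit):

import sqlite3

conn = sqlite3.connect("agent_data.db")
existing = {row[1] for row in conn.execute("PRAGMA table_info(memos)")}
if "updated_at" not in existing:
    # ADD COLUMN only touches the schema, so it is cheap even on big tables
    conn.execute("ALTER TABLE memos ADD COLUMN updated_at TEXT")
    conn.execute("UPDATE memos SET updated_at = datetime('now') WHERE updated_at IS NULL")
conn.commit()
conn.close()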

Typical Performance

Operation          | Rows   | Time (In-Memory) | Time (Disk)
Insert             | 10,000 | 0.05s            | 0.3s
Select (indexed)   | 10,000 | 0.001s           | 0.01s
Select (full scan) | 10,000 | 0.05s            | 0.5s
Update             | 1,000  | 0.01s            | 0.1s
Delete             | 1,000  | 0.01s            | 0.1s
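
These figures vary with hardware, pragmas, and row width, so treat them as orders of magnitude. A stdlib-only sketch for reproducing the insert and indexed-select rows on your own machine (the bench table and its values are invented for the measurement):

import sqlite3
import time

conn = sqlite3.connect(":memory:")  # swap in a file path to measure disk mode
conn.execute("CREATE TABLE bench (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"value-{i}") for i in range(10_000)]
start = time.perf_counter()
with conn:
    conn.executemany("INSERT INTO bench VALUES (?, ?)", rows)
print(f"insert 10k: {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
conn.execute("SELECT * FROM bench WHERE id = 5000").fetchone()  # indexed (PK) lookup
print(f"indexed select: {time.perf_counter() - start:.6f}s")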

Memory Usage

  • Base memory: 2-5MB
  • With 100K rows: ~10-15MB
  • With 1M rows: ~50-100MB
  • In-memory mode: roughly the data size plus overhead

1. Choose the Right Mode

# Use :memory: for temporary operations
temp_db = SQLiteDB(":memory:")

# Use a file-backed DB for persistent storage
persist_db = SQLiteDB("agent_storage.db")

2. Use Proper Indexes

# Always index columns used in WHERE clauses
db.create_index("table", "column_name")

# Index multiple columns for composite queries
db.create_index("table", "col1, col2")

3. Batch Operations

# Instead of individual inserts (slow):
for row in rows:
    db.insert("table", row)

# Use batch insert (fast):
db.batch_insert("table", rows)

4. Use TTL for Expiring Data

# Auto-cleanup old data
db.cleanup_expired("cache", "expires_at")
db.cleanup_old("logs", "created_at", days=7)
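
cleanup_expired() and cleanup_old() are wrapper helpers; with plain sqlite3 the same cleanup is two DELETE statements (a sketch assuming the ISO-8601 timestamp columns from the schemas above, and the session_logs table as the log store):

import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("agent_data.db")
now = datetime.now().isoformat()
cutoff = (datetime.now() - timedelta(days=7)).isoformat()
with conn:
    # Drop cache entries whose TTL has passed
    conn.execute("DELETE FROM cache WHERE expires_at < ?", (now,))
    # Drop logs older than seven days
    conn.execute("DELETE FROM session_logs WHERE created_at < ?", (cutoff,))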

5. Compact Database Periodically

# Reclaim space after many deletes; best run during downtime
db.vacuum()

DuckDB Alternative (Analytics)

For analytical queries (aggregations, joins on large datasets), consider DuckDB:

import duckdb

conn = duckdb.connect(":memory:")

# Faster than SQLite for complex analytics
conn.execute("""
    SELECT COUNT(*) AS rows, AVG(value) AS avg_value
    FROM large_table
""").fetchall()

When to use DuckDB:
  • Analytics on large datasets (>100M rows)
  • Complex aggregations and joins
  • Columnar data operations
  • Statistical analysis

When to use SQLite:
  • Transactional operations
  • Small to medium datasets (<100M rows)
  • Point queries and updates
  • General-purpose storage

1. Memo Storage

import json
from datetime import datetime, timedelta

def save_memo(db, agent_id, key, value, ttl_hours=24):
    expires_at = (datetime.now() + timedelta(hours=ttl_hours)).isoformat()
    db.insert("agent_memos", {
        "agent_id": agent_id,
        "key": key,
        "value": json.dumps(value),
        "expires_at": expires_at
    })

2. Session Persistence

import json

def save_session(db, session_id, agent, message, metadata=None):
    db.insert("session_logs", {
        "session_id": session_id,
        "agent": agent,
        "message": message,
        "metadata": json.dumps(metadata) if metadata else None
    })

3. Caching Layer

import json
from datetime import datetime, timedelta

def cache_get(db, key):
    # query_one is assumed to return the first column of the matching row
    if cached := db.query_one(
        "SELECT value FROM cache WHERE key = ? AND expires_at > ?",
        (key, datetime.now().isoformat())
    ):
        return json.loads(cached)
    return None

def cache_set(db, key, value, ttl_seconds=3600):
    expires_at = (datetime.now() + timedelta(seconds=ttl_seconds)).isoformat()
    db.insert_or_replace("cache", {
        "key": key,
        "value": json.dumps(value),
        "expires_at": expires_at
    })
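
A quick usage sketch, assuming db is an open SQLiteDB handle (the cache key and payload are invented):

cache_set(db, "weather:nyc", {"temp_c": 21}, ttl_seconds=600)
print(cache_get(db, "weather:nyc"))  # {'temp_c': 21} until the TTL lapses, then None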

Error Handling

import sqlite3

try:
    db.insert("metrics", {...})
except sqlite3.IntegrityError:
    # Duplicate key violation
    pass
except sqlite3.OperationalError:
    # Table doesn't exist or database is locked
    pass

Reduce Storage

Use appropriate data types:
  • INTEGER instead of TEXT for numbers
  • REAL instead of TEXT for floats
  • CHECK constraints for validation

Normalize data:
  • Store JSON as TEXT
  • Use TEXT for variable-length strings
  • Avoid storing redundant data

Vacuum regularly:
  db.vacuum()  # reclaims space after deletes

Use WAL instead of the rollback journal:
  conn.execute("PRAGMA journal_mode=WAL")
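
As a concrete illustration of the typing and validation tips above, a hypothetical metrics table might look like this (plain-sqlite3 sketch; the table and columns are made up):

import sqlite3

conn = sqlite3.connect("agent_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS metrics (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        count INTEGER NOT NULL DEFAULT 0,          -- INTEGER, not TEXT
        ratio REAL CHECK (ratio BETWEEN 0 AND 1),  -- REAL plus validation on write
        meta TEXT                                  -- JSON stored as TEXT
    )
""")
conn.commit()
conn.close()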

From JSON Files

import json

# Load JSON into SQLite
with open("data.json") as f:
    data = json.load(f)

db.create_table("json_data", {key: "TEXT" for key in data[0].keys()})
db.batch_insert("json_data", data)

From CSV Files

import sqlite3
import pandas as pd

conn = sqlite3.connect("agent_data.db")
df = pd.read_csv("data.csv")
df.to_sql("csv_data", conn, if_exists="replace", index=False)

Database Locked Error

# Use WAL mode for concurrent access
conn.execute("PRAGMA journal_mode=WAL")

# Or use a connection pool
pool = ConnectionPool("db.db", timeout=5.0)
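
Plain sqlite3 can also wait for locks instead of failing immediately: the connect() timeout parameter and the busy_timeout pragma both control how long a writer blocks before raising OperationalError:

import sqlite3

# Wait up to 5 seconds for a lock instead of raising immediately
conn = sqlite3.connect("agent_data.db", timeout=5.0)
conn.execute("PRAGMA busy_timeout = 5000")  # same setting, expressed as a pragma (ms)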

Slow Queries

# Check the query plan
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM ...").fetchall()

# Add indexes
db.create_index("table", "column")

# Refresh the planner's statistics
conn.execute("ANALYZE")

Large Database Size

# Check size info (each PRAGMA must be queried separately)
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
print(f"Size: {(page_count * page_size) / (1024 * 1024):.2f} MB")

# Vacuum to reclaim space
db.vacuum()

CLI Tool

The bundled sqlite_cli.py provides command-line access:

# Create database
python scripts/sqlite_cli.py create agent_data.db

# Add table
python scripts/sqlite_cli.py create-table agent_memos -c id:INTEGER:P -c title:TEXT -c content:TEXT

# Insert data
python scripts/sqlite_cli.py insert agent_memos '{"title": "Test", "content": "Hello"}'

# Query data
python scripts/sqlite_cli.py query "SELECT * FROM agent_memos"

# Optimize
python scripts/sqlite_cli.py optimize agent_data.db

Resources

  • SQLite documentation: https://www.sqlite.org/docs.html
  • Python sqlite3: https://docs.python.org/3/library/sqlite3.html
  • DuckDB: https://duckdb.org/docs/
  • SQLite query optimizer overview: https://www.sqlite.org/optoverview.html

Category context

Long-tail utilities that do not fit the current primary taxonomy cleanly.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
2 scripts, 1 doc
  • SKILL.md (primary doc)
  • scripts/sqlite_cli.py (script)
  • scripts/sqlite_connector.py (script)