Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
DuckDB CLI specialist for SQL analysis, data processing and file conversion. Use for SQL queries, CSV/Parquet/JSON analysis, database queries, or data conversion. Triggers on "duckdb", "sql", "query", "data analysis", "parquet", "convert data".
Rather than working out the install steps manually, hand the extracted package to your coding agent with a concrete install brief.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Helps with data analysis, SQL queries and file conversion via DuckDB CLI.
```shell
# CSV
duckdb -c "SELECT * FROM 'data.csv' LIMIT 10"

# Parquet
duckdb -c "SELECT * FROM 'data.parquet'"

# Multiple files with glob
duckdb -c "SELECT * FROM read_parquet('logs/*.parquet')"

# JSON
duckdb -c "SELECT * FROM read_json_auto('data.json')"
```
```shell
# Create/open a persistent database
duckdb my_database.duckdb

# Read-only mode
duckdb -readonly existing.duckdb
```
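A minimal sketch of a persistent-database round trip, assuming a local `duckdb` binary; the file and table names are illustrative:

```shell
# Create a table in a persistent database file (name is illustrative).
duckdb demo.duckdb -c "CREATE TABLE t AS SELECT 42 AS answer"

# A later invocation sees the data; -readonly guards against accidental writes.
duckdb -readonly demo.duckdb -c "SELECT answer FROM t"
```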
| Flag | Format |
|------|--------|
| `-csv` | Comma-separated |
| `-json` | JSON array |
| `-table` | ASCII table |
| `-markdown` | Markdown table |
| `-html` | HTML table |
| `-line` | One value per line |
| Argument | Description |
|----------|-------------|
| `-c COMMAND` | Run SQL and exit |
| `-f FILENAME` | Run script from file |
| `-init FILE` | Use alternative to `~/.duckdbrc` |
| `-readonly` | Open in read-only mode |
| `-echo` | Show commands before execution |
| `-bail` | Stop on first error |
| `-header` / `-noheader` | Show/hide column headers |
| `-nullvalue TEXT` | Text for NULL values |
| `-separator SEP` | Column separator |
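These flags compose; a small sketch combining output format, header suppression, and NULL text (the query is illustrative):

```shell
# CSV output, no header row, render NULL as "NA".
duckdb -csv -noheader -nullvalue "NA" -c "SELECT 1 AS a, NULL AS b"
```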
```shell
# CSV to Parquet
duckdb -c "COPY (SELECT * FROM 'input.csv') TO 'output.parquet' (FORMAT PARQUET)"

# Parquet to CSV
duckdb -c "COPY (SELECT * FROM 'input.parquet') TO 'output.csv' (HEADER, DELIMITER ',')"

# JSON to Parquet
duckdb -c "COPY (SELECT * FROM read_json_auto('input.json')) TO 'output.parquet' (FORMAT PARQUET)"

# Filter while converting
duckdb -c "COPY (SELECT * FROM 'data.csv' WHERE amount > 1000) TO 'filtered.parquet' (FORMAT PARQUET)"
```
| Command | Description |
|---------|-------------|
| `.tables [pattern]` | Show tables (with LIKE pattern) |
| `.schema [table]` | Show CREATE statements |
| `.databases` | Show attached databases |

| Command | Description |
|---------|-------------|
| `.mode FORMAT` | Change output format |
| `.output FILE` | Send output to file |
| `.once FILE` | Next output to file |
| `.headers on/off` | Show/hide column headers |
| `.separator COL ROW` | Set separators |

| Command | Description |
|---------|-------------|
| `.timer on/off` | Show execution time |
| `.echo on/off` | Show commands before execution |
| `.bail on/off` | Stop on error |
| `.read file.sql` | Run SQL from file |

| Command | Description |
|---------|-------------|
| `.edit` or `\e` | Open query in external editor |
| `.help [pattern]` | Show help |
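Dot commands are not limited to interactive use; they can be piped to the CLI together with SQL, for example:

```shell
# Switch to markdown output, then run a query (illustrative one-liner).
printf '.mode markdown\nSELECT 1 AS x;\n' | duckdb
```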
- `csv` - Comma-separated for spreadsheets
- `tabs` - Tab-separated
- `json` - JSON array
- `jsonlines` - Newline-delimited JSON (streaming)

- `duckbox` (default) - Pretty ASCII with Unicode box-drawing
- `table` - Simple ASCII table
- `markdown` - For documentation
- `html` - HTML table
- `latex` - For academic papers

- `insert TABLE` - SQL INSERT statements
- `column` - Columns with adjustable width
- `line` - One value per line
- `list` - Pipe-separated
- `trash` - Discard output
| Shortcut | Action |
|----------|--------|
| Home / End | Start/end of line |
| Ctrl+Left/Right | Jump word |
| Ctrl+A / Ctrl+E | Start/end of buffer |

| Shortcut | Action |
|----------|--------|
| Ctrl+P / Ctrl+N | Previous/next command |
| Ctrl+R | Search history |
| Alt+< / Alt+> | First/last in history |

| Shortcut | Action |
|----------|--------|
| Ctrl+W | Delete word backward |
| Alt+D | Delete word forward |
| Alt+U / Alt+L | Uppercase/lowercase word |
| Ctrl+K | Delete to end of line |

| Shortcut | Action |
|----------|--------|
| Tab | Autocomplete / next suggestion |
| Shift+Tab | Previous suggestion |
| Esc+Esc | Undo autocomplete |
Context-aware autocomplete, activated with Tab, completes:
- Keywords - SQL commands
- Table names - Database objects
- Column names - Fields and functions
- File names - Path completion
```sql
-- Load a CSV into a table
CREATE TABLE sales AS SELECT * FROM 'sales_2024.csv';

-- Append more data from another file
INSERT INTO sales SELECT * FROM 'sales_2025.csv';

-- Export the table to Parquet
COPY sales TO 'backup.parquet' (FORMAT PARQUET);
```
```sql
-- Basic aggregates
SELECT COUNT(*) AS count, AVG(amount) AS average, SUM(amount) AS total
FROM 'transactions.csv';

-- Group and sort
SELECT category, COUNT(*) AS count, SUM(amount) AS total
FROM 'data.csv' GROUP BY category ORDER BY total DESC;

-- Join a CSV with a Parquet file
SELECT a.*, b.name
FROM 'orders.csv' a JOIN 'customers.parquet' b ON a.customer_id = b.id;

-- Inspect the inferred schema
DESCRIBE SELECT * FROM 'data.csv';
```
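For a quick per-column profile, DuckDB also supports `SUMMARIZE`; `data.csv` is a placeholder path:

```shell
# Per-column statistics: type, min/max, approximate distinct count, null percentage, ...
duckdb -c "SUMMARIZE SELECT * FROM 'data.csv'"
```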
```shell
# Read from stdin
cat data.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"

# Pipe to another command
duckdb -csv -c "SELECT * FROM 'data.parquet'" | head -20

# Write to stdout
duckdb -c "COPY (SELECT * FROM 'data.csv') TO '/dev/stdout' (FORMAT CSV)"
```
Save common settings in `~/.duckdbrc`:

```
.timer on
.mode duckbox
.maxrows 50
.highlight on
```
```
.keyword green
.constant yellow
.comment brightblack
.error red
```
Open complex queries in your editor with `.edit`. The editor is chosen from: DUCKDB_EDITOR → EDITOR → VISUAL → vi
Secure mode restricts file access. When enabled:
- No external file access
- Disables `.read`, `.output`, `.import`, `.sh`, etc.
- Cannot be disabled in the same session
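Assuming recent CLI builds, this mode is switched on with the `-safe` flag (the flag name is worth verifying against your installed version):

```shell
# -safe restricts file system access for the whole session (assumed flag name).
duckdb -safe -c "SELECT 1 AS ok"
```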
- Use LIMIT on large files for a quick preview
- Parquet is faster than CSV for repeated queries
- `read_csv_auto` and `read_json_auto` guess column types
- Arguments are processed in order (like the SQLite CLI)
- WSL2 may show incorrect `memory_limit` values on some Ubuntu versions