Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Manage TimeCamp time tracking by starting/stopping timers, adding, updating, removing entries, and listing tasks or entries by date range.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps by hand.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Two tools: CLI for quick personal actions (timer, entries CRUD) and Data Pipeline for analytics/reports.
Before using either tool, ask the user where the repos should live (default: ~/utils, but any location is valid). If the repos are missing from the chosen location, ask for confirmation before cloning. Example flow and commands:

```bash
# Ask first:
# "I don't see TimeCamp repos locally. Clone to ~/utils, or use a different location?"
REPOS_DIR=~/utils  # replace if the user picked a different path
mkdir -p "$REPOS_DIR"
if [ ! -d "$REPOS_DIR/timecamp-cli/.git" ]; then
  git clone https://github.com/timecamp-org/timecamp-cli.git "$REPOS_DIR/timecamp-cli"
fi
if [ ! -d "$REPOS_DIR/good-enough-timecamp-data-pipeline/.git" ]; then
  git clone https://github.com/timecamp-org/good-enough-timecamp-data-pipeline.git "$REPOS_DIR/good-enough-timecamp-data-pipeline"
fi
```
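The CLI commands below assume the `timecamp` binary is on your PATH via npm link. A minimal sketch of that install, assuming timecamp-cli is a standard npm package; the repo's own README is authoritative:

```bash
# Sketch: make the `timecamp` command available globally.
# Assumes a standard npm package layout; see the repo README for specifics.
cd "$REPOS_DIR/timecamp-cli"
npm install   # install dependencies
npm link      # symlink the CLI onto your PATH
```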
CLI at ~/utils/timecamp-cli, installed globally via npm link.

| Intent | Command |
| --- | --- |
| Current timer status | `timecamp status` |
| Start timer | `timecamp start --task "Project A" --note "description"` |
| Stop timer | `timecamp stop` |
| Today's entries | `timecamp entries` |
| Entries by date | `timecamp entries --date 2026-02-04` |
| Entries date range | `timecamp entries --from 2026-02-01 --to 2026-02-04` |
| All users entries | `timecamp entries --from 2026-02-01 --to 2026-02-04 --all-users` |
| Add entry | `timecamp add-entry --date 2026-02-04 --start 09:00 --end 10:30 --duration 5400 --task "Project A" --note "description"` |
| Update entry | `timecamp update-entry --id 101234 --note "Updated" --duration 3600` |
| Remove entry | `timecamp remove-entry --id 101234` |
| List tasks | `timecamp tasks` |
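A typical session, composed only of commands from the table above (the task name and note are placeholders):

```bash
timecamp status                                      # see what is currently running
timecamp start --task "Project A" --note "standup"   # start a timer
# ... work ...
timecamp stop                                        # stop the timer
timecamp entries                                     # confirm today's entry landed
```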
Python pipeline at ~/utils/good-enough-timecamp-data-pipeline. Use this for all analytics, reports, and bulk data fetching.
```bash
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from YYYY-MM-DD --to YYYY-MM-DD \
  --datasets DATASETS \
  --format jsonl \
  --output ~/data/timecamp-data-pipeline
```
| Dataset | Description |
| --- | --- |
| entries | Time entries with project/task details |
| tasks | Projects & tasks hierarchy with breadcrumb paths |
| computer_activities | Desktop app tracking data |
| users | User details with group info and enabled status |
| application_names | Application lookup table (ID → name, category) |
Files land in ~/data/timecamp-data-pipeline/timecamp/*.jsonl.
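A quick sanity check that a fetch landed where expected, using the documented output path:

```bash
# List generated dataset files and their line (row) counts
ls -lh ~/data/timecamp-data-pipeline/timecamp/*.jsonl
wc -l ~/data/timecamp-data-pipeline/timecamp/*.jsonl
```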
```bash
# Entries for a date range (plus users and tasks for joins)
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-02-11 --to 2026-02-14 \
  --datasets entries,users,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline

# Computer activity data with application lookups
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names \
  --format jsonl --output ~/data/timecamp-data-pipeline

# Everything at once
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names,entries,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline
```
Query the persistent data store directly.

```bash
DUCKDB=~/.duckdb/cli/latest/duckdb
DATA=~/data/timecamp-data-pipeline/timecamp

# Hours per person
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name ORDER BY hours DESC
"

# Hours per person per day
$DUCKDB -c "
SELECT user_name, date, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name, date ORDER BY user_name, date
"

# Top applications by time (join activities with app names)
$DUCKDB -c "
SELECT COALESCE(an.full_name, an.application_name, an.app_name, 'Unknown') AS app,
       round(sum(ca.time_span)/3600.0, 2) AS hours
FROM read_json_auto('$DATA/computer_activities*.jsonl') ca
LEFT JOIN read_json_auto('$DATA/application_names*.jsonl') an
  ON ca.application_id = an.application_id
GROUP BY 1 ORDER BY hours DESC LIMIT 20
"

# People who logged < 30h in a given week
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
WHERE date BETWEEN '2026-02-03' AND '2026-02-07'
GROUP BY user_name
HAVING sum(TRY_CAST(duration AS DOUBLE))/3600.0 < 30
ORDER BY hours
"
```
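The same pattern extends to exports. A sketch using DuckDB's COPY, reusing $DUCKDB and $DATA from above (the output filename is arbitrary):

```bash
# Sketch: export the hours-per-person summary to CSV for sharing.
$DUCKDB -c "
COPY (
  SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
  FROM read_json_auto('$DATA/entries*.jsonl')
  GROUP BY user_name ORDER BY hours DESC
) TO 'hours_per_person.csv' (HEADER)
"
```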
Check the existing data range with DuckDB first: if data is missing, fetch it with the pipeline; if it is already there, use it.

Query with DuckDB:

```bash
$DUCKDB -c "SELECT ... FROM read_json_auto('$DATA/entries*.jsonl') ..."
```
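A minimal sketch of that coverage check, assuming the same `date` column used in the queries above:

```bash
# What date range is already on disk?
$DUCKDB -c "
SELECT min(date) AS first_day, max(date) AS last_day, count(*) AS rows
FROM read_json_auto('$DATA/entries*.jsonl')
"
```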
- duration (entries) is in seconds (3600 = 1h)
- time_span (activities) is also in seconds
- applications_cache.json in the pipeline dir caches app name lookups
- For JSONL output, the DuckDB glob *.jsonl catches all files for all datasets
- Confirm before adding, updating, or removing entries
- Show the command before executing modifications
- When stopping a timer, show what was running first
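One way a session can honor these rules, as an illustrative shell sketch rather than a feature of the CLI itself:

```bash
# Sketch: preview a destructive command and require explicit confirmation.
CMD='timecamp remove-entry --id 101234'
echo "About to run: $CMD"
read -r -p "Proceed? [y/N] " answer
if [ "$answer" = "y" ]; then
  $CMD
else
  echo "Aborted."
fi
```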