โ† All skills
Tencent SkillHub ยท Productivity

TimeCamp

Manage TimeCamp time tracking by starting/stopping timers, adding, updating, removing entries, and listing tasks or entries by date range.



Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: SKILL.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.1.0

Documentation

Primary doc: SKILL.md (12 sections)

TimeCamp Skill

Two tools: CLI for quick personal actions (timer, entries CRUD) and Data Pipeline for analytics/reports.

Bootstrap (clone if missing)

Before using either tool, ask the user where the repos should live (default: ~/utils, but any location is valid). If the repos are missing from the chosen location, ask for confirmation before cloning. Example flow and commands:

```shell
# Ask first:
# "I don't see the TimeCamp repos locally. Clone to ~/utils, or use a different location?"
REPOS_DIR=~/utils   # replace if the user picked a different path
mkdir -p "$REPOS_DIR"
if [ ! -d "$REPOS_DIR/timecamp-cli/.git" ]; then
  git clone https://github.com/timecamp-org/timecamp-cli.git "$REPOS_DIR/timecamp-cli"
fi
if [ ! -d "$REPOS_DIR/good-enough-timecamp-data-pipeline/.git" ]; then
  git clone https://github.com/timecamp-org/good-enough-timecamp-data-pipeline.git "$REPOS_DIR/good-enough-timecamp-data-pipeline"
fi
```

Tool 1: TimeCamp CLI (personal actions)

CLI at ~/utils/timecamp-cli, installed globally via npm link.

| Intent | Command |
| --- | --- |
| Current timer status | `timecamp status` |
| Start timer | `timecamp start --task "Project A" --note "description"` |
| Stop timer | `timecamp stop` |
| Today's entries | `timecamp entries` |
| Entries by date | `timecamp entries --date 2026-02-04` |
| Entries date range | `timecamp entries --from 2026-02-01 --to 2026-02-04` |
| All users' entries | `timecamp entries --from 2026-02-01 --to 2026-02-04 --all-users` |
| Add entry | `timecamp add-entry --date 2026-02-04 --start 09:00 --end 10:30 --duration 5400 --task "Project A" --note "description"` |
| Update entry | `timecamp update-entry --id 101234 --note "Updated" --duration 3600` |
| Remove entry | `timecamp remove-entry --id 101234` |
| List tasks | `timecamp tasks` |
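A typical flow chains a few of the commands above. This is a hypothetical session sketch (the task and note values are placeholders); the guard makes it a no-op if the CLI has not been linked yet:

```shell
# Check, start, stop, review. Skips gracefully when the CLI is absent.
if command -v timecamp >/dev/null 2>&1; then
  TIMECAMP_OK=1
  timecamp status                                        # anything already running?
  timecamp start --task "Project A" --note "spec review"
  timecamp stop
  timecamp entries                                       # what landed today
else
  TIMECAMP_OK=0
  echo "timecamp not on PATH; run 'npm link' in the timecamp-cli repo first"
fi
```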

Tool 2: Data Pipeline (analytics & reports)

Python pipeline at ~/utils/good-enough-timecamp-data-pipeline. Use this for all analytics, reports, and bulk data fetching.

Run command

```shell
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from YYYY-MM-DD --to YYYY-MM-DD \
  --datasets DATASETS \
  --format jsonl \
  --output ~/data/timecamp-data-pipeline
```
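For recurring reports you can derive the date window instead of hard-coding it. A minimal sketch using GNU date syntax (BSD/macOS date uses `-v-7d` instead of `-d '7 days ago'`):

```shell
# Rolling 7-day window for the pipeline's --from/--to flags (GNU date).
FROM=$(date -d '7 days ago' +%Y-%m-%d)
TO=$(date +%Y-%m-%d)
echo "--from $FROM --to $TO"
```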

Available datasets

| Dataset | Description |
| --- | --- |
| entries | Time entries with project/task details |
| tasks | Projects & tasks hierarchy with breadcrumb paths |
| computer_activities | Desktop app tracking data |
| users | User details with group info and enabled status |
| application_names | Application lookup table (ID → name, category) |

Output structure

Files land in ~/data/timecamp-data-pipeline/timecamp/*.jsonl.
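A quick way to see what has already been fetched (same output path as configured above; each JSONL line is one record, so `wc -l` doubles as a row count):

```shell
DATA=~/data/timecamp-data-pipeline/timecamp
# Row count per dataset file, or a hint if nothing has been fetched yet.
if ls "$DATA"/*.jsonl >/dev/null 2>&1; then
  wc -l "$DATA"/*.jsonl
else
  echo "No datasets fetched yet under $DATA"
fi
```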

Examples

```shell
# Entries, users, and tasks for a four-day window
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-02-11 --to 2026-02-14 \
  --datasets entries,users,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline

# Desktop activity plus the application lookup table for six weeks
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names \
  --format jsonl --output ~/data/timecamp-data-pipeline

# Everything at once
cd ~/utils/good-enough-timecamp-data-pipeline && \
  uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names,entries,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline
```

Analytics with DuckDB

Query the persistent data store directly.

```shell
DUCKDB=~/.duckdb/cli/latest/duckdb
DATA=~/data/timecamp-data-pipeline/timecamp

# Hours per person
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) as hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name ORDER BY hours DESC"

# Hours per person per day
$DUCKDB -c "
SELECT user_name, date, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) as hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name, date ORDER BY user_name, date"

# Top applications by time (join activities with app names)
$DUCKDB -c "
SELECT COALESCE(an.full_name, an.application_name, an.app_name, 'Unknown') as app,
       round(sum(ca.time_span)/3600.0, 2) as hours
FROM read_json_auto('$DATA/computer_activities*.jsonl') ca
LEFT JOIN read_json_auto('$DATA/application_names*.jsonl') an
  ON ca.application_id = an.application_id
GROUP BY 1 ORDER BY hours DESC LIMIT 20"

# People who logged < 30h in a given week
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) as hours
FROM read_json_auto('$DATA/entries*.jsonl')
WHERE date BETWEEN '2026-02-03' AND '2026-02-07'
GROUP BY user_name
HAVING sum(TRY_CAST(duration AS DOUBLE))/3600.0 < 30
ORDER BY hours"
```
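When DuckDB is not installed, a rough per-user total can be scraped with awk alone. This is a fragile sketch, not a replacement for the queries above: it assumes one JSON object per line with plain `"user_name"` and `"duration"` fields, demonstrated here on inline sample data (the names and values are made up):

```shell
# Sample entries in the pipeline's JSONL shape (hypothetical values).
SAMPLE='{"user_name":"alice","duration":"3600"}
{"user_name":"alice","duration":"1800"}
{"user_name":"bob","duration":"7200"}'

# Sum hours per user with awk; no real JSON parsing, just field scraping.
HOURS=$(printf '%s\n' "$SAMPLE" | awk '
  match($0, /"user_name":"[^"]*"/) {
    u = substr($0, RSTART + 13, RLENGTH - 14)        # strip key and quotes
  }
  match($0, /"duration":"?[0-9.]+/) {
    v = substr($0, RSTART, RLENGTH)
    gsub(/[^0-9.]/, "", v)                            # keep digits only
    s[u] += v + 0
  }
  END { for (u in s) printf "%s %.1f\n", u, s[u] / 3600 }')
echo "$HOURS"   # one "name hours" line per user; order may vary
```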

Pattern

  1. Check the existing data range with DuckDB.
  2. If data for the requested range is missing, fetch it with the pipeline.
  3. If it is already there, query it directly:

```shell
$DUCKDB -c "SELECT ... FROM read_json_auto('$DATA/entries*.jsonl') ..."
```
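The check-then-fetch decision can be sketched as a small guard. This assumes the DuckDB binary and data paths from the earlier section and a hypothetical `NEED_FROM` report boundary; the comparison relies on bash's lexicographic `[[ > ]]`, which works because the dates are ISO-formatted:

```shell
DUCKDB=~/.duckdb/cli/latest/duckdb
DATA=~/data/timecamp-data-pipeline/timecamp
NEED_FROM=2026-02-01   # earliest date the report needs (example value)

if [ -x "$DUCKDB" ] && ls "$DATA"/entries*.jsonl >/dev/null 2>&1; then
  # What range is already on disk?
  HAVE_FROM=$("$DUCKDB" -csv -noheader -c \
    "SELECT min(date) FROM read_json_auto('$DATA/entries*.jsonl')")
  if [[ "$HAVE_FROM" > "$NEED_FROM" ]]; then
    echo "Local data starts at $HAVE_FROM; fetch back to $NEED_FROM first"
  else
    echo "Local data covers the range; query it directly"
  fi
else
  echo "No DuckDB or no local entries; run the pipeline fetch for $NEED_FROM onward"
fi
```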

Important Notes

  • duration (entries) is in seconds (3600 = 1h).
  • time_span (activities) is also in seconds.
  • applications_cache.json in the pipeline dir caches app name lookups.
  • For JSONL output, the DuckDB glob *.jsonl catches all files for all datasets.
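Since both fields are raw seconds, plain shell integer arithmetic is enough to render them for display:

```shell
# 5400 s of tracked time rendered as hours and minutes.
secs=5400
HUMAN=$(printf '%dh %02dm' "$((secs / 3600))" "$(( (secs % 3600) / 60 ))")
echo "$HUMAN"   # 1h 30m
```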

Safety

  • Confirm before adding, updating, or removing entries.
  • Show the command before executing modifications.
  • When stopping a timer, show what was running first.
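One way to honor the confirm-before-modify rule in scripted use is a small guard that prints the exact command and only runs it on an explicit "y". This is a hypothetical helper, not part of the CLI:

```shell
confirm_run() {
  # Show the exact command, then run it only on an explicit "y".
  printf 'About to run: %s\nProceed? [y/N] ' "$*"
  read -r answer || answer=""
  if [ "$answer" = "y" ]; then
    "$@"
  else
    echo "Aborted."
  fi
}

# Example (only executes if you answer y and the CLI is installed):
# confirm_run timecamp remove-entry --id 101234
```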

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
1 Docs
  • SKILL.md Primary doc