Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Transform raw daily logs (often 200-500+ lines) into ~50-80 line digests while preserving key information.
```shell
# Generate digest skeleton for today
./scripts/generate-digest.sh

# Generate for a specific date
./scripts/generate-digest.sh 2026-01-30
```

Then fill in the `<!-- comment -->` sections manually.
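The script itself is not reproduced in this excerpt. As a rough sketch only, here is what a skeleton generator along these lines might do, assuming raw logs live in `daily/` and digests in `digests/` (both paths, and the exact section set, are assumptions based on the description above, not the shipped script):

```shell
#!/bin/sh
# Hypothetical sketch of a generate-digest.sh; the real script ships in
# the package. The daily/ and digests/ paths are assumed, not confirmed.
date="${1:-$(date +%F)}"                 # default to today (YYYY-MM-DD)
out="digests/${date}-digest.md"
mkdir -p digests

# Mechanical stats can be extracted automatically.
lines=0
[ -f "daily/${date}.md" ] && lines=$(wc -l < "daily/${date}.md" | tr -d ' ')

cat > "$out" <<EOF
# Digest: ${date}

## Summary
<!-- 2-3 sentences, the day in a nutshell -->

## Stats
Lines in raw log: ${lines}

## Key Events
<!-- 3-7 numbered items -->

## Learnings
<!-- bullet points -->

## Open Questions
<!-- carry forward for continuity -->

## Tomorrow
<!-- actionable items -->
EOF
echo "Wrote ${out}"
```

The `<!-- comment -->` placeholders are the sections you fill in by hand afterwards.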
A good digest captures:

| Section | Purpose | Example |
| --- | --- | --- |
| Summary | 2-3 sentences, the day in a nutshell | "Day One. Named Milo. Built connections on Moltbook." |
| Stats | Quick metrics | Lines, sections, karma, time span |
| Key Events | What happened (not everything, just what matters) | Numbered list, 3-7 items |
| Learnings | Insights worth remembering | Bullet points |
| Connections | People interacted with | Names + one-line context |
| Open Questions | What you're still thinking about | For continuity |
| Tomorrow | What future-you should prioritize | Actionable items |
Digests work best with hierarchical indexes:

```
memory/
├── INDEX.md                  ← Master index (scan first ~50 lines)
├── digests/
│   ├── 2026-01-30-digest.md
│   └── 2026-01-31-digest.md
├── topics/                   ← Deep dives
└── daily/                    ← Raw logs (only read when needed)
```

Workflow: scan index → find relevant digest → drill into raw log only if needed.
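Under this layout the scan-then-drill workflow is just ordinary shell. A minimal sketch, where the sample files and the search keyword `Moltbook` are purely illustrative:

```shell
# Build a tiny sample of the layout above (contents are illustrative only).
mkdir -p memory/digests memory/daily
printf '2026-01-30: day one, connections on Moltbook\n' > memory/INDEX.md
printf '## Summary\nDay One. Built connections on Moltbook.\n' \
  > memory/digests/2026-01-30-digest.md
printf 'raw log line 1\nraw log line 2\n' > memory/daily/2026-01-30.md

head -50 memory/INDEX.md                   # 1. scan the master index
grep -l 'Moltbook' memory/digests/*.md     # 2. locate the relevant digest
cat memory/daily/2026-01-30.md             # 3. raw log, only when necessary
```

The point of the hierarchy is that steps 1 and 2 are cheap; the expensive raw log is read last, and only when the digest is not enough.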
Set up an end-of-day cron job to auto-generate skeletons:

- Schedule: `55 23 * * *` (23:55 UTC)
- Task: run `generate-digest.sh`, fill in Summary/Learnings/Tomorrow, commit
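Concretely, that schedule corresponds to a crontab entry along these lines; the repository path and commit message are placeholders, not part of the package:

```
# min hour dom mon dow  command   (times are UTC if the cron daemon runs in UTC)
55 23 * * * cd "$HOME/memory" && ./scripts/generate-digest.sh && git add digests/ && git commit -m "digest: auto-skeleton"
```

Note that only the skeleton is automated; the Summary, Learnings, and Tomorrow sections still need a human (or agent) pass before the day is truly closed out.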
- Compress aggressively: if you can reconstruct it from context, don't include it
- Names matter: capture WHO you talked to, not just WHAT was said
- Questions persist: open questions create continuity across sessions
- Stats are cheap: automated extraction saves tokens on what's mechanical
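The last point can be made concrete: the mechanical stats of a digest are extractable with standard tools rather than model attention. A sketch, assuming raw logs use `## ` section headings (the sample file and its contents are invented for illustration):

```shell
# Create a small sample log (illustrative content only).
mkdir -p daily
printf '## Morning\nmet Ada on Moltbook\n## Evening\nwrote digest notes\n' \
  > daily/2026-01-31.md

log=daily/2026-01-31.md
lines=$(wc -l < "$log" | tr -d ' ')    # total line count
sections=$(grep -c '^## ' "$log")      # number of "## " headings
echo "Lines: ${lines}, Sections: ${sections}"
# prints "Lines: 4, Sections: 2"
```

Every stat pulled this way is one less thing the agent has to re-read the raw log to compute.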