Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Design schemas, write queries, and configure MongoDB for consistency and performance.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
- Install: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
- Upgrade: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
User needs MongoDB expertise — from schema design to production optimization. Agent handles document modeling, indexing strategies, aggregation pipelines, consistency patterns, and scaling.
| Topic | File |
| --- | --- |
| Schema design patterns | schema.md |
| Index strategies | indexes.md |
| Aggregation pipeline | aggregation.md |
| Production configuration | production.md |
Schema design
- Embed when data is queried together and doesn't grow unboundedly.
- Reference when data is large, accessed independently, or many-to-many.
- Denormalize for read performance and accept the update complexity: no JOINs means duplicate data.
- Design for your queries, not for normalized elegance.
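The embed-versus-reference choice above can be sketched with plain dicts in pymongo's document shape. The collection and field names here are illustrative assumptions, not part of the skill package.

```python
# Embedded: line items live inside the order. One read fetches everything;
# good when items are always queried with the order and the array stays small.
order_embedded = {
    "_id": 1001,
    "customer_id": 42,
    "items": [
        {"sku": "A-1", "qty": 2, "price": 9.99},
        {"sku": "B-7", "qty": 1, "price": 4.50},
    ],
}

# Referenced: items are separate documents pointing back at the order;
# better when items are large, unbounded, or accessed independently.
order_referenced = {"_id": 1001, "customer_id": 42}
order_items = [
    {"_id": 1, "order_id": 1001, "sku": "A-1", "qty": 2, "price": 9.99},
    {"_id": 2, "order_id": 1001, "sku": "B-7", "qty": 1, "price": 4.50},
]
```

With the referenced shape, fetching an order's items costs a second query (or a $lookup), which is the trade the bullets above describe.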
Document limits
- 16MB max per document; plan for this from day one and use GridFS for large files.
- Arrays that grow infinitely are a disaster; use the bucketing pattern instead.
- BSON overhead: field names are repeated per document, so short names save space at scale.
- Nested depth limit is 100 levels; rarely hit, but it exists.
Arrays
- Arrays over 1000 elements hurt performance, and pagination inside documents is hard.
- $push without $slice means unbounded growth; use $push: {$each: [...], $slice: -100}.
- Multikey indexes on arrays create one index entry per element, which can explode index size.
- A compound index cannot have multikey entries on more than one array field.
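The $push/$each/$slice idiom above, written as a pymongo-style update document. The `recent_events` field and the event shape are assumptions for illustration; only the operator structure comes from the text.

```python
# Cap an array at the newest 100 entries on every push: $each appends the
# new elements, then a negative $slice keeps only the last 100.
capped_push = {
    "$push": {
        "recent_events": {
            "$each": [{"type": "login", "at": "2024-01-01T00:00:00Z"}],
            "$slice": -100,  # retain the 100 most recent elements
        }
    }
}

# With pymongo this would run as (not executed here):
# db.users.update_one({"_id": user_id}, capped_push)
```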
$lookup
- $lookup performance degrades with collection size; no index use on the foreign collection until 5.0.
- One $lookup per pipeline stage; nested lookups get complex and slow.
- $lookup with a pipeline (5.0+) can filter before joining, a massive improvement.
- Consider: if you $lookup frequently, maybe embed instead.
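A sketch of the 5.0+ pipeline form of $lookup mentioned above, which filters the foreign collection before joining. Collection names, the `status` filter, and field names are assumptions.

```python
# Join each customer to only their open orders: the inner $match runs on the
# foreign collection first, so non-matching rows never enter the join.
lookup_stage = {
    "$lookup": {
        "from": "orders",
        "let": {"cust": "$_id"},  # bind the local _id for use inside the sub-pipeline
        "pipeline": [
            {"$match": {
                "$expr": {"$eq": ["$customer_id", "$$cust"]},
                "status": "open",  # filter before the join, not after
            }},
            {"$project": {"total": 1, "status": 1}},  # shrink joined docs
        ],
        "as": "open_orders",
    }
}
```

The classic pre-5.0 `localField`/`foreignField` form would pull every matching order and force filtering in a later stage.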
Indexes
- ESR rule: Equality fields first, Sort fields next, Range fields last.
- MongoDB doesn't do efficient index intersection; a single compound index is often better.
- Only one text index per collection, so plan carefully; use Atlas Search for complex text search.
- TTL index for auto-expiration: {createdAt: 1} with {expireAfterSeconds: 86400}.
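The ESR rule above can be made mechanical: given the query's equality, sort, and range fields, concatenate them in that order to get the compound-index key list. The field names and the helper itself are illustrative, not from the package.

```python
def esr_index(equality, sort, range_):
    """Order compound-index keys by the ESR rule:
    Equality fields first, Sort fields next, Range fields last.
    Each argument is a list of (field, direction) pairs."""
    return equality + sort + range_

# Query: {"status": "A", "created": {"$gt": cutoff}}, sorted by score desc.
keys = esr_index(
    equality=[("status", 1)],
    sort=[("score", -1)],
    range_=[("created", 1)],
)
# keys == [("status", 1), ("score", -1), ("created", 1)]
# With pymongo: db.coll.create_index(keys)
# TTL index from the text (expire 86400 s after createdAt):
# db.coll.create_index([("createdAt", 1)], expireAfterSeconds=86400)
```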
Consistency and transactions
- Default read/write concern is not fully consistent; use {w: "majority"} with readConcern "majority" for strong consistency.
- Multi-document transactions exist since 4.0, but they add latency and lock overhead; design to minimize them.
- Single-document operations are atomic; exploit this by embedding related data.
- retryWrites: true in the connection string handles transient failures automatically.
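The strong-consistency settings above can be expressed directly in the connection string. The host and database are placeholders; `w`, `readConcernLevel`, and `retryWrites` are standard MongoDB URI options.

```python
# Connection URI combining the settings from the text. Placeholder host/db.
uri = (
    "mongodb://db.example.com:27017/app"
    "?w=majority"                 # writes acknowledged by a majority of nodes
    "&readConcernLevel=majority"  # reads only see majority-committed data
    "&retryWrites=true"           # retry transient write failures once
)
# With pymongo: client = pymongo.MongoClient(uri)
```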
Read preference
- Secondaries can serve stale reads; replication lag can be seconds.
- nearest gives the lowest latency but may read stale data.
- Writes always go to the primary; read preference doesn't affect writes.
- To read your own writes, use primary or session-based causal consistency.
ObjectId
- Contains a timestamp: ObjectId.getTimestamp() extracts creation time without an extra field.
- Roughly time-ordered, so you can sort by _id for creation order without a createdAt field.
- Not random: predictable if you know the creation time, so don't rely on it for security tokens.
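The timestamp extraction above works because an ObjectId's first 4 bytes are a big-endian Unix timestamp in seconds. A minimal pure-Python sketch (mongosh's `ObjectId.getTimestamp()` or bson's `ObjectId.generation_time` do this for you):

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex: str) -> datetime:
    """Extract the creation time from a 24-char ObjectId hex string:
    the first 8 hex chars (4 bytes) are Unix seconds, big-endian."""
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# e.g. objectid_timestamp("507f1f77bcf86cd799439011") -> a 2012 UTC datetime
```

This is also why sorting by `_id` approximates creation order: the leading bytes are the timestamp.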
Query analysis
- explain("executionStats") shows actual execution, not just the theoretical plan.
- The totalDocsExamined to nReturned ratio should be about 1; otherwise an index is missing.
- COLLSCAN in explain output means a full collection scan; add an appropriate index.
- Covered queries show IXSCAN with totalDocsExamined: 0; all data comes from the index.
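The ratio check above is easy to automate against explain output. This helper and its threshold are an illustrative sketch; it reads the real `totalDocsExamined` and `nReturned` fields from `executionStats`.

```python
def index_health(execution_stats: dict) -> str:
    """Rough health read on explain("executionStats") numbers:
    examined == 0 with results means a covered query; a high
    examined-to-returned ratio suggests a missing index."""
    examined = execution_stats["totalDocsExamined"]
    returned = execution_stats["nReturned"]
    if examined == 0 and returned > 0:
        return "covered query"
    if returned and examined / returned > 10:  # threshold is an assumption
        return "likely missing index"
    return "ok"

# Usage with pymongo (not executed here):
# stats = db.coll.find(q).explain()["executionStats"]
# print(index_health(stats))
```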
Aggregation pipelines
- Pipeline stages are transformations; think of data flowing through them.
- Filter early ($match) and project early ($project) to reduce data volume as soon as possible.
- A $match at the start can use indexes; a $match after $unwind cannot.
- Test complex pipelines stage by stage and build incrementally.
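A pipeline following the filter-early, project-early ordering above, with a stage-by-stage debugging loop. Collection and field names are assumptions for illustration.

```python
# $match first (can use an index), $project next (shrinks documents),
# $group last on the reduced stream.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$project": {"customer_id": 1, "total": 1}},
    {"$group": {"_id": "$customer_id", "spend": {"$sum": "$total"}}},
]

# Build incrementally: run the first n stages and inspect a few results.
# (Requires a live connection; not executed here.)
# for n in range(1, len(pipeline) + 1):
#     print(n, list(db.orders.aggregate(pipeline[:n]))[:3])
```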
Common mistakes
- Treating MongoDB as "schemaless": you still need schema design; it's just enforced in the app, not the DB.
- Not adding indexes: every query pattern needs an index, or queries scan the entire collection.
- Growing giant documents via array pushes: you hit the 16MB limit or slow BSON parsing.
- Ignoring write concern: data may appear written but not be persisted or replicated.