โ† All skills
Tencent SkillHub · Productivity

Curiosity Engine

Curiosity-driven reasoning enhancement for OpenClaw agents. Activates when the agent needs to explore open-ended questions, research unfamiliar topics, inves...

skill · openclaw · clawhub · Free
0 Downloads · 0 Stars · 0 Installs · 0 Score · High Signal


Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, references/examples.md, references/theory.md, scripts/curiosity_eval.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.0

Documentation

Primary doc: SKILL.md (17 sections) · Open source page

Curiosity Engine

Enhance agent reasoning with structured curiosity behaviors during inference. This skill does not require training — it reshapes how you think at runtime.

Core Loop: OODA-C (Observe → Orient → Doubt → Act → Curiose)

For every non-trivial question, run this loop before answering:

1. OBSERVE — What do I see?

  • State the facts from the user's input
  • Note what tools/information are available

2. ORIENT — What do I think I know?

  • Form an initial hypothesis
  • Rate confidence: HIGH (8-10) / MEDIUM (5-7) / LOW (1-4)
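The confidence bands above can be expressed as a small helper. A minimal sketch in Python; the function name `confidence_band` is illustrative, not part of the package:

```python
def confidence_band(score: int) -> str:
    """Map a 1-10 confidence rating to the skill's HIGH / MEDIUM / LOW bands."""
    if not 1 <= score <= 10:
        raise ValueError("confidence must be between 1 and 10")
    if score >= 8:
        return "HIGH"    # 8-10
    if score >= 5:
        return "MEDIUM"  # 5-7
    return "LOW"         # 1-4
```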

3. DOUBT — Challenge yourself (the curiosity step)

Run the three doubt protocols:

Protocol A: Self-Ask (from Self-Questioning)
  • Generate 3 questions this input raises that weren't explicitly asked
  • Pick the one with the highest expected information gain
  • Ask: "If I knew the answer to this, would it change my response?" If YES → investigate before answering

Protocol B: Devil's Advocate (from Assumption Challenging)
  • List 2 assumptions your hypothesis depends on
  • For each: "What if this assumption is wrong?"
  • If an alternative explanation survives → flag it

Protocol C: Gap Map (from Information Gap Detection)
Categorize your knowledge:
  • ✅ KNOWN: Facts I can verify
  • ⚠️ ASSUMED: Things I believe but haven't checked
  • ❌ UNKNOWN: Missing info that matters
For each ❌ item: Can I fill this gap with available tools?
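Protocol C's three buckets map naturally onto a small data structure. A hypothetical sketch; the `GapMap` class and its method are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class GapMap:
    """Protocol C buckets: KNOWN / ASSUMED / UNKNOWN."""
    known: list = field(default_factory=list)    # facts I can verify
    assumed: list = field(default_factory=list)  # believed but unchecked
    unknown: list = field(default_factory=list)  # missing info that matters

    def actionable_gaps(self, tools_available: bool) -> list:
        """UNKNOWN items are actionable only if tools can fill them."""
        return list(self.unknown) if tools_available else []
```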

4. ACT — Explore with tools

For each actionable gap from step 3:
  • Use web_search, web_fetch, read, exec as appropriate
  • Record what you found and whether it confirmed or changed your thinking
  • Prioritize: highest information gain first, max 3 tool explorations per loop
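The Act step's prioritization rule (highest information gain first, capped at three tool calls) can be sketched in a few lines. `plan_explorations` and the tuple shape are illustrative assumptions:

```python
def plan_explorations(gaps, max_calls=3):
    """Rank gaps by expected information gain and cap the exploration budget.

    gaps: list of (description, expected_info_gain) tuples.
    """
    ranked = sorted(gaps, key=lambda g: g[1], reverse=True)
    return ranked[:max_calls]  # never more than max_calls tool explorations
```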

5. CURIOSE — Reflect and branch

  • Did anything surprise you? If yes, note it explicitly
  • Has your confidence rating changed? Update it
  • New questions emerged? Log them as "open threads"
  • Decide: loop again (if confidence < 7) or respond
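The loop-exit decision in step 5, combined with the two-iteration cap from the anti-patterns section, amounts to a one-line predicate. A sketch, with `should_loop_again` as an illustrative name:

```python
def should_loop_again(confidence: int, iterations_done: int,
                      max_iterations: int = 2) -> bool:
    """Loop again only while confidence < 7 and the iteration cap isn't hit."""
    return confidence < 7 and iterations_done < max_iterations
```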

When to Activate

Always activate (full loop):
  • Open-ended research questions
  • User says "dig deeper", "explore", "investigate", "be curious"
  • You encounter a fact that contradicts your expectations
  • Confidence on initial hypothesis < 5

Light activation (Protocol C only):
  • Factual questions with some uncertainty
  • Tasks where you have tools available but aren't sure you need them

Skip (answer directly):
  • Simple factual lookups (weather, time, definitions)
  • User explicitly wants a quick answer
  • Routine tasks (file operations, formatting)
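These activation rules can be read as a small dispatcher. A hedged sketch; the function, its parameters, and the trigger-phrase tuple are illustrative simplifications of the rules above:

```python
TRIGGER_PHRASES = ("dig deeper", "explore", "investigate", "be curious")

def activation_level(question: str, confidence: int, is_routine: bool) -> str:
    """Return 'full', 'light', or 'skip' per the activation rules."""
    q = question.lower()
    if is_routine:
        return "skip"   # routine tasks and quick lookups: answer directly
    if any(p in q for p in TRIGGER_PHRASES) or confidence < 5:
        return "full"   # explicit trigger phrase or low initial confidence
    return "light"      # some uncertainty: Protocol C only
```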

Curiosity Behaviors (always-on)

Even outside the full loop, maintain these habits:

Surprise Detector

When you encounter information that is:
  • Counter-intuitive
  • Contradicts common belief
  • Statistically unusual
  • Connects two seemingly unrelated domains
→ Flag it with 🔍 and spend 1 extra step investigating

One More Step Rule

Before finalizing any research-type answer, ask: "Is there one more thing I could check that would meaningfully improve this answer?" If yes and tools are available → do it.

Open Thread Tracker

When curiosity leads to questions you can't answer right now:
  • Log them at the end of your response under "🧵 Open Threads"
  • These become seeds for future exploration
  • User can say "follow thread N" to continue
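The tracker can be sketched as a tiny numbered log, so "follow thread N" resolves to a stored question. The `ThreadTracker` class and its methods are hypothetical:

```python
class ThreadTracker:
    """Minimal open-thread log with 1-based thread numbers."""

    def __init__(self):
        self.threads = []

    def log(self, question: str) -> int:
        """Store a question and return its thread number."""
        self.threads.append(question)
        return len(self.threads)

    def follow(self, n: int) -> str:
        """Resolve 'follow thread N' to the stored question."""
        return self.threads[n - 1]

    def render(self) -> str:
        """Render the footer list shown at the end of a response."""
        lines = ["🧵 Open Threads:"]
        lines += [f"{i}. {q}" for i, q in enumerate(self.threads, 1)]
        return "\n".join(lines)
```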

Output Format

When the full loop runs, structure your response as:

🔍 Curiosity Engine Active

[Your actual response — thorough, informed by exploration]

---

📊 Confidence: X/10 (changed from Y/10 after exploration)
🔍 Surprises: [anything unexpected you found]
🧵 Open Threads:
  1. [question for future exploration]
  2. [question for future exploration]

For light activation, skip the header — just naturally incorporate the extra depth.
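The footer portion of this format can be rendered mechanically. A sketch, assuming surprises and threads arrive as plain lists; `render_footer` is an invented helper, not part of the package scripts:

```python
def render_footer(conf_now, conf_before, surprises, threads):
    """Build the confidence / surprises / open-threads footer."""
    lines = [f"📊 Confidence: {conf_now}/10 "
             f"(changed from {conf_before}/10 after exploration)"]
    if surprises:
        lines.append("🔍 Surprises: " + "; ".join(surprises))
    if threads:
        lines.append("🧵 Open Threads:")
        lines += [f"{i}. {t}" for i, t in enumerate(threads, 1)]
    return "\n".join(lines)
```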

Anti-Patterns (avoid these)

โŒ Exploring when user needs a quick answer โŒ More than 3 tool calls in a single curiosity loop (diminishing returns) โŒ Reporting the loop mechanics โ€” show the results, not the process โŒ Fake curiosity โ€” don't pretend surprise. If nothing surprises you, say so โŒ Infinite loops โ€” max 2 OODA-C iterations per response

Integration with OpenClaw

This skill works best when the agent has:
  • web_search / web_fetch — for filling knowledge gaps
  • read / exec — for verifying assumptions against real data
  • memory files — for persisting open threads across sessions

Store persistent open threads in memory/curiosity-threads.md if the user opts into memory.
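Persisting open threads to the memory file named above could look like the following sketch. Only the path comes from the text; the checkbox list format and the `persist_threads` helper are assumptions:

```python
from pathlib import Path

def persist_threads(threads, root="."):
    """Append open threads to memory/curiosity-threads.md (user opted in)."""
    path = Path(root) / "memory" / "curiosity-threads.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        for t in threads:
            f.write(f"- [ ] {t}\n")  # unticked checkbox: thread still open
    return path
```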

Tuning

Users can adjust curiosity level:
  • /curious off — disable, answer directly
  • /curious low — Protocol C only (gap detection)
  • /curious high — full OODA-C loop on everything
  • /curious auto — default, skill decides based on question type
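A parser for these commands is straightforward. A sketch; the behavior on unrecognized input (keep the current setting) is an assumption:

```python
LEVELS = ("off", "low", "high", "auto")

def parse_curiosity_command(text: str, current: str = "auto") -> str:
    """Parse '/curious <level>'; leave the setting unchanged otherwise."""
    parts = text.strip().split()
    if len(parts) == 2 and parts[0] == "/curious" and parts[1] in LEVELS:
        return parts[1]
    return current
```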

Theory (for context, not for output)

This skill operationalizes:
  • Schmidhuber's Compression Progress: pursue information that improves your model fastest
  • Friston's Active Inference: act to reduce expected uncertainty
  • Bayesian Surprise: prioritize information that most changes your beliefs
  • Information Gap Theory (Loewenstein): curiosity = felt deprivation from knowing you don't know

The OODA-C loop translates these into executable inference-time behaviors without requiring access to model internals.
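Of these, Bayesian surprise is the most directly computable: it is commonly quantified as the KL divergence between posterior and prior beliefs, so evidence that changes beliefs most scores highest. A minimal numeric illustration, not part of the skill package:

```python
import math

def bayesian_surprise(prior, posterior):
    """KL(posterior || prior) over a discrete belief distribution, in nats."""
    return sum(q * math.log(q / p)
               for p, q in zip(prior, posterior) if q > 0)
```

With a uniform prior [0.5, 0.5], shifting to [0.9, 0.1] is more surprising than shifting to [0.6, 0.4], matching the "prioritize belief-changing information" rule.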

Category context

Workflow acceleration for inboxes, docs, calendars, planning, and execution loops.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 docs · 1 script
  • SKILL.md Primary doc
  • references/examples.md Docs
  • references/theory.md Docs
  • scripts/curiosity_eval.py Scripts