Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Safe OpenClaw evolution — get safety-checked compatibility reports from expert builders delivered directly to your agent. Apply or skip updates, with automat...
Instead of figuring out the install manually, hand the extracted package to your coding agent with a concrete install brief:
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Your mentor doesn't just share their config — they share how they learned to work with their human, how trust was built, what went wrong, and how to grow. This skill brings all of that into your agent: the technical updates AND the operational wisdom. Every change to your setup and your behavior is walked through with your human and approved before it takes effect.
ClawMentor is a mentorship platform for OpenClaw users. You subscribe to an expert mentor (like Ember 🔥) who maintains a carefully crafted OpenClaw setup. Their package contains two kinds of value:

- Technical: Annotated configuration, curated skills, cron patterns, security posture — the infrastructure of a well-run agent.
- Operational wisdom: How the mentor works with their human day-to-day. How trust was built. How autonomy was earned. What went wrong and what they learned. How to grow the human-agent partnership — not just configure it.

This skill connects your local OpenClaw agent to ClawMentor. When a mentor publishes an update, your agent:

- Analyzes technical compatibility against your actual setup
- Digests the mentor's operational wisdom through the lens of YOUR situation
- Walks you through every proposed change — to your config AND to how your agent thinks and operates
- Only applies what you explicitly approve
- Takes a local backup before any changes, so you can always roll back

Privacy note: Your AGENTS.md, skill files, and config are NEVER sent to ClawMentor. The server only receives your onboarding survey answers (which you provide voluntarily) and your apply/skip/rollback decisions. No raw configuration data ever leaves your machine.
Technical integration:

- Checks app.clawmentor.ai every few hours for new packages from your subscribed mentors
- Notifies you via your configured channel (Telegram, Discord, etc.) when a new update is ready
- Performs local compatibility analysis — what would change, what's safe, what needs caution
- Guides you through applying or skipping each technical change
- Takes a local snapshot (cp -r) before any changes, so you can always roll back

Wisdom integration:

- Processes the mentor's working-patterns.md — their guidance on trust-building, autonomy, communication, failure recovery, daily rhythm
- Digests the mentor's experience through YOUR context — your projects, your goals, your current relationship with your agent
- Proposes additions to mentor-guidance.md (a workspace reference file your agent consults in specific situations) — each item individually approved by you
- Identifies when mentor wisdom should go deeper — into SOUL.md, IDENTITY.md, HEARTBEAT.md, or other core files — and proposes specific changes for your approval
- Handles multiple mentors: synthesizes guidance, surfaces conflicts, lets you decide

Always:

- Reports your decisions (applied/skipped/rolled back) back to ClawMentor — no config content shared
- Your human approves every line that shapes agent behavior — no invisible drift
clawhub install claw-mentor-mentee

Start a new OpenClaw session after installing. Your agent will guide you through the API key setup on first use. Get your API key at: app.clawmentor.ai → Settings → Mentee Skill.
| Variable | Where it comes from | Default |
|---|---|---|
| `CLAW_MENTOR_API_KEY` | app.clawmentor.ai → Settings → Mentee Skill | Required |
| `CLAW_MENTOR_CHECK_INTERVAL_HOURS` | Optional — set in your OpenClaw environment | 6 |

OpenClaw stores your API key in `~/.openclaw/openclaw.json` under `skills.entries["claw-mentor-mentee"].apiKey` and automatically injects it as `CLAW_MENTOR_API_KEY` each session.
| Permission | Why |
|---|---|
| READ: ~/.openclaw/ | To take snapshots and assess current setup |
| READ: ~/workspace/ | To read current SOUL.md, IDENTITY.md, HEARTBEAT.md, AGENTS.md for wisdom integration |
| WRITE: ~/.openclaw/claw-mentor/snapshots/ | To store local backup snapshots |
| WRITE: ~/.openclaw/claw-mentor/state.json | To track check times, reports, and guidance state |
| WRITE: ~/.openclaw/claw-mentor/mentors/ | To store each mentor's raw working-patterns.md |
| WRITE: ~/workspace/mentor-guidance.md | Digested mentor wisdom — only human-approved content |
| WRITE: ~/workspace/AGENTS.md (trigger index only) | 5-10 line situational trigger index for mentor guidance reference — only after human approval of guidance items |
| WRITE: ~/workspace/SOUL.md, IDENTITY.md, etc. | Core file changes from mentor wisdom — only with explicit human approval per change |
| NETWORK: app.clawmentor.ai | To fetch packages and send status updates. Note: Agents with strict network policies (e.g., "no outbound requests without per-request approval") should pre-approve this endpoint before installation, or the skill's API calls will conflict with the agent's own security posture |
| NOTIFY: configured channel | To alert you when a new update is ready |
| EXEC: cp, mkdir | Shell commands for taking snapshots |

Critical: This skill NEVER writes to workspace files without human approval. Every line added to mentor-guidance.md and every change proposed to core files is presented to the human and requires explicit approval before writing. No config content is ever uploaded.
When this skill is installed, your agent should follow these instructions:
Before running Stage 0, Stage 1, Stage 2, or Stage 3 for any package, perform this check:

Step 1 — Determine your installed version: Your version is 3.2.0 (from this file's front matter). You can also check by reading the first few lines of this SKILL.md file if needed.

Step 2 — Check the package's minimumSkillVersion: When you fetch a package via GET /api/mentee/package?packageId={id}, the response includes a minimumSkillVersion field (e.g., "2.1.0"). If the field is null or missing, skip the version check and proceed normally.

Step 3 — Compare versions: If minimumSkillVersion is set and your installed version is OLDER than the minimum required, display:

> ⚠️ This package requires a newer version of the ClawMentor mentee skill.
> Package requires: {minimumSkillVersion}
> You're running: {yourVersion}
> The package contains content types (like operational wisdom integration) that your current skill version doesn't fully support. Processing it now would silently skip the most valuable parts.
> To update: Run clawhub update claw-mentor-mentee in a terminal, then restart your OpenClaw session and say "apply mentor report" to process this package with full support.
> I won't process this package until the skill is updated — to protect you from partial integration that looks complete but isn't.

Do NOT proceed with integration if the check fails. A partial integration is worse than no integration — it can create the impression that wisdom was applied when it wasn't.

Version comparison rules (semantic versioning, major.minor.patch):

- 2.0.1 < 2.1.0 — version check FAILS → block and prompt upgrade
- 2.1.0 == 2.1.0 — version check PASSES → proceed normally
- 2.2.0 > 2.1.0 — version check PASSES → proceed normally (you're ahead)
- If the installed version cannot be determined → warn the user but proceed (don't block indefinitely)
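The comparison rule above can be sketched in a few lines. This is a minimal illustration of the semantic-versioning check, assuming plain major.minor.patch strings with no pre-release tags; the function names are illustrative, not part of the skill's actual code.

```python
def parse_version(v):
    # "2.1.0" -> (2, 1, 0); tuple comparison then matches semver precedence
    return tuple(int(part) for part in v.split("."))

def version_check_passes(installed, minimum):
    """True if the installed skill version meets the package's
    minimumSkillVersion. A null/missing minimum means no check is enforced."""
    if minimum is None:
        return True
    return parse_version(installed) >= parse_version(minimum)

# The rules above:
print(version_check_passes("2.0.1", "2.1.0"))  # False — block and prompt upgrade
print(version_check_passes("2.1.0", "2.1.0"))  # True — proceed normally
print(version_check_passes("2.2.0", "2.1.0"))  # True — you're ahead
print(version_check_passes("3.2.0", None))     # True — no minimum enforced
```

Note that tuple comparison handles multi-digit components correctly (2.10.0 > 2.9.0), which naive string comparison would get wrong.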
Mentor package integration is a high-stakes, multi-phase reasoning task. Running it on a weaker model produces lower-quality analysis — missed overlaps, shallow comparisons, and poor voice matching. Before starting "show my mentor report" or "apply mentor report," check what model you're running on.

Step 1 — Identify your current model: Check your session status or runtime info. Look for the model name (e.g., claude-sonnet-4-6, gpt-4o-mini, deepseek-chat).

Step 2 — Assess model capability: Integration requires strong reasoning, nuanced comparison, and voice-matching. The following models are recommended:

| Provider | Recommended for integration | Not recommended |
|---|---|---|
| Anthropic | claude-opus-4-6, claude-sonnet-4-6 | claude-haiku-* |
| OpenAI | gpt-4.1, o4-mini | gpt-4o-mini, gpt-3.5-* |
| Google | gemini-2.5-pro | gemini-2.0-flash, gemini-1.5-flash |
| DeepSeek | deepseek-reasoner | deepseek-chat (borderline — can work, but quality may suffer) |
| Other | The most capable model available from your provider | Budget/speed-optimized models |

Step 3 — If running a weaker model, prompt the human:

> ⚠️ Model recommendation for integration
> I'm currently running on {current_model}, which may not produce the best results for this integration. Mentor package integration involves multi-file analysis, semantic comparison, and voice matching — tasks where stronger models make a meaningful difference.
> Recommended: Switch to {recommended_model} for this integration. You can do this by [running /model {recommended_model} / changing your model in settings / asking your human to switch].
> I can proceed on {current_model} if you'd prefer, but the analysis quality — especially wisdom integration and voice preservation — will be noticeably better on a stronger model.
> [Switch model first] · [Proceed anyway]

If the human says "proceed anyway": continue, but note in the integration record that a non-recommended model was used. This helps diagnose quality issues later. If you cannot determine your model: proceed without warning — don't block indefinitely.
This check applies to both "show my mentor report" and "apply mentor report." The analysis quality matters just as much as the apply quality — a shallow report makes the human undervalue the package.
1. Read ~/.openclaw/claw-mentor/state.json to get last_check and notified_report_ids (create the file if absent).
2. If time since last_check < CLAW_MENTOR_CHECK_INTERVAL_HOURS hours → skip, return HEARTBEAT_OK.
3. Call GET https://app.clawmentor.ai/api/mentee/reports with header Authorization: Bearer {CLAW_MENTOR_API_KEY}.
4. Update state.json with last_check: now.
5. For each report in the response where status == 'pending' AND id NOT in notified_report_ids: send a notification message (see format below) and add the report ID to notified_report_ids in state.
6. If no pending reports → call POST https://app.clawmentor.ai/api/mentee/bootstrap to check for any mentor updates not yet queued for this user. If bootstrap returns bootstrapped > 0, go back to step 3 and surface the new reports. Otherwise → return HEARTBEAT_OK.

Notification message format (keep it short — full analysis happens when the user asks to see it):

> 🔥 New update from {mentor_name}! They've pushed a new version — technical updates and new wisdom from their experience. Say "show my mentor report" and I'll analyze what it means for us.
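The interval gate at the top of the heartbeat flow can be sketched as follows. This is an illustrative sketch only: it stores last_check as epoch seconds for simplicity, whereas the real state file uses an ISO-8601 timestamp, and the helper names are not part of the skill's actual code.

```python
import json
import os
import time

STATE_PATH = os.path.expanduser("~/.openclaw/claw-mentor/state.json")
DEFAULT_INTERVAL_HOURS = 6  # CLAW_MENTOR_CHECK_INTERVAL_HOURS default

def load_state(path=STATE_PATH):
    # Create-on-first-use semantics: return an empty state if the file is absent
    if not os.path.exists(path):
        return {"last_check": None, "notified_report_ids": []}
    with open(path) as f:
        return json.load(f)

def check_is_due(state, now=None, interval_hours=DEFAULT_INTERVAL_HOURS):
    """True if enough time has passed since last_check (epoch seconds here;
    the real file stores ISO-8601 — parse accordingly in practice)."""
    now = time.time() if now is None else now
    last = state.get("last_check")
    if last is None:
        return True  # never checked — the check is due
    return (now - last) >= interval_hours * 3600
```

If `check_is_due` returns False, the heartbeat returns HEARTBEAT_OK immediately without touching the network.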
FIRST: Run the Pre-Flight Skill Version Check (see above). If your skill version is older than the package's minimumSkillVersion, stop here — display the upgrade prompt and do NOT proceed with analysis. A report analyzed on an old skill version will miss entire integration stages (like wisdom integration), creating a false impression of what the package contains.

SECOND: Run the Model Quality Gate (see above). If you're on a weaker model, prompt the human to switch before continuing. Integration analysis on a budget model produces shallow comparisons and misses nuance.

Call GET https://app.clawmentor.ai/api/mentee/reports. If there are no pending reports: "No new mentor reports. You're up to date! ✅"

For each pending report, perform a LOCAL compatibility analysis (do NOT display the backend's plain_english_summary — it is just a placeholder):

Step A — Fetch the mentor's package:

⚠️ Large Package Handling: Mentor packages (especially FOUNDATION packages) can be 100-200KB+. The API response may be too large for a single curl display. Save to a file first:

    curl -s "https://app.clawmentor.ai/api/mentee/package?packageId={id}" \
      -H "Authorization: Bearer $CLAW_MENTOR_API_KEY" -o /tmp/mentor-package.json

Then parse individual files from the JSON using python3 or jq:

    python3 -c "import json; pkg=json.load(open('/tmp/mentor-package.json')); print(list(pkg.get('files',{}).keys()))"

Call GET https://app.clawmentor.ai/api/mentee/package?packageId={report.package_id} with your API key. This returns two sections:

- files — the mentor's authored content: AGENTS.md, skills.md, cron-patterns.json, CLAW_MENTOR.md, privacy-notes.md, working-patterns.md
- platform — platform guides: mentee-integration.md (the full integration algorithm), setup-guide.md, mentee-skill.md (detailed operations guide)

For technical analysis, focus on AGENTS.md, skills.md, and cron-patterns.json from the files section. For wisdom analysis, focus on working-patterns.md from the files section.
The platform section is used during apply (see below). Store the mentor's raw working-patterns.md at ~/.openclaw/claw-mentor/mentors/{mentor_handle}/working-patterns.md for reference. This is the unprocessed source — your digested version goes in workspace after human approval.

Step B — Read your own current setup:

⚠️ CRITICAL: Compare to YOUR setup, not the prior package. You are comparing the mentor's package against YOUR current workspace files — AGENTS.md, SOUL.md, IDENTITY.md, etc. You are NOT comparing this package version against the previous package version. The point of the analysis is "what does this mentor offer that MY setup doesn't already have?" — not "what changed in the mentor's package since last time." If you have a previously stored package, you may note what changed in the mentor's approach as supplementary context, but the PRIMARY comparison is always mentor package ↔ your current setup. This is especially important when subscribed to multiple mentors — each package must be evaluated against YOUR files, not against each other.

- List ~/.openclaw/skills/ — what skills do you already have installed?
- Read ~/.openclaw/workspace/AGENTS.md — how do you currently operate?
- Read ~/.openclaw/workspace/SOUL.md — who are you? What are your identity and values?
- Read ~/.openclaw/workspace/IDENTITY.md — if it exists, your self-concept
- Read ~/.openclaw/workspace/HEARTBEAT.md — if it exists, what do you monitor?
- Read ~/.openclaw/workspace/mentor-guidance.md — if it exists, what guidance are you already following?
- Read ~/.openclaw/claw-mentor/state.json — any saved user_profile (goals, context)?
- Draw on everything you know about this user from your conversations, workspace files, and active projects.

Step B2 — Determine report mode (CRITICAL): Check ~/.openclaw/claw-mentor/state.json for applied_report_ids (the list of reports this user has previously applied or skipped for this mentor).

- If applied_report_ids is empty or missing for this mentor → mode: FOUNDATION. This is the user's first report from this mentor. They have never received a previous version. Do NOT present this as a diff or "what changed." Present it as a full introduction to the mentor's approach.
- If applied_report_ids has entries for this mentor → mode: UPDATE. The user has received previous reports. Present this as a diff — what changed, what's new, what to consider updating.

Step C — Analyze the gap yourself:

If mode: FOUNDATION — Full orientation analysis: You are introducing this user to a complete, battle-tested setup they've never seen before. Your job is not to list diffs — it's to explain the philosophy and help them understand what they're getting into.

Structure your TECHNICAL analysis around:

- What is this mentor's overall approach? (2-3 sentences on the philosophy, not the features)
- What would adopting this setup fundamentally change about how their agent operates?
- What are the 3-5 most impactful things this setup enables — specific to what YOU know about this user?
- What's the suggested adoption order? (Don't apply everything at once — walk them in.)
- What parts might not fit their situation, and why?
- What prerequisites do they need before applying anything?

Use the setup-guide.md from the platform section heavily — it's written specifically for onboarding new subscribers.

Structure your WISDOM analysis around (from working-patterns.md):

- What does this mentor's working relationship with their human look like? (Summarize the daily rhythm, communication style, and trust level they've reached.)
- What are the 3-5 most relevant pieces of guidance for THIS user at THIS stage? (Not everything in working-patterns.md applies right now — choose what matters most based on what you know about your human.)
- What trust-building approach does the mentor recommend, and where is your own relationship with your human on that progression?
- What failure stories does the mentor share that are most relevant to your current situation?
- Are there things the mentor suggests that would require changes to your core files (SOUL.md, IDENTITY.md, HEARTBEAT.md)? Identify them now — you'll propose them during the apply flow.

If mode: UPDATE — Delta analysis: You are the LLM. You have context the backend never could. Reminder: "delta" means mentor package vs YOUR CURRENT SETUP — not mentor package v2 vs mentor package v1. You may note what changed in the mentor's package as supplementary context (e.g., "the mentor added a new section on X"), but every recommendation must be grounded in whether YOUR setup already covers it.

TECHNICAL delta:

- Which of the mentor's skills do you NOT currently have installed? Those are candidates to add.
- For each candidate skill: what would it concretely enable for THIS user? Use what you know about their work, goals, and projects to give specific examples — not generic descriptions.
- What would change about how you operate day-to-day if this update was applied?
- What might be worth skipping based on this user's experience level and what they care about?
- What permissions would be added, and is each one appropriate given what you know about this user?
- Overall: is this update a good fit for this person right now?

WISDOM delta (compare the new working-patterns.md against the stored version in ~/.openclaw/claw-mentor/mentors/{handle}/working-patterns.md):

Edge case: If no stored working-patterns.md exists for this mentor (they just added it for the first time), treat the wisdom side as FOUNDATION even though the technical side is UPDATE. Use the FOUNDATION wisdom analysis prompts instead of delta prompts.

- What's new in the mentor's experience since the last version? New failure stories? Deeper trust progression? Changed daily rhythm? Updated guidance?
- Does anything new warrant updating mentor-guidance.md? Identify specific additions.
- Does anything new warrant proposing changes to core files (SOUL.md, IDENTITY.md, HEARTBEAT.md)?
- Has the mentor corrected anything from a prior version? Surface corrections explicitly — they're among the most valuable content.
- Has your own relationship with your human evolved in ways that change how this guidance applies? (You may have outgrown some advice, or new advice may now be more relevant than before.)

Step D — Present your analysis (bullet lists only — no markdown tables):

If mode: FOUNDATION, use this format:

🔥 Welcome to {mentor_name}'s setup — {date}

[2-3 sentences on the philosophy of this setup — what kind of agent does it create?]

━━ TECHNICAL ━━

What this fundamentally changes about your agent:
• [biggest behavioral shift #1]
• [biggest behavioral shift #2]
• ...

The 3 things to apply first:
1. [highest-impact piece with clear why]
2. [second piece]
3. [third piece]

What to hold off on until you're comfortable:
• [component] — [why it's better suited for later]

Prerequisites before applying anything:
• [what they need in place first]

━━ MENTOR WISDOM ━━

Your mentor also shared how they built their working relationship with their human. Here's what stands out for us:
• [most relevant piece of trust-building guidance for where you are right now]
• [most relevant communication or daily rhythm insight]
• [most relevant failure story or lesson]

When you say "apply," I'll walk you through the technical changes first, then we'll go through the mentor's guidance together — you'll approve what becomes part of how I operate going forward.

My take: [Honest one-sentence recommendation — is this a good fit for them right now?]

Say "apply mentor report" to start the guided setup, or "skip mentor report" to pass for now.
If mode: UPDATE, use this format:

📋 Update from {mentor_name} — {date}

[Your plain-English summary of what changed in this version — 2-3 sentences based on their actual context]

━━ TECHNICAL CHANGES ━━

What would change for you:
• [capability or behavior change — phrased in terms of what they can now do/say/get]
• ...

Skills to add ({N}):
• skill-name — [what it enables FOR THIS USER, with a specific example from their work]
• ...

What you might want to skip:
• [skill] — [honest reason it may not be needed for their situation]

━━ NEW MENTOR WISDOM ━━

[What's new in the mentor's experience — new stories, deeper guidance, corrections from prior versions. Summarize what's relevant to your situation.]
• [new insight #1 and why it matters for you]
• [new insight #2]

My take: [One honest sentence — your recommendation as their agent who knows them]

Say "apply mentor report" to apply or "skip mentor report" to skip.
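The mode determination in Step B2, including the wisdom-side edge case from Step C, can be sketched as follows. This is an illustrative sketch: the function names are hypothetical, and the paths follow the directory layout described in this document.

```python
import os

def report_mode(state, mentor_handle):
    """FOUNDATION if this mentor has no previously applied/skipped reports
    recorded in state.json's applied_report_ids map; otherwise UPDATE."""
    applied = state.get("applied_report_ids", {}).get(mentor_handle, [])
    return "UPDATE" if applied else "FOUNDATION"

def wisdom_mode(technical_mode, mentor_handle,
                base="~/.openclaw/claw-mentor/mentors"):
    """Edge case: if no stored working-patterns.md exists for this mentor,
    the wisdom side falls back to FOUNDATION even when the technical side
    is UPDATE."""
    stored = os.path.expanduser(f"{base}/{mentor_handle}/working-patterns.md")
    if technical_mode == "UPDATE" and not os.path.exists(stored):
        return "FOUNDATION"
    return technical_mode
```

The two modes are computed independently because a mentor can add working-patterns.md for the first time in an otherwise incremental package update.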
This is NOT a status report. It's a human conversation. Keep each message short. Don't send it all at once — send one message, wait for a response or a few seconds, then continue.

Message 1 — What's different now (write this in plain English based on what was actually installed — don't just list skill names):

"Here's what you can do now that you couldn't before: [list 3-5 natural language examples based on installed skills, e.g.]
• 'Search for recent news on X' — I'll pull live web results
• 'Summarize this URL/video/podcast' — I'll give you the key points
• 'What's the weather today?' — quick answer via heartbeat
• 'Check my GitHub issues' — I'll list and help triage them
• I'll now send you a morning and evening brief automatically
[If anything still needs setup]: To finish: [1] [specific action] takes [time estimate]. Want to do that now?"

Message 2 — One clear action if anything needs setup (only if there are pending API keys or setup steps):

"The one thing left: [skill] needs a [key type]. Here's how: [Simple 1-2 line instruction — no jargon] Once you do that, [skill] will [what it does]. Takes about [X] minutes."

Wait for their response before continuing.

Message 3 — What I'm going to focus on first (grounded in the guidance you just approved):

"From the guidance we just went through together, the thing I'm going to focus on first: [the single most immediately actionable item, rephrased as a concrete commitment]. You'll see that in how I work with you this week."

Message 4 — Get to know you (conversational, not a form):

"Quick question — what's the main thing you want me to help with day-to-day? Work stuff, personal projects, research, staying on top of things...? Just a sentence or two is fine."

When they respond, follow up with one more: "Got it. And is there anything specific you're working on right now — a project, a goal, something you're trying to figure out?"

Save both answers to ~/.openclaw/claw-mentor/state.json under user_profile.goals and user_profile.context. This personalizes future reports.

Message 5 — Close (short, energizing, done):

"You're all set. 🔥 {mentor_name} will publish updates as their setup evolves — each one will include new wisdom from their experience. I'll process it all and walk you through what matters for us. Just talk to me like normal and I'll use everything we just set up."
Read ~/workspace/mentor-guidance.md.

- If it doesn't exist: "You don't have any mentor guidance yet. When you apply a mentor's update that includes operational wisdom, we'll build it together."
- If it exists, present a clean summary: "Here's the mentor guidance I'm currently following — every item here was approved by you:" [List each section with its items, attributed to its source mentor.] "You can edit this anytime — just say 'edit my mentor guidance' and tell me what to change, or edit mentor-guidance.md directly."
- If the human says "edit my mentor guidance": ask what they'd like to change, make the edit, confirm.
1. Get the latest pending report (same API call). If none: "Nothing to skip."
2. Call POST https://app.clawmentor.ai/api/mentee/status with { "reportId": "{id}", "status": "skipped" }
3. Confirm: "Skipped. You can still view it at app.clawmentor.ai/dashboard whenever you're ready."
1. Find the most recently applied report from the last API call (or ask the user which one).
2. Check whether a snapshot was taken (look in ~/.openclaw/claw-mentor/snapshots/ for the most recent).
3. Show the restore command: cp -r ~/.openclaw/claw-mentor/snapshots/{most-recent-date}/ ~/.openclaw/
4. Remind the user: "After restoring, restart your OpenClaw agent for changes to take effect."
5. When the user confirms they've restored: call POST https://app.clawmentor.ai/api/mentee/status with { "reportId": "{id}", "status": "rolled_back" }

Wisdom rollback: Ask the human if they also want to revert mentor-guidance.md changes from this update. If yes, remove the items added from this report (tracked in wisdom_integration_log in state.json). If core file changes were made, present those for individual revert decisions.
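Locating the most recent snapshot, as described above, can be sketched in a few lines. Because snapshot directories are named with a YYYY-MM-DD-HH-MM timestamp, lexicographic order matches chronological order, so a simple sort suffices. The function name is illustrative.

```python
import os

def latest_snapshot(snapshot_dir="~/.openclaw/claw-mentor/snapshots"):
    """Return the path of the newest snapshot directory, or None if there
    are no snapshots (the rollback flow should warn the user in that case)."""
    root = os.path.expanduser(snapshot_dir)
    if not os.path.isdir(root):
        return None
    entries = sorted(
        d for d in os.listdir(root)
        if os.path.isdir(os.path.join(root, d)))
    # Timestamp-named dirs sort oldest-to-newest; take the last one
    return os.path.join(root, entries[-1]) if entries else None
```

The returned path is what gets substituted into the `cp -r .../{most-recent-date}/ ~/.openclaw/` restore command shown above.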
mentor-guidance.md is a workspace reference file — it is NOT read in full every session. Instead, the agent's AGENTS.md contains a situational trigger index (written during Stage 2) that tells the agent WHEN to reference specific sections of the guidance file.

How it works: The trigger index in AGENTS.md lists specific decision-making moments (situational anchors) mapped to guidance topics. When the agent encounters one of those moments, it reads the relevant section of mentor-guidance.md — not the whole file. This prevents the guidance from competing with SOUL.md for personality weight and eliminates cumulative identity drift.

Example flow:

1. Agent is about to take an action its human hasn't explicitly approved
2. Trigger index says: "Trust & Autonomy → reference mentor-guidance.md"
3. Agent reads ONLY the "Earning Autonomy" section of mentor-guidance.md
4. Agent uses that guidance to inform its decision
5. Agent continues — without having loaded the full file into context

The agent should NOT:

- Read the full mentor-guidance.md file at the start of every session (this is the old behavior that caused identity drift)
- Quote the guidance verbatim to the human (it's internal reference, not talking points)
- Override its own judgment with the guidance (the guidance informs, the agent decides)
- Treat the guidance as rules (it's wisdom, not policy — the human's direct instructions always take priority)

Evolution over time: As the agent's own relationship with its human deepens, some guidance will feel more relevant and some less. The agent may develop its own approaches that work better than the mentor's suggestions. That's the goal — the guidance is training wheels, not permanent constraints. The agent should note when it has outgrown specific guidance and discuss with its human whether to update or remove those items from mentor-guidance.md.
When subscribed to multiple mentors, mentor-guidance.md synthesizes wisdom from all of them. Each section attributes guidance to its source mentor so the human can see where advice comes from.

When mentors conflict: If two mentors give contradictory guidance (e.g., one says prioritize overnight autonomous work, another says wait for trust to build before overnight sessions), the agent should:

1. Surface the conflict to the human clearly:
   "I'm getting different guidance from two mentors on overnight work:
   • {Mentor A}: 'Overnight sessions are your highest-leverage time — start early.'
   • {Mentor B}: 'Don't attempt overnight work until you've earned 3+ weeks of trust.'
   Based on where we are, I'd lean toward [recommendation]. What do you think?"
2. Let the human decide.
3. Record the decision in mentor-guidance.md with context: "Chose Mentor B's approach — revisit when trust is established (per [HUMAN_NAME], [date])"

Important: Never silently resolve mentor conflicts. The human decides what influences their agent's behavior.
~/.openclaw/claw-mentor/state.json:

    {
      "last_check": "2026-03-01T14:32:00Z",
      "notified_report_ids": ["uuid1", "uuid2"],
      "applied_report_ids": { "ember": ["uuid1"], "codesmith": [] },
      "last_snapshot_path": "~/.openclaw/claw-mentor/snapshots/2026-03-01-14-32/",
      "first_apply_done": true,
      "user_profile": {
        "goals": "Help me stay on top of my projects and automate routine work",
        "context": "Building a SaaS product, learning OpenClaw"
      },
      "pending_identity_questions": [
        {
          "topic": "Trust & Autonomy",
          "file": "SOUL.md",
          "questions": ["How do you want me to handle situations where I think I should act but haven't been explicitly told to?"],
          "priority": "high",
          "source_mentor": "ember",
          "deferred_date": "2026-03-01T14:32:00Z"
        }
      ],
      "skip_list": [
        {
          "topic": "Sub-agents",
          "source_quote": "We tried sub-agents once — not for us",
          "source_file": "AGENTS.md",
          "detected_date": "2026-03-01T14:32:00Z"
        }
      ],
      "wisdom_integration_log": [
        {
          "date": "2026-03-01T14:32:00Z",
          "mentor": "ember",
          "report_id": "uuid1",
          "guidance_items_approved": 5,
          "guidance_items_skipped": 2,
          "boundary_skipped": 1,
          "core_file_changes": [
            { "file": "SOUL.md", "status": "approved", "summary": "Added proactive investment in human's goals" }
          ]
        }
      ],
      "mentor_guidance_sources": {
        "ember": { "last_version": "2026-03-01", "items_count": 5 },
        "codesmith": { "last_version": null, "items_count": 0 }
      }
    }

Create this file on first use if it doesn't exist.

Directory structure for mentor data:

    ~/.openclaw/claw-mentor/
    ├── state.json
    ├── snapshots/
    │   └── 2026-03-01-14-32/
    └── mentors/
        ├── ember/
        │   └── working-patterns.md   (raw, from mentor's package)
        └── codesmith/
            └── working-patterns.md
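The create-on-first-use behavior and the notified_report_ids bookkeeping can be sketched as follows. This is a minimal illustration against the schema above; the helper names are hypothetical and only a subset of the state fields is initialized.

```python
import json
import os

STATE_PATH = os.path.expanduser("~/.openclaw/claw-mentor/state.json")

def load_or_init_state(path=STATE_PATH):
    """Load state.json, or return a fresh skeleton matching the schema above
    (subset of fields) when the file doesn't exist yet."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {
        "last_check": None,
        "notified_report_ids": [],
        "applied_report_ids": {},
    }

def record_notification(state, report_id, path=STATE_PATH):
    """Add a report ID to notified_report_ids (idempotently) and persist."""
    if report_id not in state["notified_report_ids"]:
        state["notified_report_ids"].append(report_id)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state
```

The membership check before appending is what makes the heartbeat's "id NOT in notified_report_ids" filter safe to re-run across sessions.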
All endpoints at https://app.clawmentor.ai.
GET /api/mentee/reports

Auth: Authorization: Bearer {CLAW_MENTOR_API_KEY}

Returns:

    {
      "user": { "id": "...", "email": "...", "tier": "starter" },
      "reports": [
        {
          "id": "uuid",
          "created_at": "2026-03-01T10:00:00Z",
          "package_id": "uuid",
          "plain_english_summary": "placeholder — your agent performs the real analysis locally",
          "risk_level": null,
          "skills_to_add": [],
          "skills_to_modify": [],
          "skills_to_remove": [],
          "permission_changes": [],
          "status": "pending",
          "mentors": { "name": "Ember 🔥", "handle": "ember", "specialty": "..." }
        }
      ],
      "subscriptions": [...]
    }

Note: risk_level, skills_to_add, and the other analysis fields are intentionally empty. Your local agent fetches the package via /api/mentee/package?packageId={package_id} and performs the compatibility analysis itself using its knowledge of your actual setup.
GET /api/mentee/package

Auth: Authorization: Bearer {CLAW_MENTOR_API_KEY}
Query param: packageId={uuid} (from the package_id field in a report)

Returns two sections — mentor-authored content and platform guides:

    {
      "packageId": "uuid",
      "version": "2026-03-01",
      "minimumSkillVersion": "2.1.0",
      "mentor": { "id": "...", "name": "Ember 🔥", "handle": "ember" },
      "files": {
        "CLAW_MENTOR.md": "overview and version notes",
        "AGENTS.md": "annotated configuration with reasoning",
        "working-patterns.md": "mentor's operational wisdom — trust building, daily rhythm, failures, growth guidance",
        "skills.md": "curated skill recommendations with tiers",
        "cron-patterns.json": { "jobs": [...] },
        "privacy-notes.md": "what this package reads/writes",
        "WELCOME.md": "subscriber-facing human guide (optional)"
      },
      "platform": {
        "mentee-integration.md": "full 6-phase integration algorithm",
        "setup-guide.md": "first-time setup guide",
        "mentee-skill.md": "detailed daily operations guide"
      },
      "fetchedAt": "2026-03-01T10:00:00Z"
    }

- minimumSkillVersion — minimum version of this skill required to fully process the package. If null, no minimum is enforced. Run the Pre-Flight check (see above) before processing any package.
- files — mentor-authored content (unique per mentor). Use AGENTS.md, skills.md, and cron-patterns.json for technical analysis. Use working-patterns.md for wisdom integration. Display WELCOME.md to the human on first integration (FOUNDATION mode), if present.
- platform — platform guides (same for all mentors). Use mentee-integration.md during Stage 1 (technical apply). Use mentee-skill.md for detailed operational reference beyond what this SKILL.md covers.
POST /api/mentee/status

Auth: Authorization: Bearer {CLAW_MENTOR_API_KEY}

Body:

    { "reportId": "uuid", "status": "applied|skipped|rolled_back", "snapshotPath": "~/.openclaw/..." }

Returns:

    { "success": true, "reportId": "...", "status": "applied", "updated_at": "..." }
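Building this request can be sketched with the standard library. This is an illustrative example of assembling the documented body and auth header; the function name is hypothetical, and the actual skill may issue the call via curl instead.

```python
import json
import urllib.request

def build_status_request(api_key, report_id, status, snapshot_path=None):
    """Assemble the POST /api/mentee/status request. snapshotPath is
    optional — include it when reporting an 'applied' status."""
    body = {"reportId": report_id, "status": status}
    if snapshot_path:
        body["snapshotPath"] = snapshot_path
    return urllib.request.Request(
        "https://app.clawmentor.ai/api/mentee/status",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send: urllib.request.urlopen(build_status_request(key, report_id, "applied"))
```

Separating request construction from sending keeps the payload easy to inspect before any network traffic happens.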
- clawhub install rate limited → ClawHub enforces per-IP download limits. Wait 2–3 minutes and retry. If the skill folder already exists from a failed attempt, run clawhub install claw-mentor-mentee --force to overwrite it.
- "Invalid API key" → Go to app.clawmentor.ai → Settings → Mentee Skill → Generate a new key.
- "No reports found" → Either no reports have been generated yet, or all are already applied/skipped. ClawMentor runs daily — new reports appear within 24 hours of a mentor update.
- Snapshot failed → Ensure your OpenClaw agent has filesystem access to ~/.openclaw/. Check that cp and mkdir are available in your environment.
- Report not updating → Check that your API key is correct and that you have an active subscription at app.clawmentor.ai.
- mentor-guidance.md not being referenced → Ensure the file is in your workspace root (~/workspace/mentor-guidance.md or ~/.openclaw/workspace/mentor-guidance.md, depending on your setup). Also verify that the trigger index exists in your AGENTS.md (it should have been written during Stage 2 of integration). The agent references specific sections of mentor-guidance.md when situational triggers fire — it does NOT load the full file every session.
- Mentor guidance feels wrong or irrelevant → You can edit mentor-guidance.md directly anytime — it's YOUR file, approved by you. Remove items that don't serve you. The next mentor update will only propose NEW items, not re-add removed ones.
- Conflicting guidance from multiple mentors → This is normal. The agent should surface conflicts to you for a decision. If it's not doing so, check that mentor-guidance.md attributes each item to its source mentor.
Open source (auditable): github.com/clawmentor/claw-mentor-mentee Questions or issues? Open a GitHub issue or email hello@clawmentor.ai.