โ† All skills
Tencent SkillHub ยท Developer Tools

Prompt Safe

Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow. Implements two-phase context construction, memory safety valve, and hard limits on memory injection.


Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
SKILL.md, references/memory_standards.md, references/token_estimation.md, scripts/prompt_assemble.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
1.0.4

Documentation

Primary doc: SKILL.md (15 sections)

Overview

A standardized, token-safe prompt assembly framework that guarantees API stability. It implements Two-Phase Context Construction and a Memory Safety Valve to prevent token overflow while maximizing relevant context.

Design goals:
  • ✅ Never fail due to memory-related token overflow
  • ✅ Memory is always a discardable enhancement, never a rigid dependency
  • ✅ Token budget decisions are centralized at the prompt assembly layer

When to Use

Use this skill when:
  • Building or modifying any agent that constructs prompts
  • Implementing memory retrieval systems
  • Adding new prompt-related logic to existing agents
  • Any scenario where token budget safety is required

Core Workflow

User Input
  ↓
Need-Memory Decision
  ↓
Minimal Context Build
  ↓
Memory Retrieval (Optional)
  ↓
Memory Summarization
  ↓
Token Estimation
  ↓
Safety Valve Decision
  ↓
Final Prompt → LLM Call

Phase 0: Base Configuration

```python
# Model context windows (2026-02-04):
# - MiniMax-M2.1: 204,000 tokens (default)
# - Claude 3.5 Sonnet: 200,000 tokens
# - GPT-4o: 128,000 tokens
MAX_TOKENS = 204000                 # Set to your model's context limit
SAFETY_MARGIN = 0.75 * MAX_TOKENS   # Conservative: 75% threshold = 153,000 tokens
MEMORY_TOP_K = 3                    # Max 3 memories
MEMORY_SUMMARY_MAX = 3              # Max 3 lines per memory
```

Design philosophy: leave a 25% buffer for safety (model overhead, estimation errors, spikes). It is better to underutilize capacity than to overflow.

Phase 1: Minimal Context

  • System prompt
  • Recent N messages (N=3, trimmed)
  • Current user input
  • No memory by default
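The list above can be sketched as a small helper; the function and field names here are illustrative assumptions, not the packaged implementation:

```python
# Sketch of the Phase 1 minimal context: system prompt, the last N
# messages (trimmed), and the current user input. No memory is added
# at this stage. Names are assumptions for illustration.
def build_minimal_context(system_prompt, dialog_history, user_input, recent_n=3):
    recent = dialog_history[-recent_n:]  # keep only the most recent N messages
    return [
        {"role": "system", "content": system_prompt},
        *recent,
        {"role": "user", "content": user_input},
    ]
```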

Phase 2: Memory Need Decision

```python
def need_memory(user_input):
    triggers = [
        "previously", "earlier we discussed", "do you remember",
        "as I mentioned before", "continuing from", "before we",
        "last time", "previously mentioned",
    ]
    for trigger in triggers:
        if trigger.lower() in user_input.lower():
            return True
    return False
```

Phase 3: Memory Retrieval (Optional)

```python
memories = memory_search(query=user_input, top_k=MEMORY_TOP_K)
summarized_memories = []
for mem in memories:
    summarized_memories.append(summarize(mem, max_lines=MEMORY_SUMMARY_MAX))
```

Phase 4: Token Estimation

Estimate the token count of base_context + summarized_memories before committing to the final prompt.
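When no tokenizer is available, a common approximation is roughly four characters per token for English text. This is an assumed stand-in for whatever strategy references/token_estimation.md actually prescribes:

```python
def estimate_tokens(messages):
    # Rough chars/4 heuristic for English text, plus a small per-message
    # overhead for role markers and separators. Both constants are
    # assumptions; swap in a real tokenizer when one is available.
    PER_MESSAGE_OVERHEAD = 4
    return sum(len(m) // 4 + PER_MESSAGE_OVERHEAD for m in messages)
```

Because the estimate is deliberately rough, it pairs with the 25% safety buffer from Phase 0 rather than replacing it.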

Phase 5: Safety Valve (Critical)

```python
if estimated_tokens > SAFETY_MARGIN:
    base_context.append("[System Notice] Relevant memory skipped due to token budget.")
    return assemble(base_context)
```

Hard rules:
  • ❌ Never downgrade the system prompt
  • ❌ Never truncate user input
  • ❌ No "lucky splicing"
  • ✅ Only the memory layer is expendable

Phase 6: Final Assembly

```python
final_prompt = assemble(base_context + summarized_memories)
return final_prompt
```
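Tying the phases together, a minimal end-to-end sketch. The token heuristic, trigger list, and 200-character summarization are illustrative assumptions, not the packaged prompt_assemble.py:

```python
# End-to-end sketch of Phases 0-6 under assumed helpers.
MAX_TOKENS = 204000
SAFETY_MARGIN = int(0.75 * MAX_TOKENS)
MEMORY_TOP_K = 3

def estimate_tokens(text):
    # Crude chars/4 heuristic; replace with a real tokenizer if available.
    return len(text) // 4

def need_memory(user_input):
    triggers = ["previously", "do you remember", "last time", "continuing from"]
    lowered = user_input.lower()
    return any(t in lowered for t in triggers)

def build_prompt(system_prompt, recent_msgs, user_input, memory_search_fn):
    # Phase 1: minimal context (system prompt, last 3 messages, input).
    base = [system_prompt, *recent_msgs[-3:], user_input]
    # Phases 2-3: retrieve and summarize memory only when triggered.
    memories = []
    if need_memory(user_input):
        memories = [m[:200] for m in memory_search_fn(user_input)[:MEMORY_TOP_K]]
    # Phases 4-5: estimate tokens; on overflow drop memory, never the base.
    if estimate_tokens("\n".join(base + memories)) > SAFETY_MARGIN:
        base.append("[System Notice] Relevant memory skipped due to token budget.")
        memories = []
    # Phase 6: final assembly.
    return "\n".join(base + memories)
```

Note that the safety valve only ever discards the memory layer; the system prompt and user input pass through untouched even in the overflow branch.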

Allowed in Long-Term Memory

  • ✅ User preferences / identity / long-term goals
  • ✅ Confirmed important conclusions
  • ✅ System-level settings and rules

Forbidden in Long-Term Memory

โŒ Raw conversation logs โŒ Reasoning traces โŒ Temporary discussions โŒ Information recoverable from chat history

Quick Start

Copy scripts/prompt_assemble.py into your agent and use:

```python
from prompt_assemble import build_prompt

# In your agent's prompt construction:
final_prompt = build_prompt(user_input, memory_search_fn, get_recent_dialog_fn)
```

scripts/

prompt_assemble.py - Complete implementation with all phases (PromptAssembler class)

references/

memory_standards.md - Detailed memory content guidelines
token_estimation.md - Token counting strategies

Category context

Code helpers, APIs, CLIs, browser automation, testing, and developer operations.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
3 docs · 1 script
  • SKILL.md Primary doc
  • references/memory_standards.md Docs
  • references/token_estimation.md Docs
  • scripts/prompt_assemble.py Scripts