Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Decompose any text into classified semantic units — authority, risk, attention, entities. No LLM. Deterministic.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Decompose any text or URL into classified semantic units. Each unit gets authority level, risk category, attention score, entity extraction, and irreducibility flags. No LLM required. Deterministic. Runs locally.
pip install decompose-mcp
Add to your OpenClaw MCP config:

{
  "mcpServers": {
    "decompose": {
      "command": "python3",
      "args": ["-m", "decompose", "--serve"]
    }
  }
}
python3 -m decompose --text "The contractor shall provide all materials per ASTM C150-20."
decompose_text — Decompose any text into classified semantic units.

Parameters:
- text (required) — The text to decompose
- compact (optional, default: false) — Omit zero-value fields for smaller output
- chunk_size (optional, default: 2000) — Max characters per unit

Example prompt: "Decompose this spec and tell me which sections are mandatory"

Returns: JSON with a units array. Each unit contains:
- authority — mandatory, prohibitive, directive, permissive, conditional, informational
- risk — safety_critical, security, compliance, financial, contractual, advisory, informational
- attention — 0.0 to 10.0 priority score
- actionable — whether someone needs to act on this
- irreducible — whether content must be preserved verbatim
- entities — referenced standards and codes (ASTM, ASCE, IBC, OSHA, etc.)
- dates — extracted date references
- financial — extracted dollar amounts and percentages
- heading_path — document structure hierarchy
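As a sketch of consuming the documented units schema, one might filter the returned array for actionable, high-priority mandatory units. The sample data below is hypothetical, hand-written in the documented shape, not real tool output:

```python
# Hypothetical sample in the documented unit schema; real output
# would come from the decompose_text tool.
units = [
    {"authority": "mandatory", "risk": "compliance", "attention": 8.2,
     "actionable": True,
     "text": "The contractor shall provide all materials per ASTM C150-20."},
    {"authority": "informational", "risk": "informational", "attention": 1.1,
     "actionable": False,
     "text": "Cement is a binder used in construction."},
]

def mandatory_units(units, min_attention=5.0):
    """Keep mandatory units at or above a priority threshold,
    highest attention first. The threshold is an illustrative choice."""
    hits = [u for u in units
            if u["authority"] == "mandatory" and u["attention"] >= min_attention]
    return sorted(hits, key=lambda u: u["attention"], reverse=True)

for u in mandatory_units(units):
    print(f"[{u['attention']:.1f}] {u['text']}")
```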
decompose_url — Fetch a URL and decompose its content. Handles HTML, Markdown, and plain text.

Parameters:
- url (required) — URL to fetch and decompose
- compact (optional, default: false) — Omit zero-value fields

Example prompt: "Decompose https://spec.example.com/transport and show me the security requirements"
- Authority levels — RFC 2119 keywords: "shall" = mandatory, "should" = directive, "may" = permissive
- Risk categories — safety-critical, security, compliance, financial, contractual
- Attention scoring — authority weight x risk multiplier, 0-10 scale
- Standards references — ASTM, ASCE, IBC, OSHA, ACI, AISC, AWS, ISO, EN
- Financial values — dollar amounts, percentages, retainage, liquidated damages
- Dates — deadlines, milestones, notice periods
- Irreducibility — legal mandates, threshold values, formulas that cannot be paraphrased
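The RFC 2119 keyword mapping above can be illustrated with a minimal regex matcher. This is an illustrative sketch, not the package's actual classifier, which presumably handles many more cases:

```python
import re

# Illustrative RFC 2119 keyword mapping; negated forms are checked first
# so "shall not" is not misread as "shall".
AUTHORITY_KEYWORDS = [
    (re.compile(r"\bshall not\b|\bmust not\b", re.I), "prohibitive"),
    (re.compile(r"\bshall\b|\bmust\b", re.I), "mandatory"),
    (re.compile(r"\bshould\b", re.I), "directive"),
    (re.compile(r"\bmay\b", re.I), "permissive"),
]

def classify_authority(sentence: str) -> str:
    """Return the first matching authority level, else 'informational'."""
    for pattern, level in AUTHORITY_KEYWORDS:
        if pattern.search(sentence):
            return level
    return "informational"

print(classify_authority("The contractor shall provide all materials."))  # mandatory
```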
- Pre-process documents before sending to your LLM — save 60-80% of context window
- Classify specs, contracts, policies, regulations by obligation level
- Extract standards references and compliance requirements
- Route high-attention content to specialized analysis chains
- Build structured training data from raw documents
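Routing by attention score could look like the sketch below. The chain names and thresholds are hypothetical assumptions for illustration, not values the package defines:

```python
def route(unit: dict) -> str:
    """Pick a downstream analysis chain from a unit's attention score.
    Thresholds and chain names are illustrative, not package defaults."""
    score = unit.get("attention", 0.0)
    if score >= 7.0:
        return "expert-review"      # high-priority obligations
    if score >= 3.0:
        return "standard-analysis"  # routine actionable content
    return "archive"                # background/informational

print(route({"attention": 8.2}))  # expert-review
```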
- ~14ms average per document on Apple Silicon
- 1,000+ chars/ms throughput
- Zero API calls, zero cost, works offline
- Deterministic — same input always produces same output
Text classification is fully local. The decompose_text tool performs all processing in-process with no network I/O. No data leaves your machine.

URL fetching performs outbound HTTP requests. The decompose_url tool fetches the target URL, which necessarily involves network I/O to the specified host. This is why the skill declares the network permission in claw.json. If you do not need URL fetching, you can use decompose_text exclusively with no network access required.

SSRF protection. URL fetching blocks private/internal IP ranges before connecting: 0.0.0.0/8, 10.0.0.0/8, 100.64.0.0/10, 127.0.0.0/8, 169.254.0.0/16, 172.16.0.0/12, 192.168.0.0/16, ::1/128, fc00::/7, fe80::/10. The implementation resolves the hostname via DNS before connecting and checks all returned addresses against the blocklist. See src/decompose/mcp_server.py lines 19-49.

No API keys or credentials required. No external services are contacted except when using decompose_url to fetch user-specified URLs.

Source code is fully auditable. The complete source is published at github.com/echology-io/decompose. The PyPI package is built from this repo via GitHub Actions (publish.yml) using PyPI Trusted Publishers (OIDC), so the published artifact is traceable to a specific commit.
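The SSRF check described above can be sketched with the stdlib ipaddress module, using exactly the blocked ranges listed. This is a sketch of the technique, not the code in src/decompose/mcp_server.py:

```python
import ipaddress
import socket

# The private/internal ranges listed above.
BLOCKED_NETWORKS = [ipaddress.ip_network(n) for n in [
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
    "::1/128", "fc00::/7", "fe80::/10",
]]

def is_blocked(ip: str) -> bool:
    """True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr.version == net.version and addr in net
               for net in BLOCKED_NETWORKS)

def host_is_safe(hostname: str) -> bool:
    """Resolve the hostname via DNS and require every returned
    address to pass the blocklist check, as the docs describe."""
    infos = socket.getaddrinfo(hostname, None)
    return all(not is_blocked(info[4][0]) for info in infos)

print(is_blocked("10.1.2.3"))       # True (inside 10.0.0.0/8)
print(is_blocked("93.184.216.34"))  # False (public address)
```

Checking every resolved address, not just the first, matters: a hostname can return a mix of public and private records, and connecting to any private one is what SSRF exploits.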
- Source Code (GitHub) — full source, auditable
- PyPI — published via Trusted Publishers
- Documentation
- Blog: When Regex Beats an LLM
- Blog: Why Your Agent Needs a Cognitive Primitive