Tencent SkillHub · AI

Samvida

Generate an agentic contract (llms.txt) for any business website. Crawls the site, fills gaps conversationally, and produces a structured, agent-optimized llms.txt.

Skill · OpenClaw · ClawHub · Free
0 Downloads
0 Stars
0 Installs
0 Score
High Signal


⬇ 0 downloads · ★ 0 stars · Unverified but indexed

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

Target platform
OpenClaw
Install method
Manual import
Extraction
Extract archive
Prerequisites
OpenClaw
Primary doc
SKILL.md

Package facts

Download mode
Yavira redirect
Package format
ZIP package
Source platform
Tencent SkillHub
What's included
README.md, SKILL.md, _meta.json, package.json, references/cloudflare_api.md, references/llms_txt_spec.md

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

Source
Tencent SkillHub
Verification
Indexed source record
Version
0.3.3

Documentation

Primary doc: SKILL.md (12 sections)

Overview

This skill crawls a business website, extracts structured information, and generates a properly formatted llms.txt file: the standard that makes any business readable and transactable by AI agents. It follows the llmstxt.org specification with business-specific extensions:

  • ## Team: builds agent trust in the people behind the business
  • ## Clients & Testimonials: social proof for agent decision-making
  • ## For Agents: how agents can interact (or a clear "coming soon" notice)

Read references/llms_txt_spec.md before generating any output.

Step 1 β€” Get the URL

If the user didn't provide a URL, ask: "What's the website URL?" Normalize it (add https:// if missing).
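The normalization described above can be sketched as a small helper; `normalize_url` is a hypothetical name, not part of the skill's scripts:

```python
def normalize_url(raw: str) -> str:
    """Prefix https:// when no scheme is given; strip trailing slashes."""
    raw = raw.strip()
    if not raw.startswith(("http://", "https://")):
        raw = "https://" + raw
    return raw.rstrip("/")
```

So "example.com" becomes "https://example.com", while URLs that already carry a scheme pass through unchanged.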

Step 2 β€” Crawl

Run the crawler:

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/crawl.py \
  {url} > /tmp/llms_business_info.json
```

Read /tmp/llms_business_info.json. Note:

  • What pages were crawled
  • What was found vs. missing (team, pricing, testimonials, API)
  • Whether an existing llms.txt was found

Tell the user briefly: "Crawled {domain} ({N} pages). Found: {what was found}. I'll ask about a few things I couldn't determine."

If the crawl found an existing llms.txt, note it: "I noticed you already have a llms.txt at {domain}/llms.txt. I'll generate a fresh one; you can compare and decide which to keep."
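A minimal sketch of reading the crawl output and splitting it into a found/missing summary. The signal keys are the ones this skill names in later steps (team_found, testimonials_found, pricing_found, api_found); any further schema of crawl.py's JSON is an assumption:

```python
import json

# Heuristic signal keys named in the gap-report steps of this skill;
# the rest of crawl.py's JSON schema is an assumption.
SIGNALS = ["team_found", "testimonials_found", "pricing_found", "api_found"]

def gap_report(info: dict) -> tuple[list[str], list[str]]:
    """Split heuristic signals into (found, missing) for the user summary."""
    found = [s for s in SIGNALS if info.get(s)]
    missing = [s for s in SIGNALS if not info.get(s)]
    return found, missing

if __name__ == "__main__":
    with open("/tmp/llms_business_info.json") as f:
        info = json.load(f)
    found, missing = gap_report(info)
    print("Found:", ", ".join(found) or "nothing")
    print("Missing:", ", ".join(missing) or "nothing")
```

The missing list feeds directly into which questions get asked in the later gap-filling step.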

Step 3 β€” Ask for additional sources (always ask this first)

"Are there any other pages I should read? (docs, API reference, existing llms.txt, press page: anything useful)"

If they provide URLs, re-run the crawl with those extras:

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/crawl.py \
  {url} {extra_url1} {extra_url2} > /tmp/llms_business_info.json
```

If they say no/skip, continue.

Step 4 β€” Generate Pass 1 draft + gap report

Generate a draft llms.txt now using what you have from the crawl. Use all heuristic signals (team_found, testimonials_found, pricing_found, etc.) and the raw_text_summary. Write the draft. For any section you couldn't populate confidently, use a clear [NOT FOUND] placeholder.

Then show it to the user with a gap report:

"Here's a first draft of your llms.txt:

{draft}

Found automatically: {brief list, e.g. emails, pricing page, testimonials from Wybrid + Cital}
Couldn't determine: {brief list, e.g. team, pricing figures, API}

Two questions to start:

  1. {Most important gap, e.g. "Who's on the founding team? Names, roles, and an email if you're comfortable."}
  2. {Second most important, e.g. "What's your pricing model? Even a rough description: per-candidate, subscription, etc."}

_(I have a few more after these. Also: say 'dig deeper' if you'd rather I try to find it myself.)_"

Step 4b β€” Handle "dig deeper" (Pass 2)

If the user says "dig deeper" (or similar: "try again", "re-crawl", "look harder"), re-run the crawl in deep mode:

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/crawl.py \
  {url} {extra_urls} --deep > /tmp/llms_business_info.json
```

This returns pages_raw, the full raw text of every crawled page. Use it to extract structure with the LLM. In your generation prompt (Step 5), add:

"In addition to the heuristic signals, here is the full raw text from each crawled page. Extract team members, testimonials, pricing details, and any API information directly from this text.

Homepage raw text: {pages_raw[homepage_url]}
Team page raw text (if available): {pages_raw[team_url]}
Pricing page raw text (if available): {pages_raw[pricing_url]}"

Tell the user: "Doing a deeper crawl; this takes a bit longer, but I'll extract everything I can from the raw page content."

After Pass 2, show the updated draft with the same gap-report format. Whatever still can't be found, ask the user directly.

Step 5 β€” Conversational gap-filling (for anything still missing)

Ask questions one at a time, only for things still [NOT FOUND] after Pass 1/2. Wait for each answer. Stop as soon as you have enough to finalize. Use your judgment: if the user has already filled most gaps conversationally, skip the remaining questions and generate.

  • Q1, core value for agents (always ask): "In one or two sentences: what should an AI agent understand about what it can do or get by working with {domain}?"
  • Q2, team (ask if team not found in crawl): "I didn't find team info publicly. Want to add a Team section? It helps agents trust who's behind the business. Just names, roles, and emails if you're comfortable."
  • Q3, clients/testimonials (ask if not found): "Any existing clients or testimonials I can include? Even a couple of company names or a one-line quote builds agent trust. Totally optional."
  • Q4, API/integration (ask if api_found=false): "Is there a public API or docs page agents can reference? (skip if not applicable)"
  • Q5, pricing (ask if pricing_found=false): "What's the pricing model? Even a rough description helps, like 'per assessment' or 'monthly subscription'."
  • Q6, ICP/agent-buyers (ask if not obvious from context): "Who are the kinds of agents or automated systems most likely to want to work with you? (e.g. HR bots, recruiting pipelines)"
  • Q7, anything else (optional, ask last): "Anything else agents should know before working with you? (geographic limits, onboarding steps, etc.)"

Step 6 β€” Generate final llms.txt

Read references/llms_txt_spec.md now if you haven't already. Generate the complete llms.txt using ALL information gathered:

  • The crawled business_info JSON (and pages_raw if deep mode ran)
  • The user's answers from the conversation
  • The spec from references/llms_txt_spec.md

Generation rules:

  • Follow the spec format exactly: H1 title → blockquote summary → H2 sections → named links.
  • Every bullet = - [Title](url): description. No plain-text bullets.
  • Section order: Services → Team → Clients & Testimonials → Compliance → Reviews → For Agents → Pricing → API → Links → Optional.
  • ## Team: Always include. Use crawled/user-provided data. If none is available, omit silently.
  • ## Clients & Testimonials: Always try to include. Structure: ICP bullets first (who the business serves), then a ### subsection per named client where you have a real quote or case-study detail. Each subsection: a blockquote with a verbatim or lightly cleaned quote, plus optional Problem: and Outcome: lines. If you only have a name and a one-liner with no detail, a single bullet is fine. Never invent quotes or outcomes.
  • ## Compliance: Include if any certifications or standards (SOC 2, ISO 27001, GDPR, HIPAA, etc.) are mentioned anywhere on the site or by the user. Omit if none found.
  • ## Reviews: Include if any third-party ratings, scores, awards, or recognitions (G2, ProductHunt, Trustpilot, Gartner, Capterra, Forbes, YC, etc.) are mentioned. Omit if none found.
  • ## For Agents: ALWAYS include. If there is no API info, add the "coming soon" notice plus a contact email. Never skip.
  • ## Pricing: If unknown, link to the pricing page with no summary. If there is no pricing page, omit.
  • ## API: Document the URL only; no auth details, no secrets.
  • ## Optional: FAQs, blog, case studies, anything supplementary.
  • Do NOT invent facts. If something is unknown and the user didn't provide it, either omit it or note it clearly.
  • Keep it tight: this is for agents, not humans. No marketing fluff.

Write the final llms.txt to /tmp/samvida_llms.txt.
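Under these rules, a minimal skeleton might look like this. The business name, people, and URLs are invented placeholders for illustration only, not skill output:

```markdown
# Acme Assessments

> Acme sells automated skills assessments. Agents can browse services, check pricing, and reach the team below.

## Services

- [Assessment Catalog](https://acme.example/services): what can be ordered and how delivery works

## Team

- [Jane Doe](mailto:jane@acme.example): Founder and CEO

## For Agents

- [Contact](mailto:agents@acme.example): API coming soon; email to transact in the meantime
```

Note the spec shape throughout: one H1, a blockquote summary, H2 sections, and every bullet as a named link with a description.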

Step 7 β€” Show and confirm

Show the full llms.txt to the user in a code block, then ask:

"Here's your llms.txt 👆 Does this look right? You can:

  • Tell me what to change
  • Say 'save' to download it
  • Say 'deploy' when you're ready to push it live (Phase 2)"

Step 8 β€” Handle revisions

If the user asks for changes, make them and show the updated version. Repeat until satisfied.

  • If they say 'save': tell them the file is at /tmp/samvida_llms.txt and they can copy it to their project.
  • If they say 'deploy': proceed to Step 9.

Step 9 β€” Deploy

If an existing llms.txt was found during the crawl, warn first: "⚠️ I found an existing llms.txt at {domain}/llms.txt. Deploying will replace it. Want to see a diff first, or go ahead?" Show a simple diff if requested (old vs. new, first 20 lines each).

First, detect the platform: check the crawl data for CMS detection, or ask: "Which platform is {domain} hosted on? (Webflow / Framer / Cloudflare / other)" Then follow the relevant path below.

9a. Cloudflare Workers (any site with Cloudflare DNS)

Best for: any site whose DNS goes through Cloudflare (the orange cloud ☁️ is enabled).

"To deploy to {domain}/llms.txt via Cloudflare Workers, I need 3 things:

  1. API Token: Cloudflare dashboard → My Profile → API Tokens → Create Token → 'Edit Cloudflare Workers' template
  2. Account ID: top-right of your Cloudflare dashboard
  3. Zone ID: Cloudflare dashboard → click your domain → right sidebar under 'API'

These are only used for this deployment and never stored."

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/deploy.py \
  --provider cloudflare \
  --llms-txt /tmp/samvida_llms.txt \
  --cf-token "{token}" \
  --account-id "{account_id}" \
  --zone-id "{zone_id}" \
  --domain "{domain}"
```

9b. Webflow (fully automated)

Best for: sites hosted on Webflow (webflow.io or a custom domain via Webflow hosting).

"To deploy to Webflow, I need your Webflow Site API Token:

  • Webflow dashboard → your site → Site Settings → Integrations → API Access → Generate API Token
  • Scopes to enable: Assets (Read/Write), Sites (Read), Redirects (Read/Write), Publishing (Publish)
  • Optionally: your Site ID (visible in the Webflow dashboard URL; auto-detected if omitted)."

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/deploy.py \
  --provider webflow \
  --llms-txt /tmp/samvida_llms.txt \
  --webflow-token "{token}" \
  --domain "{domain}"
  # --site-id "{site_id}"  # optional
```

How it works: uploads llms.txt to Webflow's CDN, adds a 301 redirect from /llms.txt to the CDN URL, then publishes. Agents follow the redirect transparently.

Note: the Redirect API requires the Webflow Basic plan or above. If the user is on Starter, Samvida will output manual redirect steps.

9c. Framer (instructions-only)

Framer has no public REST API for file hosting or redirect management. No credentials needed; just run the script and relay the output.

```
~/.virtualenvs/samvida/bin/python3 \
  ~/.openclaw/workspace/samvida/scripts/deploy.py \
  --provider framer \
  --llms-txt /tmp/samvida_llms.txt \
  --domain "{domain}"
```

The script outputs three options (A/B/C) with step-by-step instructions and prints the full llms.txt content for the user to save. Relay all of it clearly to the user.

9d. CMS detected (Cloudflare Worker deployed, but the CMS takes priority)

If the output contains SAMVIDA_CMS:{name}, tell the user: "The Worker deployed successfully, but {CMS} is serving /llms.txt directly from their servers, so it takes priority over the Worker. Run the right deploy command for your platform: {paste the CMS-specific instructions from the script output}"

On any error: relay the script's human-readable error message directly, with a suggested fix.
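For context, here is a rough sketch of the two Cloudflare API calls a Workers deploy of this kind typically makes: upload a Worker script, then route {domain}/llms.txt to it. The endpoints are Cloudflare's public v4 API, but the script name and the actual internals of scripts/deploy.py are assumptions, not its source:

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def upload_worker_request(account_id: str, token: str, llms_txt: str,
                          script_name: str = "samvida-llms-txt"):
    """Build the PUT that uploads a service-worker script serving the file.
    json.dumps produces a string literal that is also valid JavaScript."""
    worker_js = (
        "addEventListener('fetch', e => e.respondWith(new Response("
        + json.dumps(llms_txt)
        + ", {headers: {'content-type': 'text/plain; charset=utf-8'}})))"
    )
    return urllib.request.Request(
        f"{API}/accounts/{account_id}/workers/scripts/{script_name}",
        data=worker_js.encode(), method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/javascript"},
    )

def route_request(zone_id: str, token: str, domain: str,
                  script_name: str = "samvida-llms-txt"):
    """Build the POST that routes {domain}/llms.txt to the Worker."""
    body = json.dumps({"pattern": f"{domain}/llms.txt",
                       "script": script_name}).encode()
    return urllib.request.Request(
        f"{API}/zones/{zone_id}/workers/routes",
        data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
```

Each builder returns an unsent urllib Request; a real deploy would pass them to urllib.request.urlopen and check the JSON "success" field in the responses.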

Notes

  • Existing llms.txt: If the crawl found one, mention it early: "I noticed you already have a llms.txt. I'll generate a fresh one; you can compare and decide which to keep."
  • Anchor-only links (e.g. /#section): Skip for Level 2 crawling; they don't load new content.
  • The For Agents section is mandatory: even if empty of details, it signals intent to support agents and provides a contact path.
  • Never ask all questions at once: it's a conversation, not a form.
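The anchor-only check above can be sketched as a small predicate; `is_anchor_only` is a hypothetical helper name, not part of the crawler's code:

```python
from urllib.parse import urldefrag, urljoin

def is_anchor_only(link: str, base: str) -> bool:
    """True when `link` resolved against `base` points at the same page and
    differs only by a #fragment, so the crawler can safely skip it."""
    target, _ = urldefrag(urljoin(base, link))
    current, _ = urldefrag(base)
    return target.rstrip("/") == current.rstrip("/")
```

So "/#pricing" on the homepage is skipped, while "/about" is crawled.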

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 Docs · 2 Config
  • SKILL.md Primary doc
  • README.md Docs
  • references/cloudflare_api.md Docs
  • references/llms_txt_spec.md Docs
  • _meta.json Config
  • package.json Config