# Send Telnyx Rag to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "telnyx-rag",
    "name": "Telnyx Rag",
    "source": "tencent",
    "type": "skill",
    "category": "效率提升",
    "sourceUrl": "https://clawhub.ai/teamtelnyx/telnyx-rag",
    "canonicalUrl": "https://clawhub.ai/teamtelnyx/telnyx-rag",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/telnyx-rag",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=telnyx-rag",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md",
      "ask.py",
      "config.json",
      "diagram.svg",
      "embed.sh",
      "search.py"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "telnyx-rag",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-05-01T11:44:36.776Z",
      "expiresAt": "2026-05-08T11:44:36.776Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=telnyx-rag",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=telnyx-rag",
        "contentDisposition": "attachment; filename=\"telnyx-rag-1.0.1.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "telnyx-rag"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/telnyx-rag"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/telnyx-rag",
    "downloadUrl": "https://openagent3.xyz/downloads/telnyx-rag",
    "agentUrl": "https://openagent3.xyz/skills/telnyx-rag/agent",
    "manifestUrl": "https://openagent3.xyz/skills/telnyx-rag/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/telnyx-rag/agent.md"
  }
}
```
## Documentation

### Telnyx RAG Memory

Semantic search and RAG-powered Q&A over your OpenClaw workspace using Telnyx's native embedding, similarity search, and inference APIs.

### Requirements

- Your own Telnyx API key — each user/agent uses their own key
- Python 3.8+ — stdlib only, no external dependencies
- Get your API key at portal.telnyx.com

### Bucket Naming Convention

Use a consistent naming scheme so anyone can adopt this:

```text
openclaw-{agent-id}
```

| Agent | Bucket |
| --- | --- |
| Chief (main) | `openclaw-main` |
| Bob the Builder | `openclaw-builder` |
| Voice agent | `openclaw-voice` |
| Your agent | `openclaw-{your-id}` |

Why?

- Predictable: anyone can find any agent's bucket
- Collision-free: scoped to agent, not person or team
- Discoverable: the `openclaw-*` prefix groups all agent buckets in the Telnyx Storage UI
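As a rough sketch, the convention can be enforced with a small helper. `agent_bucket` is a hypothetical name, not part of the package; it just lowercases an agent id and restricts it to bucket-safe characters:

```python
import re

# Hypothetical helper illustrating the openclaw-{agent-id} convention.
# Agent ids are lowercased; anything outside [a-z0-9-] becomes a hyphen.
def agent_bucket(agent_id: str) -> str:
    slug = re.sub(r"[^a-z0-9-]+", "-", agent_id.lower()).strip("-")
    if not slug:
        raise ValueError("agent id must contain letters or digits")
    return f"openclaw-{slug}"
```

For example, `agent_bucket("Bob the Builder")` yields `openclaw-bob-the-builder`, matching the table above.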

### Quick Start

```bash
cd ~/skills/telnyx-rag

# Set YOUR Telnyx API key (each user/agent uses their own)
echo 'TELNYX_API_KEY=KEY...' > .env

# Run setup with validation
./setup.sh --check    # Validate requirements first
./setup.sh            # Full setup (uses bucket from config.json)

# Search your memory
./search.py "What are my preferences?"

# Ask questions (full RAG pipeline)
./ask.py "What is the porting process?"
```

### What It Does

- Indexes your workspace files (`MEMORY.md`, `memory/*.md`, `knowledge/`, `skills/`)
- Chunks large files intelligently (markdown by headers, JSON/Slack by threads)
- Embeds content automatically using Telnyx AI
- Searches using natural-language queries with retry logic
- Answers questions using a full RAG pipeline (retrieve → rerank → generate)
- Prioritizes results from `memory/` (your primary context)
- Incremental sync — only uploads changed files
- Orphan cleanup — removes deleted files from the bucket

### Option 1: Environment Variable

```bash
export TELNYX_API_KEY="KEY..."
./setup.sh
```

### Option 2: .env File

```bash
echo 'TELNYX_API_KEY=KEY...' > .env
./setup.sh
```

### Validation Mode

```bash
./setup.sh --check    # Validate requirements without making changes
```

### Custom Bucket Name

```bash
./setup.sh my-custom-bucket
```

### Ask Questions (RAG Pipeline)

```bash
# Basic question answering
./ask.py "What is Telnyx's porting process?"

# Show retrieved context alongside the answer
./ask.py "How do I deploy?" --context

# Use a different model
./ask.py "Explain voice setup" --model meta-llama/Meta-Llama-3.1-8B-Instruct

# More/fewer context chunks
./ask.py "meeting decisions" --num 12

# JSON output for scripting
./ask.py "API usage limits" --json

# Search a different bucket
./ask.py "project timeline" --bucket work-memory
```

### Search Memory

```bash
# Basic search with improved error handling
./search.py "What are David's communication preferences?"

# Search a specific bucket
./search.py "meeting notes" --bucket my-other-bucket

# More results with timeout control
./search.py "procedures" --num 10 --timeout 45

# JSON output (for scripts)
./search.py "procedures" --json
```

### Sync Files (with Chunking)

```bash
# Incremental sync with auto-chunking
./sync.py

# Override chunk size (tokens)
./sync.py --chunk-size 600

# Quiet mode for cron jobs
./sync.py --quiet

# Remove orphaned files (including stale chunks)
./sync.py --prune

# Sync + trigger embedding
./sync.py --embed

# Check status
./sync.py --status

# List indexed files (shows chunks too)
./sync.py --list
```

### Watch Mode

```bash
# Watch for changes and auto-sync with chunking
./sync.py --watch
```

### Trigger Embedding

```bash
# Trigger embedding for the current bucket
./embed.sh
# OR
./sync.py --embed

# Check embedding status
./sync.py --embed-status <task_id>
```

**Why is this needed?** Uploading files to Telnyx Storage doesn't automatically generate embeddings. The embedding process converts your files into searchable vectors. Without this step, `search.py` and `ask.py` won't return results.

### Configuration

Edit config.json to customize behavior:

```json
{
  "bucket": "openclaw-memory",
  "region": "us-central-1",
  "workspace": ".",
  "patterns": [
    "MEMORY.md",
    "memory/*.md",
    "knowledge/*.json",
    "skills/*/SKILL.md"
  ],
  "priority_prefixes": ["memory/", "MEMORY.md"],
  "default_num_docs": 5,
  "chunk_size": 800,
  "ask_model": "meta-llama/Meta-Llama-3.1-70B-Instruct",
  "ask_num_docs": 8,
  "retrieve_num_docs": 20
}
```

### Config Fields

| Field | Default | Description |
| --- | --- | --- |
| `bucket` | `openclaw-{agent-id}` | Telnyx Storage bucket name (see naming convention) |
| `region` | `us-central-1` | Storage region |
| `workspace` | `.` | Root directory to scan for files |
| `patterns` | (see above) | Glob patterns for files to index |
| `priority_prefixes` | `["memory/", "MEMORY.md"]` | Sources to rank higher in results |
| `exclude` | `["*.tmp", ...]` | Patterns to exclude |
| `chunk_size` | `800` | Target tokens per chunk (~4 chars/token) |
| `ask_model` | `Meta-Llama-3.1-70B-Instruct` | LLM model for ask.py |
| `ask_num_docs` | `8` | Final context chunks for the LLM |
| `retrieve_num_docs` | `20` | Initial retrieval count (before reranking) |
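A minimal sketch of how such a config might be read, merging `config.json` over the documented defaults. `load_config` is an illustrative helper, not the package's actual loader; the field names match the table above:

```python
import json
from pathlib import Path

# Illustrative defaults mirroring the config reference table.
DEFAULTS = {
    "bucket": "openclaw-memory",
    "region": "us-central-1",
    "workspace": ".",
    "default_num_docs": 5,
    "chunk_size": 800,
    "ask_num_docs": 8,
    "retrieve_num_docs": 20,
}

def load_config(path: str = "config.json") -> dict:
    cfg = dict(DEFAULTS)          # start from defaults
    p = Path(path)
    if p.exists():
        cfg.update(json.loads(p.read_text()))  # user values win
    return cfg
```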

### How It Works

```text
┌─────────────────┐     ┌──────────────────────────────────┐
│  Your Workspace │     │     Telnyx Cloud                 │
│  ├── memory/    │     │                                  │
│  ├── knowledge/ │──┐  │  Storage: your-bucket/           │
│  └── skills/    │  │  │     └── file__chunk-001.md       │
└─────────────────┘  │  │     └── file__chunk-002.md       │
                     │  │              │                    │
   Smart Chunking ◀──┘  │              ▼ embed             │
   ├── Markdown: split   │     Telnyx AI Embeddings        │
   │   on ## headers     │              │                  │
   ├── JSON/Slack: split │              ▼                  │
   │   by thread/time    │     Similarity Search           │
   └── Metadata tags     │              │                  │
                         └──────────────┼──────────────────┘
                                        │
   ask.py Pipeline:                     │
   ┌─────────────────────────────────┐  │
   │ 1. Retrieve top-20 chunks ◀────┘  │
   │ 2. Rerank (TF-IDF + priority)     │
   │ 3. Deduplicate adjacent chunks    │
   │ 4. Build prompt with top-8        │
   │ 5. Call Telnyx Inference LLM      │
   │ 6. Return answer + sources        │
   └─────────────────────────────────┘
```

### Smart Chunking

Large files are automatically split into semantic chunks before upload:

### Markdown Files

- Split on `##` and `###` headers first
- If a section is still too large, split at paragraph boundaries
- Each chunk gets a metadata header with source, chunk index, and title
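One way to sketch the header-based split (illustrative only; the package's chunker also handles oversized sections and adds the metadata headers):

```python
from __future__ import annotations

import re

# Illustrative sketch: split a markdown document into sections at ## / ###
# headers, keeping each header with the text that follows it.
def split_markdown(text: str) -> list[str]:
    # Zero-width split at the start of any line beginning with "## " or "### ".
    parts = re.split(r"(?m)^(?=#{2,3} )", text)
    return [p.strip() for p in parts if p.strip()]
```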

### JSON / Slack Exports

- Messages grouped by token budget per chunk
- Extracts channel name, date range, and authors
- Metadata includes Slack-specific fields
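A token-budget grouping step might look like this sketch, using the ~4 chars/token rule of thumb from the config reference. `group_messages` is a hypothetical helper, not the package's implementation:

```python
from __future__ import annotations

# Sketch: group messages into chunks under a rough token budget.
def group_messages(messages: list[str], chunk_tokens: int = 800) -> list[list[str]]:
    budget = chunk_tokens * 4  # ~4 characters per token heuristic
    chunks, current, used = [], [], 0
    for msg in messages:
        if current and used + len(msg) > budget:
            chunks.append(current)          # flush the full chunk
            current, used = [], 0
        current.append(msg)
        used += len(msg)
    if current:
        chunks.append(current)
    return chunks
```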

### Chunk Naming

Chunks use deterministic filenames:

```text
knowledge/meetings.md  →  knowledge/meetings__chunk-001.md
                          knowledge/meetings__chunk-002.md
                          knowledge/meetings__chunk-003.md
```
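The naming scheme is easy to reproduce; here is a sketch (`chunk_name` is an illustrative helper, not part of the package):

```python
from pathlib import PurePosixPath

# Sketch of the deterministic chunk-name scheme: insert "__chunk-NNN"
# between the file stem and its extension.
def chunk_name(source: str, index: int) -> str:
    p = PurePosixPath(source)
    return str(p.with_name(f"{p.stem}__chunk-{index:03d}{p.suffix}"))
```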

### Chunk Metadata

Each chunk includes a YAML-style header:

```text
---
source: knowledge/meetings.md
chunk: 2/5
title: Q4 Planning Discussion
---

(chunk content here)
```

For Slack exports, additional fields:

```text
---
source: slack/general.json
chunk: 3/12
title: general
channel: general
date_range: 2024-01-15 to 2024-01-16
authors: alice, bob, charlie
---
```

### Chunk Lifecycle

- When a source file changes, old chunks are deleted and new ones uploaded
- Chunk mappings are tracked in `.sync-state.json`
- `--prune` cleans up orphaned chunks from deleted files

### Reranking (ask.py)

The RAG pipeline uses a multi-signal reranking strategy:

- Semantic similarity — Telnyx embedding distance (certainty score)
- Keyword overlap — TF-IDF-weighted term matching against the query
- Priority boost — chunks from `priority_prefixes` sources rank higher
- Deduplication — adjacent chunks from the same source with >80% token overlap are merged

Initial retrieval fetches `retrieve_num_docs` chunks (default 20); reranking then selects the best `ask_num_docs` (default 8) for the LLM prompt.
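A toy version of that rerank step, assuming each retrieved doc carries a `certainty` score from the similarity search. The weights and the plain keyword overlap (in place of full TF-IDF) are illustrative, not the package's actual scoring:

```python
from __future__ import annotations

# Toy rerank: combine embedding similarity ("certainty") with keyword
# overlap and a boost for priority sources. Weights are made up.
def rerank(query: str, docs: list[dict], priority_prefixes=("memory/",), top_k: int = 8) -> list[dict]:
    q_terms = set(query.lower().split())

    def score(doc: dict) -> float:
        terms = set(doc["text"].lower().split())
        overlap = len(q_terms & terms) / max(len(q_terms), 1)
        boost = 0.2 if doc["source"].startswith(tuple(priority_prefixes)) else 0.0
        return doc["certainty"] + 0.5 * overlap + boost

    return sorted(docs, key=score, reverse=True)[:top_k]
```

With equal certainty and overlap, a chunk from `memory/` outranks one from `knowledge/` because of the priority boost.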

### Smart Chunking

- Semantic splitting: headers for markdown, threads for Slack JSON
- Metadata headers: source, chunk index, and title in every chunk
- Configurable size: `--chunk-size` flag or `chunk_size` in config
- Deterministic names: reproducible chunk filenames

### RAG Q&A Pipeline (ask.py)

- End-to-end: query → retrieve → rerank → generate → answer
- Telnyx Inference: uses the Telnyx LLM API for generation
- Source references: every answer includes source file citations
- Context mode: `--context` shows retrieved chunks
- JSON output: `--json` for structured responses

### Reranking

- Multi-signal scoring: combines embedding similarity + keyword overlap + priority
- Deduplication: removes near-identical adjacent chunks
- Configurable: retrieve 20, use the best 8 (tunable)

### Incremental Sync (v1)

- File hashing: tracks SHA-256 hashes in `.sync-state.json`
- Skip unchanged: only uploads modified files
- Progress tracking: shows progress bars for large syncs
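The hash-and-skip idea can be sketched as follows (`changed_files` and the state-file layout are illustrative, not the package's exact format):

```python
from __future__ import annotations

import hashlib
import json
from pathlib import Path

# Sketch: return only the files whose SHA-256 digest differs from the
# recorded state, and update the state file with the new digests.
def changed_files(paths: list[str], state_file: str = ".sync-state.json") -> list[str]:
    sp = Path(state_file)
    state = json.loads(sp.read_text()) if sp.exists() else {}
    changed = []
    for path in paths:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if state.get(path) != digest:
            changed.append(path)       # new or modified file
            state[path] = digest
    sp.write_text(json.dumps(state, indent=2))
    return changed
```

Running it twice in a row returns the changed files once, then an empty list, which is the "skip unchanged" behavior described above.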

### Smart Cleanup

- `--prune`: removes files from the bucket that were deleted locally
- Chunk-aware: cleans up orphaned chunks too
- State tracking: maintains sync history and chunk mappings

### Improved Reliability

- Retry logic: 3 attempts with exponential backoff
- Better errors: parses Telnyx API error responses
- Timeout control: configurable request timeouts
- Quiet mode: `--quiet` flag for cron jobs
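The retry pattern described here is a standard one; a sketch with illustrative delays (the package's actual backoff schedule is not documented):

```python
import time

# Sketch: call fn up to `attempts` times, sleeping base_delay * 2^attempt
# between failures, and re-raising the last error if every attempt fails.
def with_retry(fn, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```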

### OpenClaw Integration

Add to your TOOLS.md:

````markdown
## Semantic Memory & Q&A

Ask questions about your workspace:
```bash
cd ~/skills/telnyx-rag && ./ask.py "your question"
```

Search memory semantically:
```bash
cd ~/skills/telnyx-rag && ./search.py "your query"
```
````

### Automated Sync

Add to your heartbeat or cron:

```bash
# Quiet sync with orphan cleanup
cd ~/skills/telnyx-rag && ./sync.py --quiet --prune

# Sync with embedding
cd ~/skills/telnyx-rag && ./sync.py --quiet --embed
```

### Setup Issues

**"Python version too old"**

- Requires Python 3.8+
- Check: `python3 --version`

**"API key test failed"**

- Verify the key: `echo $TELNYX_API_KEY`
- Get a new key at portal.telnyx.com

### Sync Issues

**"Bucket not found"**

```bash
./sync.py --create-bucket
```

**"No results found"**

- Wait 1-2 minutes after sync (embeddings take time)
- Check that files uploaded: `./sync.py --list`
- Trigger embedding: `./sync.py --embed`

**"Files not syncing"**

- Check `.sync-state.json` for corruption
- Force a re-sync: `rm .sync-state.json && ./sync.py`

### Ask Issues

**"LLM generation failed"**

- Check that the API key has inference permissions
- Try a different model: `./ask.py "query" --model meta-llama/Meta-Llama-3.1-8B-Instruct`

**"No relevant documents found"**

- Ensure files are synced and embedded
- Try broader query terms

### From Python

```python
from ask import ask
from search import search_memory

# Ask a question (full RAG pipeline)
answer = ask("What is the deployment process?")
print(answer)

# With options
answer = ask(
    "project timeline",
    num_final=5,
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    show_context=True,
    output_json=True,
)
print(answer)

# Basic search
results = search_memory("What do I know about X?", num_docs=5)
print(results)
```

### From Bash

```bash
# Ask and capture the answer
answer=$(./ask.py "What are the API limits?" --json)

# Search and capture JSON
results=$(./search.py "query" --json)
```

### Performance Tips

- Tune `chunk_size` — smaller chunks (400-600) for precise retrieval, larger (800-1200) for more context
- Use `--quiet` for cron jobs to reduce output
- Run `--prune` periodically to clean up deleted files
- Watch mode is great for development: `./sync.py --watch`
- Batch embedding by syncing first, then embedding: `./sync.py && ./sync.py --embed`

### Credits

Built for OpenClaw using Telnyx Storage and AI APIs.
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: teamtelnyx
- Version: 1.0.1
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-05-01T11:44:36.776Z
- Expires at: 2026-05-08T11:44:36.776Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/telnyx-rag)
- [Send to Agent page](https://openagent3.xyz/skills/telnyx-rag/agent)
- [JSON manifest](https://openagent3.xyz/skills/telnyx-rag/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/telnyx-rag/agent.md)
- [Download page](https://openagent3.xyz/downloads/telnyx-rag)