
RAG Eval

Evaluate your RAG pipeline quality using Ragas metrics (faithfulness, answer relevancy, context precision). PREREQUISITE: You must have a RAG system integrated already; this skill scores that pipeline's output, it does not provide retrieval itself.

Install for OpenClaw

Quick setup
  1. Download the package from Yavira.
  2. Extract the archive and review SKILL.md first.
  3. Import or place the package into your OpenClaw setup.

Requirements

  • Target platform: OpenClaw
  • Install method: Manual import
  • Extraction: Extract archive
  • Prerequisites: OpenClaw
  • Primary doc: SKILL.md

Package facts

  • Download mode: Yavira redirect
  • Package format: ZIP package
  • Source platform: Tencent SkillHub
  • What's included: CHANGELOG.md, PRD.md, README.md, SKILL.md, scripts/batch_eval.py, scripts/run_eval.py

Validation

  • Use the Yavira download entry.
  • Review SKILL.md after the package is downloaded.
  • Confirm the extracted package contains the expected setup assets.

Install with your agent

Agent handoff

Hand the extracted package to your coding agent with a concrete install brief, rather than working through the install steps manually.

  1. Download the package from Yavira.
  2. Extract it into a folder your agent can access.
  3. Paste one of the prompts below and point your agent at the extracted folder.
New install

I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.

Upgrade existing

I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.

Trust & source

Release facts

  • Source: Tencent SkillHub
  • Verification: Indexed source record
  • Version: 1.2.1

Documentation

Primary doc: SKILL.md (10 sections)

RAG Eval β€” Quality Testing for Your RAG Pipeline

Test and monitor your RAG pipeline's output quality.

1. Ask OpenClaw (Recommended)

Tell OpenClaw: "Install the rag-eval skill." The agent will handle the installation and configuration automatically.

2. Manual Installation (CLI)

If you prefer the terminal, run:

clawhub install rag-eval

⚠️ Prerequisites

Your OpenClaw must have a RAG system (vector DB + retrieval pipeline). This skill evaluates the output quality of that pipeline; it does not provide RAG functionality itself. At least one LLM API key is also required, because Ragas uses an LLM as judge internally. Set one of:

  • OPENAI_API_KEY (default, uses GPT-4o)
  • ANTHROPIC_API_KEY (uses Claude Haiku)
  • RAGAS_LLM=ollama/llama3 (for local/offline evaluation)
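A minimal sketch of how a wrapper might pick the judge from these variables. The variable names come from this doc; the resolution order and the model identifier strings are assumptions, not the script's actual logic:

import os

judge = os.environ.get("RAGAS_LLM")  # e.g. "ollama/llama3" for offline eval
if judge is None and os.environ.get("OPENAI_API_KEY"):
    judge = "gpt-4o"        # doc: the default judge
elif judge is None and os.environ.get("ANTHROPIC_API_KEY"):
    judge = "claude-haiku"  # doc: the Anthropic option
if judge is None:
    raise SystemExit("Set OPENAI_API_KEY, ANTHROPIC_API_KEY, or RAGAS_LLM first")
print(f"Using judge model: {judge}")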

Setup (first run only)

bash scripts/setup.sh

This installs ragas, datasets, and other dependencies.
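An optional post-setup sanity check, as a sketch; it assumes both packages expose __version__, which current releases of ragas and datasets do:

import ragas
import datasets

# Fails with ImportError if setup.sh did not install the dependencies.
print("ragas", ragas.__version__, "| datasets", datasets.__version__)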

Single Response Evaluation

When the user asks to evaluate an answer, collect:

  • question: the original user question
  • answer: the LLM output to evaluate
  • contexts: the list of text chunks used to generate the answer (the retrieved docs)

⚠️ SECURITY: Never interpolate user content directly into shell commands. Write the input to a temp JSON file first, then pipe it to the evaluator:

# Step 1: Write the input to a temp file (the agent should use the write/edit tool, NOT echo).
# Write this JSON to /tmp/rag-eval-input.json using the file write tool:
# {"question": "...", "answer": "...", "contexts": ["chunk1", "chunk2"]}

# Step 2: Pipe the file to the evaluator
python3 scripts/run_eval.py < /tmp/rag-eval-input.json

# Step 3: Clean up
rm -f /tmp/rag-eval-input.json

Alternatively, use --input-file:

python3 scripts/run_eval.py --input-file /tmp/rag-eval-input.json

Output JSON:

{
  "faithfulness": 0.92,
  "answer_relevancy": 0.87,
  "context_precision": 0.79,
  "overall_score": 0.86,
  "verdict": "PASS",
  "flags": []
}

Post the results to the user with a human-readable summary:

🧪 Eval Results
  • Faithfulness: 0.92 ✅ (no hallucination detected)
  • Answer Relevancy: 0.87 ✅
  • Context Precision: 0.79 ⚠️ (some irrelevant context retrieved)
  • Overall: 0.86 — PASS

Save results to memory/eval-results/YYYY-MM-DD.jsonl.
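The same safe pattern from a Python caller, as a minimal sketch: serialize the payload with json and invoke the evaluator with an argument list, so user content never touches a shell string. The --input-file flag and the output fields are from this doc; the sample payload is made up:

import json
import os
import subprocess
import tempfile

payload = {
    "question": "What is the refund window?",
    "answer": "Refunds are issued within 30 days of purchase.",
    "contexts": ["Our policy allows refunds within 30 days of purchase."],
}

fd, path = tempfile.mkstemp(suffix=".json")
try:
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)  # escaping handled by json, not the shell
    # Argument list, no shell=True: nothing here is shell-interpolated.
    proc = subprocess.run(
        ["python3", "scripts/run_eval.py", "--input-file", path],
        capture_output=True, text=True, check=True,
    )
    scores = json.loads(proc.stdout)
    print(scores["verdict"], scores["overall_score"])
finally:
    os.remove(path)  # Step 3: clean up the temp file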

Batch Evaluation

For a JSONL dataset file (each line: {"question": ..., "answer": ..., "contexts": [...]}), run:

python3 scripts/batch_eval.py --input references/sample_dataset.jsonl --output memory/eval-results/batch-YYYY-MM-DD.json
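A sketch of assembling a batch input file in the documented JSONL shape, one {"question", "answer", "contexts"} object per line; the rows here are placeholders:

import json

rows = [
    {"question": "What is the refund window?",
     "answer": "30 days.",
     "contexts": ["Refunds are allowed within 30 days of purchase."]},
    {"question": "Do you ship internationally?",
     "answer": "Yes, to most countries.",
     "contexts": ["We ship to over 100 countries."]},
]

# One JSON object per line, as batch_eval.py expects.
with open("references/sample_dataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")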

Score Interpretation

Score       Verdict      Meaning
0.85+       ✅ PASS      Production-ready quality
0.70-0.84   ⚠️ REVIEW    Needs improvement
< 0.70      ❌ FAIL      Significant quality issues
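The same thresholds as a tiny helper, for scripting against the output JSON; a sketch only, since the skill's own scripts may bucket scores differently:

def verdict(overall_score: float) -> str:
    if overall_score >= 0.85:
        return "PASS"    # production-ready quality
    if overall_score >= 0.70:
        return "REVIEW"  # needs improvement
    return "FAIL"        # significant quality issues

assert verdict(0.86) == "PASS"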

Faithfulness Deep-Dive

If faithfulness < 0.80, run:

python3 scripts/run_eval.py --explain --metric faithfulness

This outputs which sentences in the answer are NOT supported by the context.

Notes

  • Ragas uses an LLM internally as judge (your configured OpenAI/Anthropic key).
  • Evaluation costs roughly $0.01-0.05 per response, depending on length.
  • For offline use, set RAGAS_LLM=ollama/llama3 in the environment.
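A back-of-envelope budget check before a large batch run, using the documented per-response range; the batch size is a made-up example:

# $0.01-0.05 per response is the range stated in the notes above.
n = 500
low, high = 0.01, 0.05
print(f"Estimated judge cost for {n} responses: ${n * low:.2f}-${n * high:.2f}")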

Category context

Agent frameworks, memory systems, reasoning layers, and model-native orchestration.

Source: Tencent SkillHub

Largest current source with strong distribution and engagement signals.

Package contents

Included in package
4 docs, 2 scripts
  • SKILL.md Primary doc
  • CHANGELOG.md Docs
  • PRD.md Docs
  • README.md Docs
  • scripts/batch_eval.py Scripts
  • scripts/run_eval.py Scripts