Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Build, integrate, debug, and secure MCP servers and clients in any language, enabling AI agents to call external tools via Model Context Protocol.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Build, integrate, secure, and scale MCP servers and clients. From first server to production multi-tool architecture.
- Building an MCP server (any language)
- Integrating MCP tools into an AI agent
- Debugging MCP connection/auth issues
- Designing multi-server architectures
- Securing MCP endpoints for production
- Evaluating which MCP servers to use
Model Context Protocol = a standardized way for AI agents to call external tools. Think of it as "USB for AI" — one protocol, any tool.
```
Agent (Client) ←→ MCP Transport ←→ MCP Server ←→ External Service
                   (stdio/HTTP)     (your code)    (API, DB, file system)
```
| Concept | What It Does | Example |
|---------|--------------|---------|
| Server | Exposes tools, resources, prompts | A server wrapping the GitHub API |
| Client | Discovers and calls server capabilities | OpenClaw, Claude Desktop, Cursor |
| Tool | A callable function with typed params | `create_issue(title, body, labels)` |
| Resource | Read-only data the agent can access | `file://workspace/config.json` |
| Prompt | Reusable prompt templates | `summarize_pr(pr_url)` |
| Transport | How client↔server communicate | stdio (local) or HTTP+SSE (remote) |
| Factor | stdio | HTTP/SSE | Streamable HTTP |
|--------|-------|----------|-----------------|
| Setup complexity | Low | Medium | Medium |
| Multi-client | No | Yes | Yes |
| Remote access | No | Yes | Yes |
| Streaming | Via stdio | SSE | Native |
| Auth needed | No (local) | Yes | Yes |
| Best for | Local dev, single agent | Production, shared | Modern production |

Rule: Start with stdio for development. Move to HTTP for production or multi-agent.
```yaml
server_name: "[service]-mcp"
description: "[What this server does in one sentence]"
transport: stdio | http
tools:
  - name: "[verb_noun]"
    description: "[What it does — be specific for LLM tool selection]"
    params:
      - name: "[param]"
        type: "string | number | boolean | object | array"
        required: true | false
        description: "[What this param controls]"
    returns: "[What the tool returns]"
    error_cases:
      - "[When/how it fails]"
resources:
  - uri: "[protocol://path]"
    description: "[What data this exposes]"
external_dependencies:
  - "[API/service this wraps]"
auth_required: true | false
auth_method: "api_key | oauth2 | none"
```
```typescript
// server.ts — minimal MCP server
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-service",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "get_item",                              // tool name (verb_noun)
  "Fetch an item by ID",                   // description (LLM reads this)
  { id: z.string().describe("Item ID") },  // params with descriptions
  async ({ id }) => {
    try {
      const result = await fetchItem(id);  // your implementation
      return {
        content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
      };
    } catch (error) {
      // `error` is `unknown` in TS catch clauses; narrow before reading .message
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text", text: `Error: ${message}` }],
        isError: true,
      };
    }
  }
);

// Define a resource
server.resource("config", "config://app", async (uri) => ({
  contents: [
    { uri: uri.href, mimeType: "application/json", text: JSON.stringify(config) },
  ],
}));

// Start
const transport = new StdioServerTransport();
await server.connect(transport);
```
```python
# server.py — minimal MCP server
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import json

server = Server("my-service")

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="get_item",
            description="Fetch an item by ID",
            inputSchema={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Item ID"}
                },
                "required": ["id"],
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_item":
        result = await fetch_item(arguments["id"])  # your implementation
        return [TextContent(type="text", text=json.dumps(result, indent=2))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
- **Verb-noun naming:** `create_issue`, `search_docs`, `update_config` — never `issue` or `doStuff`
- **Descriptions are critical:** the LLM picks tools based on descriptions. Be specific. Include when NOT to use.
- **Granular over god-tools:** `search_issues` + `get_issue` + `create_issue` beats `manage_issues`
- **Return structured data:** JSON over prose. Let the LLM format for the user.
- **Error messages for LLMs:** include what went wrong AND what to try next
- **Idempotent where possible:** `create_or_update` > `create` (prevents duplicates from retries)
- **Limit output size:** paginate or truncate. A 10 MB response kills the context window.
- **Include examples in descriptions:** "Search issues. Example: search_issues(query='bug label:critical')"
- Says what the tool DOES (not just the name restated)
- Mentions when to use vs. when NOT to use
- Each param has a description with format hints
- Return format is documented
- Edge cases mentioned (empty results, not found, etc.)
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";

const app = express();
app.use(express.json());

const server = new McpServer({ name: "my-service", version: "1.0.0" });
// ... register tools ...

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh transport per request, no session tracking
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001, () => console.log("MCP server on :3001"));
```
**API Key (simplest)**

```typescript
// Express middleware: accept a key via X-API-Key or Authorization: Bearer
function authMiddleware(req, res, next) {
  const key =
    req.headers["x-api-key"] ||
    req.headers.authorization?.replace("Bearer ", "");
  if (!key || !validKeys.has(key)) {
    return res.status(401).json({ error: "Invalid API key" });
  }
  req.userId = keyToUser.get(key);
  next();
}
```

**OAuth 2.0 (for user-scoped access)**

```
# MCP OAuth flow
1. Client requests tool → server returns 401 with auth URL
2. User completes OAuth in browser → gets access token
3. Client stores token, includes it in subsequent requests
4. Server validates token, calls external API on user's behalf
```
- Rate limiting per client/key
- Request validation (schema check before execution)
- Structured logging (request ID, tool name, latency, status)
- Health check endpoint (`/health`)
- Graceful shutdown (finish in-flight requests)
- Timeout on external calls (don't let tools hang forever)
- Output size limits (truncate large responses)
- Error categorization (4xx client vs. 5xx server)
- CORS if browser clients connect
- TLS in production (always HTTPS)
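Two of the checklist items — timeouts on external calls and output size limits — can live in one small wrapper around any tool handler. A minimal sketch; the names (`withGuards`, `MAX_CHARS`, `TIMEOUT_MS`) and the specific limits are illustrative, not part of the MCP SDK:

```typescript
const MAX_CHARS = 50_000;   // cap tool output so it cannot flood the context window
const TIMEOUT_MS = 30_000;  // fail the call instead of letting it hang forever

function truncate(text: string, max: number = MAX_CHARS): string {
  return text.length <= max
    ? text
    : text.slice(0, max) + `\n…[truncated ${text.length - max} chars]`;
}

// Race the real work against a timer, then serialize and cap the result.
async function withGuards<T>(
  work: () => Promise<T>,
  timeoutMs: number = TIMEOUT_MS
): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
  });
  try {
    const result = await Promise.race([work(), timeout]);
    return truncate(JSON.stringify(result, null, 2));
  } finally {
    if (timer) clearTimeout(timer);
  }
}
```

Inside a tool handler you would then call `await withGuards(() => externalApi.fetch(params))` rather than the external API directly.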
```yaml
# In openclaw config — stdio server
mcpServers:
  my-service:
    command: "node"
    args: ["path/to/server.js"]
    env:
      API_KEY: "{{env.MY_SERVICE_API_KEY}}"

# HTTP server
mcpServers:
  my-service:
    url: "https://mcp.myservice.com/mcp"
    headers:
      Authorization: "Bearer {{env.MY_SERVICE_TOKEN}}"
```
```json
{
  "mcpServers": {
    "my-service": {
      "command": "node",
      "args": ["/path/to/server.js"],
      "env": { "API_KEY": "your-key" }
    }
  }
}
```
When multiple MCP servers are connected, the agent sees ALL tools. Help the agent pick correctly:

- **Unique tool names:** prefix if needed (`github_search` vs. `jira_search`)
- **Clear descriptions:** disambiguate similar tools across servers
- **Don't overload:** 20-30 tools max across all servers. Beyond that, agents get confused.
```
Agent ──┬── github-mcp   (code: create_pr, search_code, list_issues)
        ├── slack-mcp    (comms: send_message, search_messages)
        ├── postgres-mcp (data: query, list_tables)
        └── internal-mcp (business: get_customer, update_pipeline)
```

Principle: One server per domain. Don't build a mega-server.
```
      /    E2E    \      Agent actually uses the tool
    / Integration  \     Tool calls real API (sandbox)
  /      Unit       \    Business logic without MCP layer
```
```typescript
// Test the tool handler directly, no MCP transport
describe("get_item", () => {
  it("returns item when found", async () => {
    mockDb.findById.mockResolvedValue({ id: "123", name: "Test" });
    const result = await getItemHandler({ id: "123" });
    expect(result.content[0].text).toContain("Test");
  });

  it("returns error for missing item", async () => {
    mockDb.findById.mockResolvedValue(null);
    const result = await getItemHandler({ id: "missing" });
    expect(result.isError).toBe(true);
  });

  it("handles API timeout gracefully", async () => {
    mockDb.findById.mockRejectedValue(new Error("timeout"));
    const result = await getItemHandler({ id: "123" });
    expect(result.isError).toBe(true);
    expect(result.content[0].text).toContain("try again");
  });
});
```
```shell
# Use the MCP Inspector to manually test
npx @modelcontextprotocol/inspector node server.js

# Or use mcporter for CLI testing
mcporter call my-service.get_item id=123
mcporter list my-service --schema   # verify tool schemas
```
- Happy path returns expected format
- Missing required params returns clear error
- Invalid param types return clear error
- Not-found cases handled (don't throw, return error content)
- Rate limit / quota exceeded handled
- Auth failure handled (expired token, invalid key)
- Large response truncated appropriately
- Timeout handled (external API slow)
- Concurrent calls don't interfere
Wrap an existing REST/GraphQL API as MCP tools.

External API → MCP Server → Agent

Key decisions:
- Map 1 API endpoint → 1 MCP tool (usually)
- Simplify params (the agent doesn't need every API option)
- Aggregate related calls (e.g., get user + get user's repos = 1 tool)
- Cache where safe (reduce API calls)
Database → MCP Server → Agent

Safety rules:
- Read-only by default. Write tools require explicit opt-in.
- Parameterized queries only. NEVER interpolate agent input into SQL.
- Row limit on all queries (the agent can ask for more if needed).
- Schema as a resource (let the agent discover tables/columns).
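The first three safety rules can be sketched as guard-rails in front of whatever database driver you use. This is illustrative only: `ALLOWED_TABLES`, `MAX_ROWS`, and `buildSelect` are hypothetical names, and the `$1` placeholder assumes a Postgres-style driver that accepts bind parameters separately:

```typescript
const ALLOWED_TABLES = new Set(["users", "orders", "events"]); // example allowlist
const MAX_ROWS = 500;

// Clamp whatever row count the agent asks for.
function clampLimit(requested?: number): number {
  if (!requested || requested < 1) return MAX_ROWS;
  return Math.min(requested, MAX_ROWS);
}

// Build a parameterized statement: the table and column are validated
// identifiers, and the value travels as a bind parameter, never inside the SQL.
function buildSelect(
  table: string,
  column: string,
  value: string,
  limit?: number
) {
  if (!ALLOWED_TABLES.has(table)) throw new Error(`Table not allowed: ${table}`);
  if (!/^[a-z_][a-z0-9_]*$/i.test(column)) throw new Error(`Bad column name: ${column}`);
  return {
    sql: `SELECT * FROM ${table} WHERE ${column} = $1 LIMIT ${clampLimit(limit)}`,
    params: [value], // bound by the driver, not interpolated
  };
}
```

Even a hostile value like `"1; DROP TABLE users"` stays inert here, because it only ever appears in `params`, never in the SQL string.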
File System → MCP Server → Agent

Safety rules:
- Sandbox to specific directories. Never allow `../` traversal.
- Read-only by default. Writes require an allowlist.
- Size limits on reads. Don't send 1 GB files through MCP.
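The sandbox rule comes down to one check: resolve the requested path against the sandbox root first, then verify the result is still inside it. A minimal sketch; `SANDBOX_ROOT` and `resolveSafe` are example names, and the root path is an assumption:

```typescript
import * as path from "node:path";

const SANDBOX_ROOT = path.resolve("/srv/mcp-workspace"); // example sandbox root

// Resolve a user-supplied path and refuse anything that escapes the sandbox.
function resolveSafe(requested: string): string {
  // path.resolve collapses ".." segments, so the check below sees the real target.
  const resolved = path.resolve(SANDBOX_ROOT, requested);
  if (resolved !== SANDBOX_ROOT && !resolved.startsWith(SANDBOX_ROOT + path.sep)) {
    throw new Error(`Path escapes sandbox: ${requested}`);
  }
  return resolved;
}
```

Checking the resolved path (not the raw input) is the important part: a naive blocklist on the literal string `../` misses encoded or absolute-path variants, while prefix-checking the resolved result does not.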
Some tools need to orchestrate multiple steps:

```typescript
server.tool(
  "deploy_service",
  "Build, test, and deploy a service",
  {
    service: z.string(),
    environment: z.enum(["staging", "production"]),
  },
  async ({ service, environment }) => {
    // Step 1: Build
    const buildResult = await build(service);
    if (!buildResult.success) return error(`Build failed: ${buildResult.error}`);

    // Step 2: Test
    const testResult = await runTests(service);
    if (!testResult.success) return error(`Tests failed: ${testResult.summary}`);

    // Step 3: Deploy (only if build + tests pass)
    if (environment === "production") {
      // Extra safety: require confirmation before production deploys
      return {
        content: [{
          type: "text",
          text: `Ready to deploy ${service} to production. ` +
                `Tests: ${testResult.passed}/${testResult.total} passed. ` +
                `Call confirm_deploy to proceed.`,
        }],
      };
    }

    const deployResult = await deploy(service, environment);
    return success(`Deployed ${service} to ${environment}: ${deployResult.url}`);
  }
);
```
Combine multiple data sources into unified tools:

GitHub + Jira + PagerDuty → DevOps MCP Server → Agent

One `get_service_status` tool that queries all three and returns a unified view.
| Threat | Risk | Mitigation |
|--------|------|------------|
| Prompt injection via tool output | Agent executes malicious instructions in API response | Sanitize output, strip HTML/scripts |
| Excessive permissions | Tool has write access it shouldn't | Principle of least privilege per tool |
| Data exfiltration | Agent sends sensitive data to wrong tool | Tool allowlists, audit logging |
| Denial of service | Agent calls tool in infinite loop | Rate limiting, circuit breakers |
| Credential leakage | API keys in tool responses | Strip sensitive fields from output |
| SSRF | Agent provides URL that hits internal network | URL allowlisting, no private IPs |
- Every tool has minimum required permissions
- Write operations require explicit confirmation or are behind feature flags
- API keys/secrets NEVER appear in tool responses
- Output sanitized (no HTML, no executable content)
- Rate limits per tool AND per client
- Audit log: who called what tool, when, with what params
- Input validation before any external call
- URL parameters validated against allowlist (prevent SSRF)
- Timeout on every external call (max 30s default)
- Circuit breaker: disable tool if error rate > 50% for 5 min
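The circuit-breaker item in the checklist above can be implemented with a few lines of bookkeeping per tool. A minimal sketch, assuming the 50%/5-minute numbers from the checklist; the class name and the `minCalls` guard are illustrative:

```typescript
// Track recent call outcomes per tool and trip when the error rate is too high.
class CircuitBreaker {
  private results: { ok: boolean; at: number }[] = [];

  constructor(
    private windowMs = 5 * 60_000, // look at the last 5 minutes
    private threshold = 0.5,       // trip at >50% errors
    private minCalls = 10          // need a sample before tripping
  ) {}

  record(ok: boolean, now = Date.now()): void {
    this.results.push({ ok, at: now });
    // Drop results that have aged out of the window.
    this.results = this.results.filter((r) => now - r.at <= this.windowMs);
  }

  // When open, the server should refuse calls to this tool with a clear error.
  isOpen(now = Date.now()): boolean {
    const recent = this.results.filter((r) => now - r.at <= this.windowMs);
    if (recent.length < this.minCalls) return false;
    const errors = recent.filter((r) => !r.ok).length;
    return errors / recent.length > this.threshold;
  }
}
```

A tool handler would call `record(true/false)` after each attempt and check `isOpen()` before doing work, returning an error like "tool temporarily disabled, do NOT retry" while the breaker is open.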
- ❌ `server.tool("execute_sql", ..., async ({ query }) => db.raw(query))`
- ❌ `server.tool("run_command", ..., async ({ cmd }) => exec(cmd))`
- ❌ `server.tool("fetch_url", ..., async ({ url }) => fetch(url))` — SSRF
- ❌ `server.tool("write_file", ..., async ({ path, content }) => fs.writeFile(path, content))`
- ✅ Parameterized queries with allowlisted tables
- ✅ Predefined commands with argument validation
- ✅ URL allowlist + no private IP ranges
- ✅ Write to a specific directory + filename validation
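The "URL allowlist + no private IP ranges" alternative can be sketched as a single validation function in front of `fetch`. Illustrative only: `ALLOWED_HOSTS` is an example allowlist, and a production version would also resolve DNS before connecting (a hostname can point at a private IP):

```typescript
import * as net from "node:net";

const ALLOWED_HOSTS = new Set(["api.github.com", "api.example.com"]); // example

// Literal private/loopback/link-local IPv4 addresses (incl. cloud metadata).
function isPrivateIPv4(host: string): boolean {
  if (!net.isIPv4(host)) return false;
  const [a, b] = host.split(".").map(Number);
  return (
    a === 10 || a === 127 ||             // 10/8, loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16/12
    (a === 192 && b === 168) ||          // 192.168/16
    (a === 169 && b === 254)             // 169.254/16 link-local / metadata
  );
}

// Validate before fetching; throw with a reason the agent can act on.
function checkUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "https:") throw new Error("HTTPS only");
  if (isPrivateIPv4(url.hostname) || url.hostname === "localhost") {
    throw new Error(`Blocked host: ${url.hostname}`);
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`Host not on allowlist: ${url.hostname}`);
  }
  return url;
}
```

A `fetch_url`-style tool would then call `fetch(checkUrl(userUrl))` instead of passing the raw string through.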
| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| Tool not appearing in agent | Schema error / server not connected | Check `mcporter list` or client logs |
| "Connection refused" | Server not running or wrong port | Verify process, check port |
| Tool times out | External API slow or hanging | Add timeout, check API health |
| "Invalid params" | Schema mismatch between client/server | Verify schema with `--schema` flag |
| Agent picks wrong tool | Ambiguous descriptions | Rewrite descriptions, add "Use this when..." |
| Agent calls tool in loop | Tool returning confusing error | Return clearer error with "do NOT retry" |
| Large response crashes | No output truncation | Add pagination or character limit |
| Auth errors intermittent | Token expiry | Implement token refresh |
1. **Verify the server starts:** `node server.js` — does it start without errors?
2. **List tools:** `mcporter list my-server --schema` — are all tools registered?
3. **Call directly:** `mcporter call my-server.tool_name param=value` — does it return the expected output?
4. **Check client config:** is the server path/URL correct? Are env vars set?
5. **Read client logs:** most clients log MCP connection errors.
6. **Test with the Inspector:** `npx @modelcontextprotocol/inspector` for interactive debugging.
```typescript
server.tool("my_tool", description, schema, async (params) => {
  const requestId = crypto.randomUUID().slice(0, 8);
  console.error(`[${requestId}] my_tool called:`, JSON.stringify(params));
  const start = Date.now();
  try {
    const result = await doWork(params);
    console.error(`[${requestId}] my_tool success: ${Date.now() - start}ms`);
    return success(result);
  } catch (error) {
    console.error(`[${requestId}] my_tool error: ${error.message} (${Date.now() - start}ms)`);
    return errorResponse(error.message);
  }
});
```

Note: Use `console.error` for logs in stdio transport (stdout is reserved for the MCP protocol).
Score 0-5 per dimension:

| Dimension | What to Check |
|-----------|---------------|
| Maintained | Last commit < 3 months? Issues addressed? Version > 1.0? |
| Secure | No raw SQL/exec? Auth implemented? Input validated? |
| Well-typed | Full JSON Schema for all tools? Descriptions useful? |
| Tested | Has tests? CI passing? |
| Documented | Setup instructions? Tool descriptions? Examples? |
| Lightweight | Minimal dependencies? Fast startup? |

Score < 15/30: build your own. 15-24: use with caution. 25+: good to use.
| Category | Use Case | Examples |
|----------|----------|----------|
| Code | GitHub, GitLab, code search | github-mcp, gitlab-mcp |
| Data | PostgreSQL, SQLite, Snowflake | postgres-mcp, sqlite-mcp |
| Comms | Slack, Discord, email | slack-mcp, gmail-mcp |
| Docs | Notion, Confluence, Google Docs | notion-mcp, gdocs-mcp |
| DevOps | AWS, GCP, Kubernetes, Terraform | aws-mcp, k8s-mcp |
| Search | Brave, Google, vector stores | brave-search, rag-mcp |
| Files | Local FS, S3, Google Drive | filesystem-mcp, s3-mcp |
| CRM | HubSpot, Salesforce | hubspot-mcp, sfdc-mcp |
```
Agent ──┬── github-mcp
        ├── slack-mcp
        ├── postgres-mcp
        └── custom-mcp
```

Best for: most use cases. Simple, effective.
```
Agent ── MCP Gateway ──┬── server-1
                       ├── server-2
                       └── server-3
```

Gateway handles: auth, rate limiting, logging, routing.
Best for: enterprise, multi-tenant, compliance requirements.
```
Orchestrator Agent
├── Code Agent  (github-mcp, gitlab-mcp)
├── Data Agent  (postgres-mcp, analytics-mcp)
└── Comms Agent (slack-mcp, email-mcp)
```

Best for: complex workflows, specialized agents.
| Total Tools | Recommendation |
|-------------|----------------|
| 1-10 | Great. The agent handles this well. |
| 10-20 | Good. Ensure distinct descriptions. |
| 20-30 | Caution. Group by server, review descriptions. |
| 30-50 | Risk. Consider the agent-per-domain pattern. |
| 50+ | Dangerous. The agent WILL pick wrong tools. Split or use a gateway. |
```
my-mcp-server/
├── src/
│   ├── server.ts            # MCP server entry
│   ├── tools/               # Tool handlers
│   │   ├── search.ts
│   │   └── create.ts
│   ├── auth.ts              # Auth middleware
│   └── config.ts            # Configuration
├── tests/
│   ├── tools.test.ts
│   └── integration.test.ts
├── package.json
├── tsconfig.json
├── README.md                # Setup + tool docs
└── LICENSE
```
```markdown
# [Service] MCP Server

[One sentence: what this enables]

## Quick Start
[3 steps max to get running]

## Tools
| Tool | Description | Params |
|------|-------------|--------|
[Table of all tools]

## Configuration
[Env vars, auth setup]

## Examples
[2-3 real usage examples with agent conversation]
```
```json
{
  "name": "@myorg/service-mcp",
  "version": "1.0.0",
  "bin": { "service-mcp": "./dist/server.js" },
  "files": ["dist"],
  "keywords": ["mcp", "model-context-protocol", "ai-tools"]
}
```

```shell
npm publish
```
| Dimension | Weight | What to Score |
|-----------|--------|---------------|
| Tool design | 20% | Names, descriptions, granularity, params |
| Security | 20% | Auth, input validation, output sanitization, least privilege |
| Reliability | 15% | Error handling, timeouts, circuit breakers |
| Testing | 15% | Unit + integration coverage, edge cases |
| Documentation | 10% | Setup, tool docs, examples |
| Performance | 10% | Response time, output size, caching |
| Maintainability | 10% | Code structure, types, logging |

Score 0-40: not production ready. 40-70: usable with caveats. 70-90: solid. 90+: excellent.
| Mistake | Fix |
|---------|-----|
| God-tool that does everything | Split into focused tools |
| Vague tool descriptions | Write descriptions as if explaining to a new hire |
| No error handling | Wrap every external call in try/catch |
| Returning raw API responses | Shape output for agent consumption |
| No rate limiting | Add per-tool and per-client limits |
| Ignoring output size | Paginate or truncate responses |
| Hardcoded credentials | Use env vars or a secret manager |
| No logging | You can't debug what you can't see |
| Testing only the happy path | Test errors, timeouts, edge cases |
| Building before checking | Search for an existing MCP server first |
- "Build an MCP server for [service]" → use Phase 2 templates
- "Add a tool to my MCP server" → follow tool design rules
- "Secure my MCP server" → Phase 7 checklist
- "Debug MCP connection issue" → Phase 8 workflow
- "Evaluate this MCP server" → Phase 9 scoring
- "Design multi-server architecture" → Phase 10 patterns
- "Publish my MCP server" → Phase 11 structure
- "Convert REST API to MCP" → Phase 6 Pattern 1
- "Add auth to my MCP server" → Phase 3 auth patterns
- "Test my MCP server" → Phase 5 checklist
- "How many tools is too many?" → Phase 10 tool count table
- "Review my tool descriptions" → Phase 2 quality checklist
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.