Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
AI governance and safety layer for OpenClaw agents. Protects against unsafe actions, redacts sensitive data, and generates compliance audit trails.
Hand the extracted package to your coding agent with a concrete install brief rather than working through the steps by hand.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
OpenClaw is powerful. Tork makes it safe. Enterprise-grade security and governance layer for OpenClaw agents. Detect PII, enforce policies, generate compliance receipts, control tool access, and scan skills for vulnerabilities before installation.
```sh
npm install @torknetwork/guardian
```
```ts
import { TorkGuardian } from '@torknetwork/guardian';

const guardian = new TorkGuardian({
  apiKey: process.env.TORK_API_KEY!,
});

// Govern an LLM request before sending
const result = await guardian.governLLM({
  messages: [
    { role: 'user', content: 'Email john@example.com about the project' },
  ],
});
// PII is redacted: "Email [EMAIL_REDACTED] about the project"

// Check if a tool call is allowed
const decision = guardian.governTool({
  name: 'shell_execute',
  args: { command: 'rm -rf /' },
});
// { allowed: false, reason: 'Blocked shell command pattern: "rm -rf"' }
```
Tork Guardian governs all network activity (port binds, outbound connections, and DNS lookups) with SSRF prevention, reverse shell detection, and per-skill rate limiting.
```ts
const guardian = new TorkGuardian({
  apiKey: process.env.TORK_API_KEY!,
  networkPolicy: 'default',
});

const network = guardian.getNetworkHandler();

// Validate a port bind
const bind = network.validatePortBind('my-skill', 3000, 'tcp');
// { allowed: true, reason: 'Port 3000/tcp bound' }

// Validate an outbound connection
const egress = network.validateEgress('my-skill', 'api.openai.com', 443);
// { allowed: true, reason: 'Egress to api.openai.com:443 allowed' }

// Validate a DNS lookup (flags raw IPs)
const dns = network.validateDNS('my-skill', 'api.openai.com');
// { allowed: true, reason: 'DNS lookup for api.openai.com allowed' }

// Get the full activity log for compliance
const log = network.getActivityLog();

// Get a network report with anomaly detection
const report = network.getMonitor().getNetworkReport();
```
```ts
import { validatePortBind, validateEgress, validateDNS } from '@torknetwork/guardian';

const config = { apiKey: 'tork_...', networkPolicy: 'strict' as const };

validatePortBind(config, 'my-skill', 3000, 'tcp');
validateEgress(config, 'my-skill', 'api.openai.com', 443);
validateDNS(config, 'my-skill', 'api.openai.com');
```
```ts
// Default: balanced for dev & production
const guardian = new TorkGuardian({
  apiKey: 'tork_...',
  networkPolicy: 'default',
});
```

```ts
// Strict: enterprise lockdown (443 only, explicit domain allowlist)
const guardian = new TorkGuardian({
  apiKey: 'tork_...',
  networkPolicy: 'strict',
});
```

```ts
// Custom: override any setting
const guardian = new TorkGuardian({
  apiKey: 'tork_...',
  networkPolicy: 'custom',
  allowedOutboundPorts: [443, 8443],
  allowedDomains: ['api.myservice.com'],
  maxConnectionsPerMinute: 30,
});
```

See docs/NETWORK-SECURITY.md for full details on threat coverage, policy comparison, and compliance receipts.
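To make the policy differences concrete, here is a toy model of how the egress rules compose: explicit blocks win, then the port allowlist, then the domain allowlist. The `NetPolicy` type and `allowEgress` function are illustrative sketches, not the library's internals; the strict values (port 443 only, explicit domains) mirror the description above.

```ts
// Illustrative model of egress checking (NOT the library's real code).
type NetPolicy = {
  allowedOutboundPorts: number[];
  allowedDomains: string[]; // empty = any domain
  blockedDomains: string[];
};

// Hypothetical strict policy: TLS only, explicit domain allowlist.
const STRICT: NetPolicy = {
  allowedOutboundPorts: [443],
  allowedDomains: ['api.openai.com'],
  blockedDomains: [],
};

function allowEgress(
  p: NetPolicy,
  domain: string,
  port: number,
): { allowed: boolean; reason: string } {
  // Explicit blocks always win
  if (p.blockedDomains.includes(domain)) {
    return { allowed: false, reason: `Domain ${domain} is blocked` };
  }
  // Then the port allowlist
  if (!p.allowedOutboundPorts.includes(port)) {
    return { allowed: false, reason: `Port ${port} not in outbound allowlist` };
  }
  // A non-empty domain allowlist restricts egress to those domains
  if (p.allowedDomains.length > 0 && !p.allowedDomains.includes(domain)) {
    return { allowed: false, reason: `Domain ${domain} not in allowlist` };
  }
  return { allowed: true, reason: `Egress to ${domain}:${port} allowed` };
}
```

Under this model, `allowEgress(STRICT, 'api.openai.com', 443)` passes, while any other domain or port is refused.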
Pre-built configurations for common environments:

```ts
import {
  MINIMAL_CONFIG,
  DEVELOPMENT_CONFIG,
  PRODUCTION_CONFIG,
  ENTERPRISE_CONFIG,
} from '@torknetwork/guardian';
```

| Config | Policy | Network | Description |
|---|---|---|---|
| MINIMAL_CONFIG | standard | default | Just an API key, all defaults |
| DEVELOPMENT_CONFIG | minimal | default | Permissive policies, full logging |
| PRODUCTION_CONFIG | standard | default | Blocked exfil domains (pastebin, ngrok, burp) |
| ENTERPRISE_CONFIG | strict | strict | Explicit domain allowlist, 20 conn/min, TLS only |

```ts
import { TorkGuardian, PRODUCTION_CONFIG } from '@torknetwork/guardian';

const guardian = new TorkGuardian({
  ...PRODUCTION_CONFIG,
  apiKey: process.env.TORK_API_KEY!,
});
```
```ts
const guardian = new TorkGuardian({
  // Required
  apiKey: 'tork_...',

  // Optional
  baseUrl: 'https://www.tork.network', // API endpoint
  policy: 'standard',                  // 'strict' | 'standard' | 'minimal'
  redactPII: true,                     // Enable PII redaction

  // Shell command governance
  blockShellCommands: [
    'rm -rf',
    'mkfs',
    'dd if=',
    'chmod 777',
    'shutdown',
    'reboot',
  ],

  // File access control
  allowedPaths: [], // Empty = allow all (except blocked)
  blockedPaths: [
    '.env',
    '.env.local',
    '~/.ssh',
    '~/.aws',
    'credentials.json',
  ],

  // Network governance
  networkPolicy: 'default',           // 'default' | 'strict' | 'custom'
  allowedInboundPorts: [3000, 8080],  // Ports skills may bind to
  allowedOutboundPorts: [443],        // Ports for outbound connections
  allowedDomains: ['api.openai.com'], // If non-empty, only these domains are allowed
  blockedDomains: ['evil.com'],       // Domains always blocked
  maxConnectionsPerMinute: 60,        // Per-skill egress rate limit
});
```
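The file-access rule above ("empty `allowedPaths` = allow all, except blocked") can be sketched as a small predicate. This is an illustrative model with an assumed matching scheme (exact or suffix match for blocks, prefix match for the allowlist), not the library's actual implementation:

```ts
// Toy model of the allowedPaths/blockedPaths semantics (NOT the
// library's real code): blocked paths always win; an empty allowlist
// permits everything else.
function isPathAllowed(
  path: string,
  allowedPaths: string[],
  blockedPaths: string[],
): boolean {
  // Blocked paths win, whether matched exactly or as a trailing segment
  if (blockedPaths.some((b) => path === b || path.endsWith('/' + b))) {
    return false;
  }
  // Empty allowlist = allow all (except blocked)
  if (allowedPaths.length === 0) {
    return true;
  }
  // Otherwise only paths under an allowed prefix pass
  return allowedPaths.some((a) => path.startsWith(a));
}
```

For example, `isPathAllowed('.env', [], ['.env'])` is false even with an empty allowlist, because blocks take precedence.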
| Policy | PII | Shell | Files | Network |
|---|---|---|---|---|
| strict | Deny on detection | Block all | Whitelist only | Block all |
| standard | Redact | Block dangerous | Block sensitive | Allow |
| minimal | Redact | Allow all | Allow all | Allow all |
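A rough sketch of the PII column: `strict` denies a request outright when PII is found, while `standard` and `minimal` redact it. The function name, the toy email-only matcher, and the return shape below are assumptions for illustration, not the library's API:

```ts
// Illustrative model of PII handling per policy (NOT the library's
// real detector, which covers far more than email addresses).
type Policy = 'strict' | 'standard' | 'minimal';

function handlePII(policy: Policy, text: string): { allowed: boolean; text: string } {
  const emailPattern = /[\w.+-]+@[\w-]+\.[\w.]+/g; // toy matcher: emails only
  const hasPII = emailPattern.test(text);
  emailPattern.lastIndex = 0; // reset state after .test on a /g regex

  if (policy === 'strict' && hasPII) {
    return { allowed: false, text }; // strict: deny on detection
  }
  // standard/minimal: redact and let the request through
  return { allowed: true, text: text.replace(emailPattern, '[EMAIL_REDACTED]') };
}
```

Under this sketch, the quickstart's example prompt would be rejected under `strict` but pass through as `"Email [EMAIL_REDACTED] about the project"` under `standard`.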
```ts
import { redactPII, generateReceipt, governToolCall } from '@torknetwork/guardian';

// Redact PII from text
const result = await redactPII('tork_...', 'Call 555-123-4567');

// Generate a compliance receipt
const receipt = await generateReceipt('tork_...', 'Processed user data');

// Check a tool call against policy
const decision = governToolCall(
  { name: 'file_write', args: { path: '.env' } },
  { policy: 'standard', blockedPaths: ['.env'] }
);
```
Scan any OpenClaw skill for vulnerabilities before installing it. The scanner checks for 14 security patterns across code and network categories.
```sh
# Scan a skill directory
npx tork-scan ./my-skill

# Full details for every finding
npx tork-scan ./my-skill --verbose

# JSON output for CI/CD
npx tork-scan ./my-skill --json

# Fail on any high or critical finding
npx tork-scan ./my-skill --strict
```
```ts
import { SkillScanner } from '@torknetwork/guardian';

const scanner = new SkillScanner();
const report = await scanner.scanSkill('./my-skill');

console.log(`Risk: ${report.riskScore}/100`);
console.log(`Verdict: ${report.verdict}`); // 'verified' | 'reviewed' | 'flagged'
```

See docs/SCANNER.md for the full rule reference, severity weights, and example output.
Skills that pass the security scanner receive a Tork Verified badge:

| Badge | Score | Meaning |
|---|---|---|
| Tork Verified (green) | 0-29 | Safe to install |
| Tork Reviewed (yellow) | 30-49 | Manual review recommended |
| Tork Flagged (red) | 50-100 | Security risks detected |

```ts
import { SkillScanner, generateBadge, generateBadgeMarkdown } from '@torknetwork/guardian';

const scanner = new SkillScanner();
const report = await scanner.scanSkill('./my-skill');
const badge = generateBadge(report);

// Add to your README
console.log(generateBadgeMarkdown(badge));
```
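The score bands in the table above imply a simple threshold mapping. The function below is an illustrative restatement of those bands (the thresholds come from the table; the function name is ours, not the library's):

```ts
// Maps a 0-100 risk score to the verdict bands from the badge table.
type Verdict = 'verified' | 'reviewed' | 'flagged';

function verdictForScore(score: number): Verdict {
  if (score <= 29) return 'verified'; // green: safe to install
  if (score <= 49) return 'reviewed'; // yellow: manual review recommended
  return 'flagged';                   // red: security risks detected
}
```

This is handy in CI when consuming `tork-scan --json` output, e.g. failing the build for anything that is not `'verified'`.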
Sign up at tork.network to get your API key.
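A common pattern is to keep the key in an environment variable and fail fast when it is missing, rather than scattering the literal `'tork_...'` through your codebase. The helper below is a small sketch (the function name is ours; `TORK_API_KEY` matches the variable used in the examples above):

```ts
// Fail fast when the API key is missing instead of passing undefined
// into the client. Pass process.env (or any env map) at the call site.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env['TORK_API_KEY'];
  if (!key) {
    throw new Error('TORK_API_KEY is not set; sign up at tork.network to get a key');
  }
  return key;
}
```

Then `new TorkGuardian({ apiKey: requireApiKey(process.env) })` gives a clear startup error instead of relying on the non-null assertion (`!`) used in the quickstart.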