Requirements
- Target platform
- OpenClaw
- Install method
- Manual import
- Extraction
- Extract archive
- Prerequisites
- OpenClaw
- Primary doc
- SKILL.md
Manage n8n workflows and automations via API. Use when working with n8n workflows, executions, or automation tasks - listing workflows, activating/deactivating, checking execution status, manually triggering workflows, or debugging automation issues.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Comprehensive workflow automation management for n8n platform with creation, testing, execution monitoring, and performance optimization capabilities.
When creating n8n workflows, ALWAYS:

- Generate COMPLETE workflows with all functional nodes
- Include actual HTTP Request nodes for API calls (ImageFX, Gemini, Veo, Suno, etc.)
- Add Code nodes for data transformation and logic
- Create proper connections between all nodes
- Use real node types (n8n-nodes-base.httpRequest, n8n-nodes-base.code, n8n-nodes-base.set)

NEVER:

- Create "Setup Instructions" placeholder nodes
- Generate workflows with only TODO comments
- Make incomplete workflows requiring manual node addition
- Use text-only nodes as substitutes for real functionality

Example GOOD workflow: Manual Trigger → Set Config → HTTP Request (API call) → Code (parse) → Response

Example BAD workflow: Manual Trigger → Code ("Add HTTP nodes here, configure APIs...")

Always build the complete, functional workflow with all necessary nodes configured and connected.
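A complete workflow of the GOOD shape above can be sketched as a payload for `client.create_workflow` from `scripts/n8n_api.py`. The node `parameters` fields and the API URL are illustrative assumptions, not a definitive schema; check n8n's node documentation for exact field names.

```python
# Sketch of a complete, functional workflow payload (the GOOD shape above).
# Node `parameters` values are illustrative; the endpoint URL is hypothetical.
workflow = {
    "name": "Fetch And Parse",
    "nodes": [
        {"name": "Manual Trigger", "type": "n8n-nodes-base.manualTrigger",
         "typeVersion": 1, "position": [0, 0], "parameters": {}},
        {"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",
         "typeVersion": 4, "position": [200, 0],
         "parameters": {"url": "https://api.example.com/items", "method": "GET"}},
        {"name": "Parse", "type": "n8n-nodes-base.code",
         "typeVersion": 2, "position": [400, 0],
         "parameters": {"jsCode": "return items.map(i => ({json: i.json}));"}},
    ],
    # n8n connection map: source node -> list of output branches -> targets
    "connections": {
        "Manual Trigger": {"main": [[{"node": "HTTP Request", "type": "main", "index": 0}]]},
        "HTTP Request": {"main": [[{"node": "Parse", "type": "main", "index": 0}]]},
    },
}
# Every node is real and connected; no placeholder or TODO nodes.
# new_workflow = client.create_workflow(workflow)
```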
Required environment variables:

- N8N_API_KEY: Your n8n API key (Settings → API in the n8n UI)
- N8N_BASE_URL: Your n8n instance URL

Configure credentials via OpenClaw settings. Add to ~/.config/openclaw/settings.json:

```json
{
  "skills": {
    "n8n": {
      "env": {
        "N8N_API_KEY": "your-api-key-here",
        "N8N_BASE_URL": "your-n8n-url-here"
      }
    }
  }
}
```

Or set per-session (do not persist secrets in shell rc files):

```bash
export N8N_API_KEY="your-api-key-here"
export N8N_BASE_URL="your-n8n-url-here"
```

Verify connection:

```bash
python3 scripts/n8n_api.py list-workflows --pretty
```

Security note: Never store API keys in plaintext shell config files (~/.bashrc, ~/.zshrc). Use the OpenClaw settings file or a secure secret manager.
List Workflows:

```bash
python3 scripts/n8n_api.py list-workflows --pretty
python3 scripts/n8n_api.py list-workflows --active true --pretty
```

Get Workflow Details:

```bash
python3 scripts/n8n_api.py get-workflow --id <workflow-id> --pretty
```

Create Workflows:

```bash
# From JSON file
python3 scripts/n8n_api.py create --from-file workflow.json
```

Activate/Deactivate:

```bash
python3 scripts/n8n_api.py activate --id <workflow-id>
python3 scripts/n8n_api.py deactivate --id <workflow-id>
```
Validate Workflow Structure:

```bash
# Validate existing workflow
python3 scripts/n8n_tester.py validate --id <workflow-id>

# Validate from file
python3 scripts/n8n_tester.py validate --file workflow.json --pretty

# Generate validation report
python3 scripts/n8n_tester.py report --id <workflow-id>
```

Dry Run Testing:

```bash
# Test with data
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data '{"email": "test@example.com"}'

# Test with data file
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test-data.json

# Full test report (validation + dry run)
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test.json --report
```

Test Suite:

```bash
# Run multiple test cases
python3 scripts/n8n_tester.py test-suite --id <workflow-id> --test-suite test-cases.json
```
List Executions:

```bash
# Recent executions (all workflows)
python3 scripts/n8n_api.py list-executions --limit 10 --pretty

# Specific workflow executions
python3 scripts/n8n_api.py list-executions --id <workflow-id> --limit 20 --pretty
```

Get Execution Details:

```bash
python3 scripts/n8n_api.py get-execution --id <execution-id> --pretty
```

Manual Execution:

```bash
# Trigger workflow
python3 scripts/n8n_api.py execute --id <workflow-id>

# Execute with data
python3 scripts/n8n_api.py execute --id <workflow-id> --data '{"key": "value"}'
```
Analyze Performance:

```bash
# Full performance analysis
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --pretty

# Analyze specific period
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --days 30 --pretty
```

Get Optimization Suggestions:

```bash
# Priority-ranked suggestions
python3 scripts/n8n_optimizer.py suggest --id <workflow-id> --pretty
```

Generate Optimization Report:

```bash
# Human-readable report with metrics, bottlenecks, and suggestions
python3 scripts/n8n_optimizer.py report --id <workflow-id>
```

Get Workflow Statistics:

```bash
# Execution statistics
python3 scripts/n8n_api.py stats --id <workflow-id> --days 7 --pretty
```
```python
from scripts.n8n_api import N8nClient

client = N8nClient()

# List workflows
workflows = client.list_workflows(active=True)

# Get workflow
workflow = client.get_workflow('workflow-id')

# Create workflow
new_workflow = client.create_workflow({
    'name': 'My Workflow',
    'nodes': [...],
    'connections': {...}
})

# Activate/deactivate
client.activate_workflow('workflow-id')
client.deactivate_workflow('workflow-id')

# Executions
executions = client.list_executions(workflow_id='workflow-id', limit=10)
execution = client.get_execution('execution-id')

# Execute workflow
result = client.execute_workflow('workflow-id', data={'key': 'value'})
```
```python
from scripts.n8n_api import N8nClient
from scripts.n8n_tester import WorkflowTester

client = N8nClient()
tester = WorkflowTester(client)

# Validate workflow
validation = tester.validate_workflow(workflow_id='123')
print(f"Valid: {validation['valid']}")
print(f"Errors: {validation['errors']}")
print(f"Warnings: {validation['warnings']}")

# Dry run
result = tester.dry_run(
    workflow_id='123',
    test_data={'email': 'test@example.com'}
)
print(f"Status: {result['status']}")

# Test suite
test_cases = [
    {'name': 'Test 1', 'input': {...}, 'expected': {...}},
    {'name': 'Test 2', 'input': {...}, 'expected': {...}}
]
results = tester.test_suite('123', test_cases)
print(f"Passed: {results['passed']}/{results['total_tests']}")

# Generate report
report = tester.generate_test_report(validation, result)
print(report)
```
```python
from scripts.n8n_optimizer import WorkflowOptimizer

optimizer = WorkflowOptimizer()

# Analyze performance
analysis = optimizer.analyze_performance('workflow-id', days=7)
print(f"Performance Score: {analysis['performance_score']}/100")
print(f"Health: {analysis['execution_metrics']['health']}")

# Get suggestions
suggestions = optimizer.suggest_optimizations('workflow-id')
print(f"Priority Actions: {len(suggestions['priority_actions'])}")
print(f"Quick Wins: {len(suggestions['quick_wins'])}")

# Generate report
report = optimizer.generate_optimization_report(analysis)
print(report)
```
```bash
# Validate workflow structure
python3 scripts/n8n_tester.py validate --id <workflow-id> --pretty

# Test with sample data
python3 scripts/n8n_tester.py dry-run --id <workflow-id> \
  --data '{"email": "test@example.com", "name": "Test User"}'

# If tests pass, activate
python3 scripts/n8n_api.py activate --id <workflow-id>
```
```bash
# Check recent executions
python3 scripts/n8n_api.py list-executions --id <workflow-id> --limit 10 --pretty

# Get specific execution details
python3 scripts/n8n_api.py get-execution --id <execution-id> --pretty

# Validate workflow structure
python3 scripts/n8n_tester.py validate --id <workflow-id>

# Generate test report
python3 scripts/n8n_tester.py report --id <workflow-id>

# Check for optimization issues
python3 scripts/n8n_optimizer.py report --id <workflow-id>
```
```bash
# Analyze current performance
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --days 30 --pretty

# Get actionable suggestions
python3 scripts/n8n_optimizer.py suggest --id <workflow-id> --pretty

# Generate comprehensive report
python3 scripts/n8n_optimizer.py report --id <workflow-id>

# Review execution statistics
python3 scripts/n8n_api.py stats --id <workflow-id> --days 30 --pretty

# Test optimizations with dry run
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test-data.json
```
```bash
# Check active workflows
python3 scripts/n8n_api.py list-workflows --active true --pretty

# Review recent execution status
python3 scripts/n8n_api.py list-executions --limit 20 --pretty

# Get statistics for each critical workflow
python3 scripts/n8n_api.py stats --id <workflow-id> --pretty

# Generate health reports
python3 scripts/n8n_optimizer.py report --id <workflow-id>
```
The testing module performs comprehensive validation:
- Required fields present (nodes, connections)
- All nodes have names and types
- Connection targets exist
- No disconnected nodes (warning)
- Nodes requiring credentials are configured
- Required parameters are set
- HTTP nodes have URLs
- Webhook nodes have paths
- Email nodes have content
- Workflow has trigger nodes
- Proper execution flow
- No circular dependencies
- End nodes identified
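The "no circular dependencies" check can be sketched as a depth-first search over n8n's connection map. This is a hypothetical helper, not the actual `scripts/n8n_tester.py` implementation; it assumes the connection format shown in the API examples (`{source: {"main": [[{"node": target, ...}]]}}`).

```python
# Hypothetical cycle check over an n8n-style connections dict.
def has_cycle(connections):
    # Flatten the nested branch structure into a plain adjacency list.
    graph = {
        src: [link["node"] for branch in outs.get("main", []) for link in branch]
        for src, outs in connections.items()
    }
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a circular dependency
        visiting.add(node)
        if any(visit(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in graph)
```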
The optimizer analyzes multiple dimensions:
- Total executions
- Success/failure rates
- Health status (excellent/good/fair/poor)
- Error patterns
- Node count and complexity
- Connection patterns
- Expensive operations (API calls, database queries)
- Parallel execution opportunities
- Sequential expensive operations
- High failure rates
- Missing error handling
- Rate limit issues
- Parallel Execution: Identify nodes that can run concurrently
- Caching: Suggest caching for repeated API calls
- Batch Processing: Recommend batching for large datasets
- Error Handling: Add error recovery mechanisms
- Complexity Reduction: Split complex workflows
- Timeout Settings: Configure execution limits
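As an illustration of the caching suggestion, here is a minimal TTL-cache sketch: identical calls within `ttl` seconds reuse the stored result instead of repeating the API request. `fetch_rates` is a hypothetical stand-in for whatever expensive call a workflow or script repeats; it is not part of this skill.

```python
import time
from functools import wraps

def ttl_cache(ttl=60):
    """Cache a function's results by positional args for `ttl` seconds."""
    def deco(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            hit = store.get(args)
            if hit is not None and time.monotonic() - hit[1] < ttl:
                return hit[0]  # fresh enough: skip the expensive call
            value = fn(*args)
            store[args] = (value, time.monotonic())
            return value
        return wrapper
    return deco

@ttl_cache(ttl=300)
def fetch_rates(currency):
    # Stand-in for an expensive API call.
    return {"currency": currency, "fetched_at": time.monotonic()}
```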
Workflows receive a performance score (0-100) based on:

- Success Rate: Higher is better (50% weight)
- Complexity: Lower is better (30% weight)
- Bottlenecks: Fewer is better (critical: -20, high: -10, medium: -5)
- Optimizations: Implemented best practices (+5 each)

Score interpretation:

- 90-100: Excellent - Well-optimized
- 70-89: Good - Minor improvements possible
- 50-69: Fair - Optimization recommended
- 0-49: Poor - Significant issues
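An illustrative reconstruction of this rubric; the real formula lives in `scripts/n8n_optimizer.py` and may differ in detail (the 0-1 normalization of inputs here is an assumption).

```python
# Sketch of the scoring rubric above (assumed input scales, not the
# actual n8n_optimizer implementation).
def performance_score(success_rate, complexity, bottlenecks, optimizations):
    """success_rate: 0-1; complexity: 0-1 (1 = most complex);
    bottlenecks: list of severity strings; optimizations: count applied."""
    score = success_rate * 50              # 50% weight
    score += (1 - complexity) * 30         # 30% weight, lower is better
    penalty = {"critical": 20, "high": 10, "medium": 5}
    score -= sum(penalty.get(b, 0) for b in bottlenecks)
    score += 5 * optimizations             # +5 per implemented best practice
    return max(0, min(100, round(score)))  # clamp to 0-100
```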
- Plan Structure: Design workflow nodes and connections before building
- Validate First: Always validate before deployment
- Test Thoroughly: Use dry-run with multiple test cases
- Error Handling: Add error nodes for reliability
- Documentation: Comment complex logic in Code nodes
- Sample Data: Create realistic test data files
- Edge Cases: Test boundary conditions and errors
- Incremental: Test each node addition
- Regression: Retest after changes
- Production-like: Use a staging environment that mirrors production
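Mirroring the name/input/expected fields from the Python test-suite example, a realistic test-cases file might look like the sketch below. The exact schema the test-suite command accepts is defined by `scripts/n8n_tester.py`, so treat these field names as assumptions.

```json
[
  {
    "name": "valid signup email",
    "input": {"email": "test@example.com", "name": "Test User"},
    "expected": {"status": "success"}
  },
  {
    "name": "missing email (edge case)",
    "input": {"name": "No Email"},
    "expected": {"status": "error"}
  }
]
```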
- Inactive First: Deploy workflows in inactive state
- Gradual Rollout: Test with limited traffic initially
- Monitor Closely: Watch first executions carefully
- Quick Rollback: Be ready to deactivate if issues arise
- Document Changes: Keep a changelog of modifications
- Baseline Metrics: Capture performance before changes
- One Change at a Time: Isolate optimization impacts
- Measure Results: Compare before/after metrics
- Regular Reviews: Schedule monthly optimization reviews
- Cost Awareness: Monitor API usage and execution costs
- Health Checks: Weekly execution statistics review
- Error Analysis: Investigate failure patterns
- Performance Monitoring: Track execution times
- Credential Rotation: Update credentials regularly
- Cleanup: Archive or delete unused workflows
Error: N8N_API_KEY not found in environment

Solution: Set the environment variable:

```bash
export N8N_API_KEY="your-api-key"
```
Error: HTTP 401: Unauthorized

Solutions:
- Verify the API key is correct
- Check that N8N_BASE_URL is set correctly
- Confirm API access is enabled in n8n
Validation failed: Node missing 'name' field

Solution: Check the workflow JSON structure and ensure all required fields are present.
Status: timeout - Execution did not complete

Solutions:
- Check the workflow for infinite loops
- Reduce dataset size for testing
- Optimize expensive operations
- Set an execution timeout in workflow settings
Error: HTTP 429: Too Many Requests

Solutions:
- Add Wait nodes between API calls
- Implement exponential backoff
- Use batch processing
- Check API rate limits
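A minimal exponential-backoff sketch for retrying after 429 responses. `RateLimitError` and `call_api` are hypothetical stand-ins for however your request wrapper signals a 429; this is a pattern illustration, not part of the skill's scripts.

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical: raised by a request wrapper on HTTP 429."""

def with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry call_api() on rate limits, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return call_api()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # Exponential delay (1s, 2s, 4s, ...) plus jitter to avoid
            # synchronized retries across workers.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay * 0.5))
```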
Warning: Node 'HTTP_Request' may require credentials

Solutions:
- Configure credentials in the n8n UI
- Assign credentials to the node
- Test the connection before activating
```
~/clawd/skills/n8n/
├── SKILL.md               # This file
├── scripts/
│   ├── n8n_api.py         # Core API client (extended)
│   ├── n8n_tester.py      # Testing & validation
│   └── n8n_optimizer.py   # Performance optimization
└── references/
    └── api.md             # n8n API reference
```
For detailed n8n REST API documentation, see references/api.md or visit: https://docs.n8n.io/api/
Documentation:

- n8n Official Docs: https://docs.n8n.io
- n8n Community Forum: https://community.n8n.io
- n8n API Reference: https://docs.n8n.io/api/

Debugging:

- Use validation: `python3 scripts/n8n_tester.py validate --id <workflow-id>`
- Check execution logs: `python3 scripts/n8n_api.py get-execution --id <execution-id>`
- Review optimization report: `python3 scripts/n8n_optimizer.py report --id <workflow-id>`
- Test with dry-run: `python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test.json`
Code helpers, APIs, CLIs, browser automation, testing, and developer operations.
Largest current source with strong distribution and engagement signals.