Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Comprehensive legacy system modernization from assessment and strategy to monolith decomposition and cloud migration for any tech stack and scale.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Complete methodology for assessing, planning, and executing legacy system modernization – from monolith decomposition to cloud migration. Works for any tech stack, any scale.
```yaml
system_name: "[Name]"
age_years: 0
primary_language: ""
framework: ""
database: ""
deployment: "on-prem | VM | container | serverless"
lines_of_code: 0
team_size: 0
monthly_users: 0
annual_revenue_supported: "$0"
compliance_requirements: []
known_pain_points: []
business_driver: "cost | speed | talent | risk | compliance | scale"
timeline_pressure: "low | medium | high | critical"
budget_range: "$0-$0"
sponsor: ""
```
Score each dimension 1-5 (1 = critical, 5 = healthy):

| Dimension | Score | Evidence |
|---|---|---|
| Code quality – test coverage, complexity, duplication | | |
| Architecture – coupling, modularity, clear boundaries | | |
| Infrastructure – deployment automation, monitoring, scaling | | |
| Dependencies – outdated libraries, EOL frameworks, security vulns | | |
| Data – schema quality, migration history, backup/recovery | | |
| Documentation – accuracy, coverage, onboarding effectiveness | | |
| Operations – deployment frequency, MTTR, incident rate | | |
| Security – auth patterns, encryption, audit trail, compliance gaps | | |
| Developer experience – build time, local setup, debugging tools | | |
| Business logic clarity – documented rules, test coverage of logic | | |

Total: /50

- 40-50: Healthy – incremental improvement
- 30-39: Aging – targeted modernization
- 20-29: Legacy – systematic modernization needed
- 10-19: Critical – modernize or replace
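The band cutoffs above translate directly into a small helper if you want to automate the inventory; this is a minimal sketch, and the function name and short band labels are our own, not part of the methodology:

```python
def health_band(total: int) -> str:
    """Map a technical-debt inventory total (out of 50) to its band."""
    if not 10 <= total <= 50:
        raise ValueError("10 dimensions scored 1-5 give totals of 10-50")
    if total >= 40:
        return "Healthy"
    if total >= 30:
        return "Aging"
    if total >= 20:
        return "Legacy"
    return "Critical"

print(health_band(34))  # → Aging
```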
For each major dependency:

```yaml
dependency: ""
current_version: ""
latest_version: ""
eol_date: ""          # End of life
security_vulns: 0     # Known CVEs
upgrade_difficulty: "trivial | moderate | hard | rewrite"
business_risk: "low | medium | high | critical"
alternatives: []
```

Priority rules:
- EOL within 12 months → P0
- Known unpatched CVEs → P0
- 3+ major versions behind → P1
- No active maintainer → P1
- Everything else → P2
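The priority rules are an ordered check, most severe first. A sketch of how they might be encoded for an audit script; the function signature and parameter names are assumptions, only the rules themselves come from the playbook:

```python
from datetime import date
from typing import Optional

def dependency_priority(today: date,
                        eol_date: Optional[date],
                        unpatched_cves: int,
                        major_versions_behind: int,
                        has_active_maintainer: bool) -> str:
    """Apply the audit's priority rules in order of severity."""
    if eol_date is not None and (eol_date - today).days <= 365:
        return "P0"  # EOL within 12 months
    if unpatched_cves > 0:
        return "P0"  # known unpatched CVEs
    if major_versions_behind >= 3:
        return "P1"
    if not has_active_maintainer:
        return "P1"
    return "P2"

print(dependency_priority(date(2025, 1, 1), None, 0, 4, True))  # → P1
```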
| Strategy | When to Use | Risk | Cost | Speed | Disruption |
|---|---|---|---|---|---|
| Rehost (lift & shift) | Datacenter exit, minimal change | Low | Low | Fast | Low |
| Replatform (lift & optimize) | Cloud benefits without rewrite | Low-Med | Medium | Medium | Low-Med |
| Refactor (restructure) | Good code, bad architecture | Medium | Medium | Medium | Medium |
| Re-architect (rebuild patterns) | Monolith→services, new patterns | High | High | Slow | High |
| Rebuild (rewrite) | Small system, clear requirements | Very High | Very High | Very Slow | Very High |
| Replace (buy/SaaS) | Commodity functionality | Medium | Variable | Fast | High |
| Retire | No longer needed | None | Negative | Instant | Low |
| Retain (do nothing) | Working fine, other priorities | None | Ongoing | N/A | None |
```
Is the system still needed?
├─ No → RETIRE
└─ Yes → Is it a commodity (CRM, email, etc.)?
   ├─ Yes → REPLACE (buy SaaS)
   └─ No → Is the code maintainable?
      ├─ Yes → Is the architecture the problem?
      │  ├─ Yes → RE-ARCHITECT (strangler fig)
      │  └─ No → Is the infrastructure the problem?
      │     ├─ Yes → REPLATFORM
      │     └─ No → REFACTOR incrementally
      └─ No → Is the system small (<50K LOC)?
         ├─ Yes → Can requirements be clearly defined?
         │  ├─ Yes → REBUILD
         │  └─ No → REFACTOR + RE-ARCHITECT
         └─ No → STRANGLER FIG (never big-bang rewrite)
```
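The decision tree reduces to a short function once each question is answered yes/no. A sketch under that simplification; real assessments rarely give clean booleans, so treat this as a way to sanity-check a recommendation, not produce one:

```python
def choose_strategy(needed: bool, commodity: bool, maintainable: bool,
                    architecture_problem: bool, infrastructure_problem: bool,
                    small_system: bool, clear_requirements: bool) -> str:
    """Walk the strategy decision tree with simplified yes/no answers."""
    if not needed:
        return "RETIRE"
    if commodity:
        return "REPLACE"
    if maintainable:
        if architecture_problem:
            return "RE-ARCHITECT"
        return "REPLATFORM" if infrastructure_problem else "REFACTOR"
    if small_system:
        return "REBUILD" if clear_requirements else "REFACTOR + RE-ARCHITECT"
    return "STRANGLER FIG"

# Unmaintainable large system → never a big-bang rewrite
print(choose_strategy(True, False, False, False, False, False, False))
```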
NEVER do a full rewrite of a large system. It fails 70%+ of the time because:
- The old system keeps getting features – moving target
- Hidden business rules only exist in code – they get lost
- Timeline always runs 2-3x longer than estimated
- Two systems to maintain during transition
- Team burns out before completion

Always use Strangler Fig instead. Replace piece by piece.
1. Identify a boundary – a feature, page, or API endpoint
2. Build the replacement – new stack, new patterns
3. Route traffic – proxy/facade sends requests to new or old
4. Verify parity – same behavior, same data
5. Cut over – remove the proxy, retire the old code
6. Repeat – next boundary
```yaml
facade_name: "[API Gateway / Reverse Proxy / BFF]"
routing_rules:
  - path: "/api/users/*"
    target: "new-service"
    status: "migrated"
    migrated_date: "2025-01-15"
  - path: "/api/orders/*"
    target: "legacy"
    status: "planned"
    target_date: "2025-Q2"
  - path: "/api/reports/*"
    target: "legacy"
    status: "not-planned"
    notes: "Low priority, rarely used"
```
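The facade's routing logic is just first-match path dispatch with a legacy default. A minimal sketch using glob-style patterns; the `ROUTES` table mirrors the example spec above and is illustrative, not a real gateway config:

```python
import fnmatch

# Hypothetical routing table mirroring the facade spec above
ROUTES = [
    ("/api/users/*", "new-service"),
    ("/api/orders/*", "legacy"),
    ("/api/reports/*", "legacy"),
]

def route(path: str, default: str = "legacy") -> str:
    """First matching rule wins; unmatched traffic stays on the legacy side."""
    for pattern, target in ROUTES:
        if fnmatch.fnmatch(path, pattern):
            return target
    return default

print(route("/api/users/42"))  # → new-service
```

Defaulting unmatched paths to legacy is the safe direction during a strangler migration: new routes must be opted in explicitly.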
- Start with the easiest, most isolated module – build confidence
- Then the highest-value business capability – prove ROI early
- Leave the hardest, most coupled parts for last – team learns patterns first
- Never migrate auth/identity early – it touches everything
- Migrate the data access layer before business logic – clean data = clean migration
- Always keep the old system as fallback until the new one is proven
| Pattern | When | Complexity | Risk |
|---|---|---|---|
| Dual write | Both systems write simultaneously | High | Data inconsistency |
| CDC (Change Data Capture) | Stream changes from old→new DB | Medium | Lag, ordering |
| ETL batch sync | Periodic bulk sync | Low | Stale data |
| Event sourcing bridge | Events from old, replay in new | High | Schema mapping |
| Read from new, write to old | Transition period | Medium | Routing complexity |

Golden rule: Pick ONE source of truth. Never let both systems own the same data simultaneously.
Before splitting a monolith, identify bounded contexts:
- Event Storming (preferred) – sticky notes for domain events, commands, aggregates
- Code analysis – find clusters of related classes/tables
- Team analysis – which teams own which features?
- Data coupling analysis – which tables are joined together?
```yaml
context_name: ""
description: ""
team: ""
entities: []
commands: []
events_published: []
events_consumed: []
database_tables: []
external_integrations: []
coupling_score: 0   # 0=independent, 10=deeply coupled
extraction_difficulty: "easy | moderate | hard | very-hard"
business_value: "low | medium | high | critical"
```
Plot contexts on Business Value (Y) × Extraction Difficulty (X):

| | Easy | Moderate | Hard |
|---|---|---|---|
| High value | 🟢 Do first | 🟡 Do second | 🟠 Plan carefully |
| Medium value | 🟢 Quick win | 🟡 Evaluate ROI | 🔴 Probably not worth it |
| Low value | 🟡 If easy, why not | 🔴 Skip | 🔴 Definitely skip |
For each service being extracted:
- Bounded context clearly defined
- API contract designed (OpenAPI spec)
- Database separated (no shared tables)
- Authentication/authorization integrated
- Event publishing for cross-service communication
- Circuit breaker for calls back to the monolith
- Monitoring and alerting configured
- Deployment pipeline independent
- Feature flag for traffic routing
- Rollback plan documented
- Performance baseline captured (before/after)
- Data migration script tested
- Integration tests with monolith passing
- Runbook for on-call written
| Strategy | Description | Downtime | Risk |
|---|---|---|---|
| Parallel run | New DB alongside old, sync both | Zero | High complexity |
| Blue-green | Full copy, switch DNS | Minutes | Medium |
| Rolling | Migrate table by table | Zero per table | Medium |
| Big bang | Stop, migrate, start | Hours | High |
- Always additive – add columns/tables, never remove in the same release
- Two-phase removal – Release 1: stop writing. Release 2: drop the column (after backfill verified)
- Default values always – every new column gets a default
- Backward compatible – old code must work with the new schema during rollout
- Index concurrently – never lock production tables
- Test with production-scale data – 100 rows ≠ 100M rows
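The "always additive, defaults always" rules above can be demonstrated in miniature with SQLite: an expand-phase migration adds a column with a default, so rows written before the migration and code that has never heard of the column both keep working. The table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# Release 1 (expand): purely additive - a new column with a default,
# so old code that never mentions `status` keeps working unchanged.
conn.execute("ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")

row = conn.execute("SELECT name, status FROM users").fetchone()
print(row)  # → ('ada', 'active')
```

The drop of the old column would only ship in a later release, once the backfill is verified and no code path writes the old shape.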
Before migrating data:

```yaml
table: ""
row_count_source: 0
row_count_target: 0
count_match: false
checksum_match: false
null_analysis: "pass | fail"
referential_integrity: "pass | fail"
business_rule_validation: "pass | fail"
sample_manual_review: "pass | fail"
performance_benchmark: "pass | fail"
rollback_tested: false
```

Rule: All gates must pass before cutover. No exceptions.
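The count and checksum gates can be sketched as a fingerprint comparison between source and target. This toy version uses SQLite and a naive whole-row hash; a production gate would checksum per column and stream rather than load everything, so treat this as an illustration of the gate, not a migration tool:

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table, key):
    """Row count plus a checksum over all rows, read in key order."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY {key}").fetchall()
    checksum = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), checksum

source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

gates_pass = (table_fingerprint(source, "orders", "id")
              == table_fingerprint(target, "orders", "id"))
print(gates_pass)  # → True
```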
Score each workload:

| Factor | Score (1-5) | Notes |
|---|---|---|
| Stateless design | | |
| Configuration externalized | | |
| Logging to stdout | | |
| Health check endpoint | | |
| Graceful shutdown | | |
| Horizontal scalability | | |
| Secret management | | |
| 12-factor compliance | | |

- 35-40: Cloud-native ready
- 25-34: Minor modifications needed
- 15-24: Significant refactoring
- 8-14: Major redesign required
- Network architecture designed (VPC, subnets, security groups)
- Identity and access management configured
- Data residency requirements verified
- Compliance mapping (cloud controls → requirements)
- Cost estimation completed (TCO comparison)
- Disaster recovery plan updated
- Monitoring and alerting migrated
- DNS and certificate management planned
- CDN configuration
- Load testing in cloud environment
- Security scanning pipeline
- Backup and restore verified
- Runbooks updated for cloud operations
- Team trained on cloud platform
- Vendor lock-in assessment
- Right-size instances – start small, scale up with data
- Reserved/committed use – only after 3 months of usage data
- Spot/preemptible – for batch jobs, CI/CD, dev/test
- Auto-scaling – scale down at night and on weekends
- Storage tiers – hot/warm/cold/archive based on access patterns
- Tag everything – cost allocation by team, service, environment
- Monthly review – unused resources, oversized instances
For legacy systems without APIs:
- Screen scraping adapter – parse HTML/mainframe screens
- Database tap – read directly from the legacy DB (read-only!)
- File-based integration – watch folders, parse files
- Message queue bridge – legacy writes to queue, new system reads
- RPC wrapper – expose existing functions via REST/gRPC
```yaml
endpoint: "/api/v2/orders"
legacy_source: "stored_procedure: sp_GetOrders"
new_implementation: "orders-service"
migration_status: "legacy | dual-run | new-only"
contract_changes:
  - field: "order_date"
    old_format: "MM/DD/YYYY string"
    new_format: "ISO 8601"
    adapter: "date_format_adapter"
  - field: "status"
    old_values: ["A", "C", "P"]
    new_values: ["active", "completed", "pending"]
    adapter: "status_code_mapper"
parity_tests: 47
parity_passing: 47
```
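The two adapters named in the example contract are each a few lines. A hypothetical sketch of what they might look like; the function bodies are ours, only the old/new formats and value sets come from the contract above:

```python
from datetime import datetime

def date_format_adapter(old_value: str) -> str:
    """Convert a legacy 'MM/DD/YYYY' string to an ISO 8601 date."""
    return datetime.strptime(old_value, "%m/%d/%Y").date().isoformat()

# Legacy single-letter codes -> descriptive statuses, per the contract
STATUS_CODES = {"A": "active", "C": "completed", "P": "pending"}

def status_code_mapper(old_value: str) -> str:
    return STATUS_CODES[old_value]

print(date_format_adapter("01/15/2025"))  # → 2025-01-15
print(status_code_mapper("C"))            # → completed
```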
```
          /   Smoke Tests    \      ← Whole system alive?
         /   Parity Tests     \     ← Same behavior old vs new?
        /  Integration Tests   \    ← Services work together?
       /    Contract Tests      \   ← API contracts honored?
      /   Performance Tests      \  ← Not slower than before?
     /  Data Validation Tests     \ ← Data migrated correctly?
    /        Unit Tests            \ ← New code works?
```
For EVERY migrated feature:

```yaml
feature: ""
test_type: "api_parity | ui_parity | data_parity"
method: "shadow traffic | replay | parallel run"
sample_size: 0
match_rate: "0%"         # Target: 99.9%+
mismatches_investigated: 0
mismatches_accepted: 0   # Known intentional differences
mismatches_bugs: 0
sign_off: false
```

Shadow traffic – copy production requests to the new system and compare responses (don't serve the new responses to users yet).
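The shadow-traffic idea can be sketched in a dozen lines: the legacy response is still what the user receives, the new system is called with the same request, and only the diff is recorded. The handler functions and the seeded bug below are purely illustrative:

```python
def shadow_compare(requests, legacy_handler, new_handler):
    """Serve legacy responses; call the new system in the shadow and
    record divergences instead of returning them to users."""
    mismatches = []
    for request in requests:
        legacy_response = legacy_handler(request)  # what the user receives
        shadow_response = new_handler(request)     # compared, then discarded
        if shadow_response != legacy_response:
            mismatches.append((request, legacy_response, shadow_response))
    match_rate = 1 - len(mismatches) / len(requests)
    return match_rate, mismatches

rate, diffs = shadow_compare(
    range(10),
    lambda r: r * 2,                    # stand-in legacy behavior
    lambda r: r * 2 if r != 7 else 99,  # new system with one seeded bug
)
print(rate, diffs)  # → 0.9 [(7, 14, 99)]
```

Each recorded mismatch then gets triaged into the `mismatches_accepted` or `mismatches_bugs` buckets of the template above.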
- P95 latency must not increase >10% vs legacy
- Throughput must meet or exceed legacy under the same load
- Database query count must not increase per request
- Memory usage must not increase >20%
- If ANY metric regresses → investigate before proceeding
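Budgets like these are easiest to enforce when encoded in a CI gate. A minimal sketch; the metric dictionary keys are our own naming, while the ratios encode the thresholds listed above:

```python
def budget_violations(legacy: dict, new: dict) -> list:
    """Return the metrics that break the performance budgets."""
    ceilings = {
        "p95_latency_ms": 1.10,          # no more than +10%
        "memory_mb": 1.20,               # no more than +20%
        "db_queries_per_request": 1.00,  # no increase at all
    }
    failed = [m for m, limit in ceilings.items() if new[m] > legacy[m] * limit]
    if new["throughput_rps"] < legacy["throughput_rps"]:
        failed.append("throughput_rps")  # must meet or exceed legacy
    return failed

baseline = {"p95_latency_ms": 100, "memory_mb": 500,
            "db_queries_per_request": 3, "throughput_rps": 200}
candidate = {"p95_latency_ms": 115, "memory_mb": 550,
             "db_queries_per_request": 3, "throughput_rps": 210}
print(budget_violations(baseline, candidate))  # → ['p95_latency_ms']
```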
| Role | Responsibility | When Needed |
|---|---|---|
| Modernization Lead | Strategy, sequencing, blockers | Full-time |
| Legacy Expert | Knows where the bodies are buried | Part-time, on-call |
| New Platform Engineer | Builds target architecture | Full-time |
| Data Engineer | Migration, sync, validation | Phase-dependent |
| QA/Test Engineer | Parity testing, automation | Full-time |
| DevOps/Platform | CI/CD, infrastructure | Part-time |
| Product Owner | Business priority, acceptance | Part-time |
The most dangerous part of modernization is losing undocumented business rules.
- Code archaeology – git blame, find the oldest unchanged code, understand why
- Interview stakeholders – "What would break if we changed X?"
- Production log analysis – what edge cases actually occur?
- Error handling review – each catch block is a documented business rule
- Test suite review – tests describe expected behavior
- Configuration review – magic numbers, feature flags, overrides
| Audience | Frequency | Content |
|---|---|---|
| Executive sponsor | Bi-weekly | Progress, risks, budget, timeline |
| Engineering team | Weekly | Sprint goals, technical decisions, blockers |
| Dependent teams | Monthly | Upcoming changes, migration dates, API changes |
| End users | Per migration | What's changing, when, how it affects them |
| # | Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|---|
| 1 | Undocumented business rules lost | High | Critical | Code archaeology + stakeholder interviews + parity tests |
| 2 | Timeline underestimation | Very High | High | 2x initial estimate, phase-gated checkpoints |
| 3 | Data migration corruption | Medium | Critical | Checksums, parallel runs, rollback plans |
| 4 | Feature parity gaps | High | High | Shadow traffic testing, user acceptance testing |
| 5 | Team knowledge loss (people leave) | Medium | High | Document everything, pair programming, knowledge sharing |
| 6 | Legacy system changes during migration | High | Medium | Feature freeze or dual-write contract |
| 7 | Performance regression | Medium | High | Load testing at every phase, performance budgets |
| 8 | Scope creep (improve while migrating) | Very High | Medium | Strict "migrate, don't improve" rule for Phase 1 |
| 9 | Integration failures | Medium | High | Contract testing, circuit breakers, fallback routing |
| 10 | Stakeholder fatigue | High | Medium | Quick wins early, visible progress dashboard |
Stop the modernization if:
- Budget exceeds 2x the initial estimate with <50% complete
- Key business rules can't be verified after migration
- Team attrition >30% during the project
- Legacy system stability degrades due to migration work
- Business context changes (M&A, pivot, sunset)

If kill criteria are triggered: stabilize what's done, document learnings, reassess in 6 months.
**Java → Modern Java (8→17+)**
- Records, sealed classes, pattern matching
- Virtual threads (Project Loom) for thread-per-request
- Migrate build: Maven→Gradle or update Maven plugins
- Spring Boot 2→3: javax→jakarta namespace

**Python 2→3**
- Use the 2to3 tool for automated conversion
- Fix: print(), division, unicode, dict methods
- Upgrade dependencies (check py3 compat)

**jQuery→React/Vue**
- Extract components from page sections
- State management replaces DOM manipulation
- Event handlers become component methods
- Ajax calls become an API service layer

**Monolith→Microservices**
- Strangler fig (see Phase 3)
- Start with read models (reporting, search)
- Extract stateless services first
- Shared database → database-per-service last

**On-Prem→Cloud**
- Rehost first (lift & shift)
- Then replatform (managed services)
- Then re-architect (cloud-native patterns)
- Never skip steps – each proves value
- API wrapping – expose CICS/IMS transactions as REST APIs
- Screen scraping – automate 3270 terminal interactions
- Gradual extraction – one transaction at a time
- Data replication – DB2/VSAM → PostgreSQL/cloud DB
- Rule extraction – COBOL paragraphs → business rule engine
- Never rewrite all at once – decades of business logic = decades of edge cases
| Anti-Pattern | Symptom | Fix |
|---|---|---|
| Distributed monolith | Services must deploy together | Identify and break coupling |
| Shared database | Multiple services write the same tables | Database-per-service |
| Synchronous chains | A calls B calls C calls D | Async events, choreography |
| Nano-services | Hundreds of tiny services | Merge related services |
| Shared libraries for business logic | Library update breaks consumers | Duplicate code > shared coupling |
| No API versioning | Breaking changes cascade | Semantic versioning, deprecation policy |
```yaml
project: ""
assessment_date: ""
overall_health: "green | yellow | red"
progress:
  modules_total: 0
  modules_migrated: 0
  modules_in_progress: 0
  percent_complete: "0%"
velocity:
  modules_per_sprint: 0
  estimated_completion: ""
  on_track: true
quality:
  parity_test_pass_rate: "0%"
  production_incidents_from_migration: 0
  rollbacks: 0
risk:
  open_risks: 0
  p0_risks: 0
  blocked_items: 0
cost:
  budget_total: "$0"
  budget_spent: "$0"
  budget_remaining: "$0"
  burn_rate_monthly: "$0"
```
| Dimension | Weight | Score (0-10) | Weighted |
|---|---|---|---|
| Strategy clarity | 15% | | |
| Risk management | 15% | | |
| Testing rigor | 15% | | |
| Data integrity | 15% | | |
| Architecture quality | 10% | | |
| Team capability | 10% | | |
| Stakeholder alignment | 10% | | |
| Documentation | 10% | | |
| Total | 100% | | /100 |

- 90-100: Exemplary – reference project
- 70-89: Strong – minor improvements
- 50-69: Adequate – address gaps
- Below 50: At risk – pause and reassess
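The weighted total is a straightforward dot product of weights and per-dimension scores. A sketch, with the snake_case dimension keys being our own naming of the rubric rows:

```python
# Weights copied from the rubric; keys are our own snake_case labels
WEIGHTS = {
    "strategy_clarity": 0.15, "risk_management": 0.15,
    "testing_rigor": 0.15, "data_integrity": 0.15,
    "architecture_quality": 0.10, "team_capability": 0.10,
    "stakeholder_alignment": 0.10, "documentation": 0.10,
}

def rubric_total(scores: dict) -> float:
    """Weighted total out of 100 from per-dimension scores out of 10."""
    return round(sum(WEIGHTS[d] * scores[d] * 10 for d in WEIGHTS), 2)

print(rubric_total({d: 10 for d in WEIGHTS}))  # → 100.0
print(rubric_total({d: 5 for d in WEIGHTS}))   # → 50.0
```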
- Strangler fig – modernize around new features
- Feature freeze on a legacy module ONLY when that module is being migrated
- New features build in the new stack from day 1
- Start with observability: instrument logging, tracing, metrics
- Run for 2-4 weeks to understand actual usage patterns
- Code coverage analysis shows what code is actually executed
- Interview the longest-tenured team members
- Sequence by dependency order – downstream first
- Shared services (auth, data) get modernized once, then reused
- Never parallelize more than 2 modernization streams
- Treat the code as the documentation
- Invest 2-4 weeks in code archaeology before any migration work
- Pair new developers with business stakeholders
- Write tests for existing behavior before changing anything
- Map overlapping functionality first
- Pick a "winner" system per domain – don't merge codebases
- Build an API integration layer between systems
- Plan an 18-month realistic timeline for full consolidation
- Maintain the compliance evidence chain during migration
- Run a dual-audit period with both systems
- Get the compliance team involved in migration planning from Day 1
- Document the control mapping: old control → new implementation
| Command | Action |
|---|---|
| "Assess this system for modernization" | Run full Technical Debt Inventory |
| "Which modernization strategy should we use?" | Walk through Strategy Decision Tree |
| "Plan a strangler fig migration" | Generate Strangler Facade YAML + sequence |
| "Decompose this monolith" | Domain discovery + Bounded Context mapping |
| "Migrate this database" | Data Quality Gates + migration strategy |
| "Check cloud readiness" | Run Cloud Readiness Assessment |
| "Create a migration testing plan" | Build Testing Pyramid with parity tests |
| "What are the risks?" | Generate Top 10 risk register |
| "How do we migrate from [X] to [Y]?" | Pattern-specific playbook |
| "Status update for modernization" | Generate Weekly Status Template |
| "Score this modernization project" | Run 100-Point Quality Rubric |
| "Should we kill this modernization?" | Evaluate Kill Criteria |