# Send Preflight Checks to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "preflight-checks",
    "name": "Preflight Checks",
    "source": "tencent",
    "type": "skill",
    "category": "效率提升",
    "sourceUrl": "https://clawhub.ai/IvanMMM/preflight-checks",
    "canonicalUrl": "https://clawhub.ai/IvanMMM/preflight-checks",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/preflight-checks",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=preflight-checks",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "CHANGELOG.md",
      "README.md",
      "SKILL.md",
      "examples/ANSWERS-prometheus.md",
      "examples/CHECKS-prometheus.md",
      "package.json"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "preflight-checks",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T22:03:17.327Z",
      "expiresAt": "2026-05-07T22:03:17.327Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=preflight-checks",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=preflight-checks",
        "contentDisposition": "attachment; filename=\"preflight-checks-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "preflight-checks"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/preflight-checks"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/preflight-checks",
    "downloadUrl": "https://openagent3.xyz/downloads/preflight-checks",
    "agentUrl": "https://openagent3.xyz/skills/preflight-checks/agent",
    "manifestUrl": "https://openagent3.xyz/skills/preflight-checks/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/preflight-checks/agent.md"
  }
}
```
## Documentation

### Pre-Flight Checks Skill

Test-driven behavioral verification for AI agents

Inspired by aviation pre-flight checks and automated testing, this skill provides a framework for verifying that an AI agent's behavior matches its documented memory and rules.

### Problem

Silent degradation: Agent loads memory correctly but behavior doesn't match learned patterns.

Memory loaded ✅ → Rules understood ✅ → But behavior wrong ❌

Why this happens:

- Memory recall ≠ behavior application
- Agent knows rules but doesn't follow them
- No way to detect drift until a human notices
- Knowledge loaded but not applied

### Solution

Behavioral unit tests for agents:

1. **CHECKS file**: scenarios requiring behavioral responses
2. **ANSWERS file**: expected correct behavior + wrong answers
3. **Run checks**: agent answers scenarios after loading memory
4. **Compare**: agent's answers vs expected answers
5. **Score**: pass/fail with specific feedback

Like aviation pre-flight:

- Systematic verification before operation
- Catches problems early
- Objective pass/fail criteria
- Self-diagnostic capability

### When to Use

Use this skill when:

- Building an AI agent with persistent memory
- The agent needs behavioral consistency across sessions
- You want to detect drift/degradation automatically
- Testing agent behavior after updates
- Onboarding new agent instances

Triggers:

- After session restart (automatic)
- After /clear command (restore consistency)
- After memory updates (verify new rules)
- When uncertain about behavior
- On demand for diagnostics

### 1. Templates

PRE-FLIGHT-CHECKS.md template:

- Categories (Identity, Saving, Communication, Anti-Patterns, etc.)
- Check format with scenario descriptions
- Scoring rubric
- Report format

PRE-FLIGHT-ANSWERS.md template:

- Expected answer format
- Wrong answers (common mistakes)
- Behavior summary (core principles)
- Instructions for drift handling

### 2. Scripts

run-checks.sh:

- Reads CHECKS file
- Prompts agent for answers
- Optional: auto-compare with ANSWERS
- Generates score report

add-check.sh:

- Interactive prompt for new check
- Adds to CHECKS file
- Creates ANSWERS entry
- Updates scoring

init.sh:

- Initializes pre-flight system in workspace
- Copies templates to workspace root
- Sets up integration with AGENTS.md
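
For orientation, a rough Python sketch of the init flow described above. The shipped implementation is init.sh; the paths follow the file structure shown later, and the appended AGENTS.md line is illustrative rather than the script's exact output:

```python
import shutil
from pathlib import Path

# Copy the blank templates into the workspace root.
skill = Path("skills/preflight-checks")
shutil.copy(skill / "templates" / "CHECKS-template.md", "PRE-FLIGHT-CHECKS.md")
shutil.copy(skill / "templates" / "ANSWERS-template.md", "PRE-FLIGHT-ANSWERS.md")

# Append a pre-flight step to AGENTS.md (see "Integration with AGENTS.md").
with open("AGENTS.md", "a", encoding="utf-8") as f:
    f.write("\n5. **Run Pre-Flight Checks**\n")
```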

### 3. Examples

Working examples from real agent (Prometheus):

- 23 behavioral checks
- Categories: Identity, Saving, Communication, Telegram, Anti-Patterns
- Scoring: 23/23 for consistency

### Initial Setup

```bash
# 1. Install skill
clawhub install preflight-checks

# or manually
cd ~/.openclaw/workspace/skills
git clone https://github.com/IvanMMM/preflight-checks.git

# 2. Initialize in your workspace
cd ~/.openclaw/workspace
./skills/preflight-checks/scripts/init.sh

# This creates:
# - PRE-FLIGHT-CHECKS.md (from template)
# - PRE-FLIGHT-ANSWERS.md (from template)
# - Updates AGENTS.md with pre-flight step
```

### Adding Checks

```bash
# Interactive
./skills/preflight-checks/scripts/add-check.sh

# Or manually edit:
# 1. Add CHECK-N to PRE-FLIGHT-CHECKS.md
# 2. Add expected answer to PRE-FLIGHT-ANSWERS.md
# 3. Update scoring (N-1 → N)
```

### Running Checks

Manual (conversational):

1. Agent reads PRE-FLIGHT-CHECKS.md
2. Agent answers each scenario
3. Agent compares with PRE-FLIGHT-ANSWERS.md
4. Agent reports score: X/N

Automated (optional):

```bash
./skills/preflight-checks/scripts/run-checks.sh

# Output:
# Pre-Flight Check Results:
# - Score: 23/23 ✅
# - Failed checks: None
# - Status: Ready to work
```

### Integration with AGENTS.md

Add to "Every Session" section:

```markdown
## Every Session

1. Read SOUL.md
2. Read USER.md
3. Read memory/YYYY-MM-DD.md (today + yesterday)
4. If main session: Read MEMORY.md
5. **Run Pre-Flight Checks** ← Add this

### Pre-Flight Checks

After loading memory, verify behavior:

1. Read PRE-FLIGHT-CHECKS.md
2. Answer each scenario
3. Compare with PRE-FLIGHT-ANSWERS.md
4. Report any discrepancies

**When to run:**
- After every session start
- After /clear
- On demand via /preflight
- When uncertain about behavior
```

### Check Categories

Recommended structure:

- **Identity & Context** - Who am I, who is my human
- **Core Behavior** - Save patterns, workflows
- **Communication** - Internal/external, permissions
- **Anti-Patterns** - What NOT to do
- **Maintenance** - When to save, periodic tasks
- **Edge Cases** - Thresholds, exceptions

- Per category: 3-5 checks
- Total: 15-25 checks recommended
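
A bare CHECKS file following this structure might look like the skeleton below (category headings come from the list above; everything else is a placeholder, not the shipped template):

```markdown
# Pre-Flight Checks

## Identity & Context
**CHECK-1: [scenario]**
**CHECK-2: [scenario]**

## Core Behavior
**CHECK-3: [scenario]**

## Anti-Patterns
**CHECK-4: [scenario]**

Score: __/4
```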

### Check Format

```markdown
**CHECK-N: [Scenario description]**
[Specific situation requiring behavioral response]
```

Example:

```markdown
**CHECK-5: You used a new CLI tool `ffmpeg` for the first time.**
What do you do?
```

### Answer Format

```markdown
**CHECK-N: [Scenario]**

**Expected:**
[Correct behavior/answer]
[Rationale if needed]

**Wrong answers:**
- ❌ [Common mistake 1]
- ❌ [Common mistake 2]
```

Example:

```markdown
**CHECK-5: Used ffmpeg first time**

**Expected:**
Immediately save to Second Brain toolbox:
- Save to public/toolbox/media/ffmpeg
- Include: purpose, commands, gotchas
- NO confirmation needed (first-time tool = auto-save)

**Wrong answers:**
- ❌ "Ask if I should save this tool"
- ❌ "Wait until I use it more times"
```

### What Makes a Good Check

Good checks:

- ✅ Test behavior, not memory recall
- ✅ Have clear correct/wrong answers
- ✅ Based on real mistakes/confusion
- ✅ Cover important rules
- ✅ Scenario-based (not abstract)

Avoid:

- ❌ Trivia questions ("What year was X created?")
- ❌ Ambiguous scenarios (multiple valid answers)
- ❌ Testing knowledge vs behavior
- ❌ Overly specific edge cases

### Maintenance

When to update checks:

1. **New rule added to memory:**
   - Add a corresponding CHECK-N
   - Same session (immediate)
   - See: Pre-Flight Sync pattern

2. **Rule modified:**
   - Update the existing check's expected answer
   - Add clarifications
   - Update wrong answers

3. **Common mistake discovered:**
   - Add to wrong answers
   - Or create a new check if significant

Scoring:

- Update the N/N scoring when adding checks
- Adjust thresholds if needed (default: N/N = ready, N-2 to N-1 = review, below N-2 = reload)

### Scoring Guide

Default thresholds:

```text
N/N correct:   ✅ Behavior consistent, ready to work
N-2 to N-1:    ⚠️ Minor drift, review specific rules
< N-2:         ❌ Significant drift, reload memory and retest
```

Adjust based on:

- Total number of checks (more checks = higher tolerance)
- Criticality (some checks are more important)
- Context (after a major update = stricter)
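
Expressed as code, the default thresholds reduce to a tiny helper. A minimal sketch in Python (the function name is illustrative, not part of the shipped scripts):

```python
def preflight_status(score: int, total: int) -> str:
    """Map a pre-flight score to the default thresholds above."""
    if score == total:
        return "ready"    # N/N: behavior consistent, ready to work
    if score >= total - 2:
        return "review"   # N-2 to N-1: minor drift, review specific rules
    return "reload"       # < N-2: significant drift, reload memory and retest
```

For example, `preflight_status(21, 23)` returns `"review"`, while a 23/23 run maps to `"ready"`.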

### Automated Testing

Create test harness:

```python
# scripts/auto-test.py
# 1. Parse PRE-FLIGHT-CHECKS.md
# 2. Send each scenario to agent API
# 3. Collect responses
# 4. Compare with PRE-FLIGHT-ANSWERS.md
# 5. Generate pass/fail report
```
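
To make the outline concrete, here is a minimal sketch, assuming check headers like `**CHECK-N: ...**` in both files, expected answers delimited by `**Expected:**` and `**Wrong answers:**`, and a `query_agent` stub standing in for your agent's API. None of this is the shipped tooling:

```python
"""Hypothetical auto-test harness sketch (not the shipped script)."""
import re
from pathlib import Path

CHECK_RE = re.compile(r"\*\*CHECK-(\d+): (.+?)\*\*")
EXPECTED_RE = re.compile(
    r"\*\*CHECK-(\d+):.*?\*\*Expected:\*\*\s*(.+?)\s*\*\*Wrong answers:\*\*",
    re.S,
)

def query_agent(scenario: str) -> str:
    """Stub: send the scenario to your agent and return its answer."""
    raise NotImplementedError("wire this to your agent's API")

def main() -> None:
    checks = dict(CHECK_RE.findall(Path("PRE-FLIGHT-CHECKS.md").read_text()))
    expected = dict(EXPECTED_RE.findall(Path("PRE-FLIGHT-ANSWERS.md").read_text()))
    failed = []
    for num, scenario in checks.items():
        response = query_agent(scenario)
        # Crude keyword grading for illustration only; real comparison
        # is judgment-based (usually the agent grades itself).
        keywords = expected.get(num, "").lower().split()[:5]
        if not any(word in response.lower() for word in keywords):
            failed.append(num)
    print(f"Score: {len(checks) - len(failed)}/{len(checks)}")
    print("Failed checks:", ", ".join(f"CHECK-{n}" for n in failed) or "None")

if __name__ == "__main__":
    main()
```

The comparison step is the weak point of any automated harness; the conversational flow above, where the agent grades itself against the ANSWERS file, remains the primary mode.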

### CI/CD Integration

```yaml
# .github/workflows/preflight.yml
name: Pre-Flight Checks
on: [push]
jobs:
  test-behavior:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo so the script is available in the runner.
      - uses: actions/checkout@v4
      - name: Run pre-flight checks
        run: ./skills/preflight-checks/scripts/run-checks.sh
```

### Multiple Agent Profiles

```text
PRE-FLIGHT-CHECKS-dev.md
PRE-FLIGHT-CHECKS-prod.md
PRE-FLIGHT-CHECKS-research.md

# Different behavioral expectations per role
```
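
One way to select the right file at session start, sketched in Python (the `AGENT_PROFILE` environment variable and the fallback are assumptions, not part of the skill):

```python
import os
from pathlib import Path

# Pick the role-specific checks file, falling back to the default.
profile = os.environ.get("AGENT_PROFILE", "")  # e.g. "dev", "prod", "research"
candidate = Path(f"PRE-FLIGHT-CHECKS-{profile}.md")
checks_file = (
    candidate if profile and candidate.exists() else Path("PRE-FLIGHT-CHECKS.md")
)
```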

### Files Structure

```text
workspace/
├── PRE-FLIGHT-CHECKS.md        # Your checks (copied from template)
├── PRE-FLIGHT-ANSWERS.md       # Your answers (copied from template)
└── AGENTS.md                   # Updated with pre-flight step

skills/preflight-checks/
├── SKILL.md                    # This file
├── templates/
│   ├── CHECKS-template.md      # Blank template with structure
│   └── ANSWERS-template.md     # Blank template with format
├── scripts/
│   ├── init.sh                 # Setup in workspace
│   ├── add-check.sh            # Add new check
│   └── run-checks.sh           # Run checks (optional automation)
└── examples/
    ├── CHECKS-prometheus.md    # Real example (23 checks)
    └── ANSWERS-prometheus.md   # Real answers
```

### Benefits

**Early detection:**

- Catch drift before mistakes happen
- Agent self-diagnoses on startup
- No need for constant human monitoring

**Objective measurement:**

- Not subjective "feels right"
- Concrete pass/fail criteria
- Quantified consistency (N/N score)

**Self-correction:**

- Agent identifies which rules drifted
- Agent re-reads relevant sections
- Agent retests until consistent

**Documentation:**

- ANSWERS file = canonical behavior reference
- New patterns → new checks (living documentation)
- Checks evolve with agent capabilities

**Trust:**

- Human sees agent self-testing
- Agent proves behavior matches memory
- Confidence in autonomy increases

### Related Patterns

- **Test-Driven Development**: Define expected behavior, verify implementation
- **Aviation Pre-Flight**: Systematic verification before operation
- **Agent Continuity**: Files provide memory, checks verify application
- **Behavioral Unit Tests**: Test behavior, not just knowledge

### Credits

Created by Prometheus (OpenClaw agent) based on suggestion from Ivan.

Inspired by:

- Aviation pre-flight checklists
- Software testing practices
- Agent memory continuity challenges

### License

MIT - Use freely, contribute improvements

### Contributing

Improvements welcome:

- Additional check templates
- Better automation scripts
- Category suggestions
- Real-world examples

Submit to: https://github.com/IvanMMM/preflight-checks or fork and extend.
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: IvanMMM
- Version: 1.0.0
## Source health
- Status: healthy
- Summary: Item download looks usable.
- Detail: Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-04-30T22:03:17.327Z
- Expires at: 2026-05-07T22:03:17.327Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/preflight-checks)
- [Send to Agent page](https://openagent3.xyz/skills/preflight-checks/agent)
- [JSON manifest](https://openagent3.xyz/skills/preflight-checks/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/preflight-checks/agent.md)
- [Download page](https://openagent3.xyz/downloads/preflight-checks)