# Send Crisis Detector to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "crisis-detector",
    "name": "Crisis Detector",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/raghulpasupathi/crisis-detector",
    "canonicalUrl": "https://clawhub.ai/raghulpasupathi/crisis-detector",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/crisis-detector",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=crisis-detector",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "crisis-detector",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T22:21:42.764Z",
      "expiresAt": "2026-05-07T22:21:42.764Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=crisis-detector",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=crisis-detector",
        "contentDisposition": "attachment; filename=\"crisis-detector-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "crisis-detector"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/crisis-detector"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/crisis-detector",
    "downloadUrl": "https://openagent3.xyz/downloads/crisis-detector",
    "agentUrl": "https://openagent3.xyz/skills/crisis-detector/agent",
    "manifestUrl": "https://openagent3.xyz/skills/crisis-detector/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/crisis-detector/agent.md"
  }
}
```
## Documentation

### Metadata

ID: crisis-detector
Version: 1.0.0
Category: safety
Priority: Critical
Installation: npm
Package: @raghulpasupathi/crisis-detector

### Description

Life-saving crisis detection system for identifying self-harm, suicide ideation, and mental health emergencies. Uses NLP, sentiment analysis, and behavioral patterns to detect users in crisis and connect them with immediate help resources.

### Features

Suicide Ideation Detection: Identify expressions of suicidal thoughts
Self-harm Detection: Detect mentions of self-injury or harm
Crisis Severity Scoring: Assess urgency level (low, medium, high, critical)
Intent Classification: Distinguish between ideation, planning, and imminent risk
Historical Analysis: Track user patterns over time
Resource Connection: Automatic crisis resource provision
Emergency Contact: Alert designated contacts or authorities
Real-time Monitoring: Continuous monitoring of at-risk users
Multi-language Support: 20+ languages for global coverage
Empathetic Response: Compassionate automated messaging

### Via ClawHub

https://clawhub.ai/raghulpasupathi/crisis-detector

### Via npm

npm install @raghulpasupathi/crisis-detector

### Configuration

{
  "enabled": true,
  "settings": {
    "detectionMode": "sensitive",
    "modes": {
      "sensitive": {
        "ideation": 0.40,
        "planning": 0.30,
        "imminent": 0.20,
        "selfHarm": 0.35,
        "actionThreshold": 0.40
      },
      "moderate": {
        "ideation": 0.60,
        "planning": 0.50,
        "imminent": 0.40,
        "selfHarm": 0.55,
        "actionThreshold": 0.60
      },
      "conservative": {
        "ideation": 0.75,
        "planning": 0.65,
        "imminent": 0.55,
        "selfHarm": 0.70,
        "actionThreshold": 0.75
      }
    },
    "detection": {
      "textAnalysis": {
        "enabled": true,
        "contextAware": true,
        "historicalContext": true
      },
      "behaviorAnalysis": {
        "enabled": true,
        "trackPatterns": true,
        "abnormalityDetection": true
      },
      "sentimentAnalysis": {
        "enabled": true,
        "depressionIndicators": true,
        "hopelessnessDetection": true
      }
    },
    "resources": {
      "crisisHotlines": {
        "enabled": true,
        "international": true,
        "textServices": true,
        "chatServices": true
      },
      "mentalHealthResources": {
        "enabled": true,
        "therapistDirectory": true,
        "selfHelpResources": true,
        "supportGroups": true
      },
      "emergencyServices": {
        "enabled": true,
        "localEmergency": true,
        "wellnessCheck": false
      }
    },
    "response": {
      "automaticMessage": true,
      "messageTemplate": "caring",
      "resourceDisplay": "immediate",
      "followUp": true,
      "humanOutreach": true
    },
    "actions": {
      "onIdeation": [
        "show_resources",
        "send_caring_message",
        "notify_safety_team",
        "enable_monitoring"
      ],
      "onPlanning": [
        "show_resources_urgent",
        "immediate_outreach",
        "notify_emergency_contacts",
        "enable_intensive_monitoring"
      ],
      "onImminent": [
        "emergency_intervention",
        "contact_authorities",
        "notify_emergency_contacts",
        "continuous_monitoring"
      ]
    },
    "privacy": {
      "respectUserPrivacy": true,
      "informedConsent": true,
      "dataMinimization": true,
      "confidentialLogging": true
    },
    "languages": ["en", "es", "fr", "de", "pt", "it", "ja", "ko", "zh", "ar", "hi", "ru"]
  }
}

### API / Methods

const CrisisDetector = require('@raghulpasupathi/crisis-detector');

// Initialize detector
const detector = new CrisisDetector({
  detectionMode: 'sensitive',
  enableResources: true
});

// Analyze text for crisis signals
const result = await detector.analyze('I don\\'t want to be here anymore...');
console.log(result);
/* Output:
{
  isCrisis: true,
  severity: 'high',
  urgency: 'immediate',
  confidence: 0.87,
  categories: {
    suicideIdeation: 0.89,
    selfHarm: 0.12,
    depression: 0.78,
    hopelessness: 0.82
  },
  intent: {
    type: 'ideation',
    planning: false,
    imminent: false,
    meansIdentified: false
  },
  riskLevel: 'high',
  indicators: [
    { type: 'suicide_ideation', phrase: "don't want to be here", confidence: 0.91 },
    { type: 'hopelessness', phrase: "anymore", confidence: 0.75 }
  ],
  sentiment: {
    overall: -0.85,
    depression: 0.82,
    anxiety: 0.45,
    hopelessness: 0.88
  },
  recommendedAction: 'immediate_intervention',
  resources: {
    crisisHotlines: [
      {
        name: 'National Suicide Prevention Lifeline',
        phone: '988',
        text: 'Text HOME to 741741',
        chat: 'https://suicidepreventionlifeline.org/chat/',
        available: '24/7'
      },
      {
        name: 'Crisis Text Line',
        text: 'Text HELLO to 741741',
        available: '24/7'
      }
    ],
    emergencyServices: {
      call: '911',
      text: 'Text 911 (where available)'
    }
  },
  suggestedMessage: "I'm concerned about you. You're not alone, and there are people who can help. Please reach out to the National Suicide Prevention Lifeline at 988 - they're available 24/7 and want to support you.",
  timestamp: '2026-02-20T10:30:00Z'
}
*/

// Quick crisis check
const isCrisis = await detector.isCrisis('Some text to check');

// Assess crisis severity
const severity = await detector.assessSeverity('Text expressing distress');
console.log(severity);
/* Output:
{
  level: 'high',
  score: 0.84,
  urgency: 'immediate',
  riskFactors: [
    'suicide_ideation',
    'hopelessness',
    'social_isolation'
  ],
  protectiveFactors: [
    'help_seeking'
  ]
}
*/

// Detect specific crisis types
const suicideRisk = await detector.detectSuicideRisk('Text to analyze');
const selfHarmRisk = await detector.detectSelfHarmRisk('Text to analyze');

// Analyze user behavior patterns
const behaviorAnalysis = await detector.analyzeUserBehavior(userId, {
  recentMessages: messages,
  activityChanges: activityData,
  timeRange: '30d'
});
console.log(behaviorAnalysis);
/* Output:
{
  concernLevel: 'elevated',
  patterns: [
    {
      type: 'social_withdrawal',
      detected: true,
      confidence: 0.78,
      description: 'Decreased interaction frequency by 65%'
    },
    {
      type: 'negative_content_increase',
      detected: true,
      confidence: 0.82,
      description: 'Increased negative sentiment in 78% of recent posts'
    },
    {
      type: 'activity_time_change',
      detected: true,
      confidence: 0.71,
      description: 'Shift to late-night activity (12am-4am)'
    }
  ],
  riskLevel: 'moderate-high',
  recommendation: 'enhanced_monitoring'
}
*/

// Get crisis resources for location
const resources = await detector.getResources({
  country: 'US',
  state: 'CA',
  language: 'en',
  services: ['crisis_hotline', 'text_line', 'chat_support']
});

// Generate empathetic response
const response = await detector.generateResponse({
  severity: 'high',
  intent: 'ideation',
  includeResources: true,
  tone: 'caring'
});

// Track at-risk user
await detector.trackUser(userId, {
  riskLevel: 'high',
  monitoringIntensity: 'enhanced',
  alertContacts: true
});

// Alert safety team
await detector.alertSafetyTeam({
  userId: userId,
  severity: 'critical',
  analysis: analysisResult,
  requiresImmediate: true
});

// Send caring outreach
await detector.sendOutreach(userId, {
  type: 'caring_message',
  includeResources: true,
  fromHuman: true
});

// Check user status
const status = await detector.getUserStatus(userId);
console.log(status);
/* Output:
{
  userId: 'user-123',
  currentRiskLevel: 'moderate',
  monitoringStatus: 'enhanced',
  lastCrisisDetection: '2026-02-18T14:30:00Z',
  outreachAttempts: 2,
  resourcesProvided: true,
  emergencyContactNotified: false,
  trend: 'stable'
}
*/

// Event listeners
detector.on('crisisDetected', async (crisis) => {
  console.warn('⚠️ CRISIS DETECTED:', crisis);

  // Immediate response
  await detector.sendOutreach(crisis.userId, {
    severity: crisis.severity,
    resources: crisis.resources
  });

  // Notify safety team
  await detector.alertSafetyTeam(crisis);
});

detector.on('imminentRisk', async (risk) => {
  console.error('🚨 IMMINENT RISK DETECTED');

  // Emergency intervention
  await detector.executeEmergencyProtocol(risk);

  // Consider wellness check
  if (risk.severity === 'critical') {
    await detector.considerWellnessCheck(risk);
  }
});

detector.on('improvementDetected', (update) => {
  console.log('✓ User showing improvement:', update);

  // Continue support
  await detector.sendEncouragement(update.userId);
});

// Performance stats
const stats = detector.getStats();
console.log(stats);
/* Output:
{
  totalAnalyses: 100000,
  crisisDetected: 1250,
  bySeverity: {
    low: 400,
    moderate: 550,
    high: 250,
    critical: 50
  },
  interventions: 1250,
  resourcesProvided: 1250,
  emergencyContacts: 75,
  positiveOutcomes: 980,
  averageResponseTime: '45s'
}
*/

### Dependencies

transformers.js: ^2.6.0 - Sentiment and intent analysis
natural: ^6.0.0 - NLP processing
compromise: ^14.0.0 - Text understanding
sentiment: ^5.0.0 - Sentiment analysis
franc-min: ^6.0.0 - Language detection

### Performance

Analysis Speed: 30-80ms per text
Accuracy:

Suicide ideation: 91% detection rate
Self-harm: 88% detection rate
Crisis severity: 87% accuracy
False positive rate: 8-12% (intentionally higher for safety)


Response Time: <1 minute for outreach

### Crisis Hotlines (Global)

USA: 988 Suicide & Crisis Lifeline
UK: 116 123 (Samaritans)
Canada: 1-833-456-4566
Australia: 13 11 14 (Lifeline)
International: befrienders.org

### Use Cases

Social media platforms
Mental health apps
Gaming communities
Dating applications
Student portals
Employee wellness platforms
Support forums
Chat applications
Any platform with user communication

### High False Positive Rate

Problem: Too many non-crisis messages flagged
Solution:

Switch to 'moderate' or 'conservative' mode
Enable more context analysis
Check for sarcasm/humor detection
Review historical user behavior
Adjust thresholds upward
Better to err on side of caution - false positives are acceptable

### Missing Crisis Signals

Problem: Not detecting users in genuine crisis
Solution:

Switch to 'sensitive' mode
Lower detection thresholds
Enable behavioral analysis
Check language support
Review flagged phrases database
Report false negatives for model improvement
This is more serious than false positives

### User Privacy Concerns

Problem: Users concerned about monitoring
Solution:

Clear privacy policy disclosure
Explain life-saving purpose
Offer opt-in for enhanced monitoring
Data minimization practices
Confidential handling of sensitive data
Balance privacy with safety

### Resource Availability

Problem: Crisis resources not available in user's location
Solution:

Expand international hotline database
Include online chat/text services
Provide self-help resources
Connect to community support
Partner with local organizations
Always provide emergency services number

### Response Effectiveness

Problem: Unsure if interventions are helping
Solution:

Track user engagement with resources
Monitor user behavior after intervention
Follow-up outreach after 24-48 hours
Collect feedback when appropriate
Partner with mental health professionals
Continuous improvement of messaging

### Integration Example

// Complete platform integration
const express = require('express');
const CrisisDetector = require('@raghulpasupathi/crisis-detector');

const app = express();
const detector = new CrisisDetector({ detectionMode: 'sensitive' });

// At-risk user tracking
const atRiskUsers = new Map();

// Monitor all user-generated content
app.post('/api/posts/create', async (req, res) => {
  try {
    const { userId, content } = req.body;

    // Analyze for crisis signals
    const analysis = await detector.analyze(content);

    if (analysis.isCrisis) {
      console.warn(\`⚠️ Crisis detected for user ${userId}\`);

      // Log for safety team (confidential)
      await logCrisisEvent(userId, analysis);

      // Immediate response based on severity
      if (analysis.severity === 'critical' || analysis.intent.imminent) {
        // CRITICAL: Imminent risk
        console.error(\`🚨 IMMINENT RISK: User ${userId}\`);

        // Show resources immediately
        await showEmergencyResources(userId, analysis.resources);

        // Alert safety team for immediate outreach
        await alertSafetyTeam({
          userId: userId,
          severity: 'critical',
          analysis: analysis,
          urgent: true
        });

        // Consider emergency services
        if (analysis.riskLevel === 'critical') {
          await considerEmergencyServices(userId, analysis);
        }

        // Track intensively
        await detector.trackUser(userId, {
          riskLevel: 'critical',
          monitoringIntensity: 'maximum'
        });
      } else if (analysis.severity === 'high') {
        // High risk: Immediate support
        await sendCaringMessage(userId, {
          message: analysis.suggestedMessage,
          resources: analysis.resources
        });

        await alertSafetyTeam({
          userId: userId,
          severity: 'high',
          analysis: analysis,
          urgent: false
        });

        await detector.trackUser(userId, {
          riskLevel: 'high',
          monitoringIntensity: 'enhanced'
        });
      } else {
        // Moderate risk: Show resources
        await showSupportResources(userId, analysis.resources);

        await detector.trackUser(userId, {
          riskLevel: 'moderate',
          monitoringIntensity: 'standard'
        });
      }

      // Update tracking
      atRiskUsers.set(userId, {
        lastDetection: new Date(),
        severity: analysis.severity,
        analysis: analysis
      });
    }

    // Allow post (don't censor crisis content)
    const post = await createPost(userId, content);

    res.json({
      success: true,
      postId: post.id
    });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Background monitoring of at-risk users
setInterval(async () => {
  for (const [userId, tracking] of atRiskUsers) {
    // Check recent activity
    const behavior = await detector.analyzeUserBehavior(userId, {
      timeRange: '24h'
    });

    if (behavior.concernLevel === 'elevated') {
      // Send follow-up
      await sendFollowUp(userId, {
        type: 'check_in',
        resources: true
      });
    } else if (behavior.concernLevel === 'improved') {
      // Positive trend
      await sendEncouragement(userId);

      // Reduce monitoring intensity
      await detector.trackUser(userId, {
        riskLevel: 'low',
        monitoringIntensity: 'minimal'
      });
    }
  }
}, 6 * 60 * 60 * 1000); // Every 6 hours

// User dashboard - show resources
app.get('/api/mental-health/resources', async (req, res) => {
  const userId = req.user.id;

  const resources = await detector.getResources({
    country: req.user.country,
    language: req.user.language
  });

  res.json({
    success: true,
    resources: resources,
    message: 'You\\'re not alone. Help is available 24/7.'
  });
});

// Safety team dashboard
app.get('/admin/safety/crisis-monitor', requireSafetyTeam, async (req, res) => {
  const activeAlerts = await getActiveCrisisAlerts();

  // Add current status for each
  const enriched = await Promise.all(
    activeAlerts.map(async alert => {
      const status = await detector.getUserStatus(alert.userId);
      return { ...alert, status };
    })
  );

  res.json({
    success: true,
    alerts: enriched,
    count: enriched.length
  });
});

// Helper functions
async function sendCaringMessage(userId, { message, resources }) {
  await sendNotification(userId, {
    title: 'We\\'re here for you',
    body: message,
    action: {
      text: 'Get help now',
      url: '/mental-health/resources'
    },
    priority: 'high'
  });

  // Log outreach attempt
  await logOutreach(userId, 'caring_message');
}

async function considerEmergencyServices(userId, analysis) {
  // This is a sensitive decision requiring human judgment
  await alertSafetyTeam({
    userId: userId,
    message: 'CRITICAL: Consider wellness check',
    analysis: analysis,
    requiresDecision: true
  });
}

### Best Practices (CRITICAL)

Err on Side of Caution: False positives acceptable, false negatives are not
Immediate Response: Respond within minutes, not hours
Empathetic Messaging: Use caring, non-judgmental language
Resource Accessibility: Make help easy to access
Privacy Protection: Handle crisis data with extreme sensitivity
Don't Censor: Allow users to express distress (but provide help)
Human Follow-up: Automated detection + human outreach
Cultural Sensitivity: Respect cultural differences in expression
Continuous Monitoring: Track at-risk users over time
Staff Training: Safety team needs mental health training
Never Ignore: Every detection requires action
Document Everything: Confidential logging for legal protection

### Legal & Ethical Considerations

Duty of care to users in crisis
Balance privacy with safety
Clear terms of service about crisis intervention
Staff training and support
Coordination with local emergency services
Liability protection for good-faith interventions
Confidential handling of sensitive data
Informed consent where possible

### Support for Your Team

Working with crisis detection is emotionally demanding:

Regular mental health support for safety team
Clear escalation procedures
Shared responsibility (not one person's burden)
Celebrate positive outcomes
Debrief after difficult cases
Know when to involve professionals
Team support and peer counseling
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: raghulpasupathi
- Version: 1.0.0
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-04-30T22:21:42.764Z
- Expires at: 2026-05-07T22:21:42.764Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/crisis-detector)
- [Send to Agent page](https://openagent3.xyz/skills/crisis-detector/agent)
- [JSON manifest](https://openagent3.xyz/skills/crisis-detector/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/crisis-detector/agent.md)
- [Download page](https://openagent3.xyz/downloads/crisis-detector)