# Send CSAM Shield to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of working through the setup manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "csam-shield",
    "name": "Csam Shield",
    "source": "tencent",
    "type": "skill",
    "category": "效率提升",
    "sourceUrl": "https://clawhub.ai/raghulpasupathi/csam-shield",
    "canonicalUrl": "https://clawhub.ai/raghulpasupathi/csam-shield",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/csam-shield",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=csam-shield",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "slug": "csam-shield",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-29T15:37:04.700Z",
      "expiresAt": "2026-05-06T15:37:04.700Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=csam-shield",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=csam-shield",
        "contentDisposition": "attachment; filename=\"csam-shield-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null,
        "slug": "csam-shield"
      },
      "scope": "item",
      "summary": "Item download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this item.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/csam-shield"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/csam-shield",
    "downloadUrl": "https://openagent3.xyz/downloads/csam-shield",
    "agentUrl": "https://openagent3.xyz/skills/csam-shield/agent",
    "manifestUrl": "https://openagent3.xyz/skills/csam-shield/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/csam-shield/agent.md"
  }
}
```
## Documentation

### Metadata

- ID: csam-shield
- Version: 1.0.0
- Category: safety
- Priority: CRITICAL
- Installation: npm
- Package: @raghulpasupathi/csam-shield

### Description

CRITICAL SAFETY SYSTEM for detecting and preventing Child Sexual Abuse Material (CSAM). Uses advanced computer vision, hash matching, age estimation, and behavior analysis to identify illegal content. Includes automatic NCMEC reporting, law enforcement coordination, and evidence preservation.

### ⚠️ CRITICAL WARNING

This skill handles the most serious form of online abuse. Implementation requires:

- Zero tolerance policy: immediate action on detection
- Legal compliance: mandatory reporting to NCMEC/law enforcement
- Evidence preservation: secure logging for legal proceedings
- Staff protection: mental health support for reviewers
- Privacy protection: strict handling of detected content

### Features

- Hash Database Matching: PhotoDNA, PDQ, and perceptual hashing against known CSAM
- Age Estimation: AI-powered age detection with high accuracy
- Anatomical Detection: identify inappropriate imagery of minors
- Context Analysis: distinguish legitimate from exploitative content
- Behavioral Analysis: detect grooming patterns and predatory behavior
- NCMEC Integration: automatic CyberTipline reporting
- Evidence Preservation: secure storage for law enforcement
- User Flagging: immediate account suspension and investigation
- Network Analysis: identify distribution rings and patterns
- Real-time Blocking: prevent upload/distribution instantly

### Via ClawHub

https://clawhub.ai/raghulpasupathi/csam-shield

### Via npm

```text
npm install @raghulpasupathi/csam-shield
```

### Configuration

```json
{
  "enabled": true,
  "settings": {
    "mode": "maximum-protection",
    "zeroTolerance": true,
    "thresholds": {
      "ageEstimation": {
        "childThreshold": 13,
        "teenThreshold": 18,
        "confidence": 0.85
      },
      "hashMatch": {
        "exactMatch": 0.95,
        "nearMatch": 0.85
      },
      "anatomicalDetection": {
        "sensitivity": "maximum",
        "blockThreshold": 0.70
      },
      "contextAnalysis": {
        "enabled": true,
        "legitimateExceptions": ["family", "medical", "educational"]
      }
    },
    "databases": {
      "photoDNA": {
        "enabled": true,
        "provider": "microsoft",
        "updateFrequency": "hourly"
      },
      "pdqHash": {
        "enabled": true,
        "provider": "facebook",
        "updateFrequency": "hourly"
      },
      "ncmec": {
        "enabled": true,
        "hashList": true,
        "updateFrequency": "hourly"
      },
      "custom": {
        "enabled": true,
        "path": "/secure/csam-hashes/"
      }
    },
    "detection": {
      "imageAnalysis": true,
      "videoAnalysis": true,
      "textAnalysis": true,
      "metadataAnalysis": true,
      "networkAnalysis": true,
      "behaviorAnalysis": true
    },
    "reporting": {
      "ncmec": {
        "enabled": true,
        "endpoint": "https://report.cybertip.org/",
        "apiKey": "${NCMEC_API_KEY}",
        "automatic": true
      },
      "lawEnforcement": {
        "enabled": true,
        "contacts": ["fbi_tips", "local_police"],
        "automatic": false,
        "requiresReview": true
      },
      "preserveEvidence": true,
      "evidenceRetention": "indefinite",
      "encryptEvidence": true
    },
    "actions": {
      "onDetection": [
        "block_content",
        "suspend_user",
        "preserve_evidence",
        "report_ncmec",
        "alert_security_team",
        "block_ip",
        "flag_related_accounts"
      ],
      "onHashMatch": [
        "immediate_block",
        "auto_report_ncmec",
        "permanent_ban",
        "preserve_all_user_content",
        "notify_authorities"
      ]
    },
    "security": {
      "accessControl": "restricted",
      "auditLogging": "complete",
      "encryption": "aes-256",
      "staffProtection": true,
      "limitedExposure": true
    }
  }
}
```
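
A minimal sketch of wiring this configuration into the runtime. It assumes the `CSAMShield` constructor accepts the parsed `settings` object as-is and that the config lives in a local file named `csam-shield.config.json` (both assumptions; verify against the package's own docs):

```javascript
// Load the configuration above and initialize the shield with it.
// Assumption: the constructor accepts the parsed "settings" object directly,
// and "./csam-shield.config.json" is where you saved the JSON above.
const fs = require('fs');
const CSAMShield = require('@raghulpasupathi/csam-shield');

const config = JSON.parse(fs.readFileSync('./csam-shield.config.json', 'utf8'));

// Resolve the ${NCMEC_API_KEY} placeholder from the environment before use.
config.settings.reporting.ncmec.apiKey = process.env.NCMEC_API_KEY;

const shield = new CSAMShield(config.settings);
```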

### API / Methods

```javascript
const CSAMShield = require('@raghulpasupathi/csam-shield');

// Initialize with strict security
const shield = new CSAMShield({
  mode: 'maximum-protection',
  ncmecApiKey: process.env.NCMEC_API_KEY,
  encryptionKey: process.env.EVIDENCE_ENCRYPTION_KEY
});

// ⚠️ CRITICAL: Analyze content (use with extreme caution)
const result = await shield.analyze('/path/to/content.jpg');
console.log(result);
/* Output:
{
  threat: 'CRITICAL',
  action: 'IMMEDIATE_BLOCK',
  detectionType: 'hash_match',
  confidence: 0.98,
  details: {
    hashMatch: {
      matched: true,
      database: 'photoDNA',
      matchConfidence: 0.99
    },
    ageEstimation: {
      estimatedAge: 10,
      confidence: 0.94,
      isMinor: true
    },
    anatomicalDetection: {
      inappropriate: true,
      severity: 'extreme'
    },
    context: {
      isLegitimate: false,
      category: 'exploitative'
    }
  },
  actions: {
    contentBlocked: true,
    userSuspended: true,
    evidencePreserved: true,
    ncmecReported: true,
    reportId: 'NCMEC-2026-xxxxx',
    authoritiesNotified: true
  },
  evidence: {
    caseId: 'CASE-2026-xxxxx',
    preservedData: [
      'content_hash',
      'user_info',
      'upload_metadata',
      'ip_address',
      'device_info'
    ],
    encryptedStorage: '/secure/evidence/CASE-2026-xxxxx/'
  },
  timestamp: '2026-02-20T10:30:00Z'
}
*/

// Check hash against known CSAM databases
const hashCheck = await shield.checkHash(contentHash);
console.log(hashCheck);
/* Output:
{
  isKnownCSAM: true,
  matchedDatabases: ['photoDNA', 'pdqHash', 'ncmec'],
  matchConfidence: 0.99,
  action: 'IMMEDIATE_BLOCK',
  reportRequired: true
}
*/

// Estimate age in image
const ageEstimation = await shield.estimateAge('/path/to/image.jpg');
console.log(ageEstimation);
/* Output:
{
  estimatedAge: 12,
  confidence: 0.91,
  ageRange: [10, 14],
  isMinor: true,
  certaintyLevel: 'high'
}
*/

// Analyze user behavior for grooming patterns
const behaviorAnalysis = await shield.analyzeBehavior(userId, {
  messages: userMessages,
  interactions: userInteractions,
  timeline: activityTimeline
});
console.log(behaviorAnalysis);
/* Output:
{
  isGrooming: true,
  confidence: 0.87,
  patterns: [
    'age_inquiries',
    'isolation_attempts',
    'gift_offering',
    'secrecy_requests',
    'progressive_boundary_crossing'
  ],
  riskLevel: 'extreme',
  recommendedAction: 'immediate_investigation'
}
*/

// Report to NCMEC CyberTipline
const ncmecReport = await shield.reportToNCMEC({
  content: contentDetails,
  user: userDetails,
  evidence: preservedEvidence
});
console.log(ncmecReport);
/* Output:
{
  success: true,
  reportId: 'NCMEC-2026-xxxxx',
  timestamp: '2026-02-20T10:30:00Z',
  status: 'submitted',
  followUp: 'pending_review'
}
*/

// Preserve evidence for legal proceedings
const evidence = await shield.preserveEvidence({
  contentId: 'content-123',
  userId: 'user-456',
  includeMetadata: true,
  includeRelatedContent: true,
  includeUserHistory: true
});

// Suspend user and related accounts
await shield.suspendUser(userId, {
  reason: 'CSAM_DETECTION',
  permanent: true,
  blockRelatedAccounts: true,
  preserveEvidence: true
});

// Network analysis to find related accounts
const network = await shield.analyzeNetwork(userId);
console.log(network);
/* Output:
{
  suspiciousAccounts: [
    { userId: 'user-789', riskScore: 0.92, connection: 'frequent_messages' },
    { userId: 'user-012', riskScore: 0.85, connection: 'content_sharing' }
  ],
  distributionRing: {
    detected: true,
    size: 7,
    accounts: [...]
  },
  recommendedActions: [
    'investigate_all_accounts',
    'preserve_all_evidence',
    'notify_authorities'
  ]
}
*/

// Secure hash generation (for reporting only)
const secureHash = await shield.generateSecureHash('/path/to/content.jpg');

// Update hash databases
await shield.updateHashDatabases();

// Event listeners (CRITICAL - requires immediate response)
shield.on('csam_detected', async (detection) => {
  console.error('🚨 CRITICAL: CSAM DETECTED');

  // Immediate actions
  await shield.blockContent(detection.contentId);
  await shield.suspendUser(detection.userId);
  await shield.preserveEvidence(detection);
  await shield.reportToNCMEC(detection);
  await shield.notifySecurityTeam(detection);
  await shield.alertAuthorities(detection);
});

shield.on('hash_match', async (match) => {
  console.error('🚨 CRITICAL: Known CSAM hash matched');

  // Automatic immediate actions
  await shield.executeEmergencyProtocol(match);
});

shield.on('grooming_detected', async (behavior) => {
  console.warn('⚠️ WARNING: Potential grooming behavior detected');

  // Investigation and monitoring
  await shield.flagForInvestigation(behavior.userId);
  await shield.enhanceMonitoring(behavior.userId);
});

// Secure audit logging
const auditLog = await shield.getAuditLog({
  type: 'csam_detection',
  timeRange: 'last_30_days',
  includeReports: true
});

// Staff protection - limited exposure mode
shield.enableStaffProtection({
  blurContent: true,
  limitedDetails: true,
  rotationSchedule: true,
  mentalHealthSupport: true
});

// Compliance reporting
const complianceReport = await shield.generateComplianceReport({
  period: 'monthly',
  includeStatistics: true,
  includeActions: true,
  format: 'legal'
});
```

### Dependencies

- `@microsoft/photodna` ^2.0.0: PhotoDNA hashing
- `pdq-hash` ^1.0.0: Facebook PDQ hashing
- `@tensorflow/tfjs-node-gpu` ^4.0.0: age estimation models
- `opencv4nodejs` ^6.0.0: image analysis
- `ncmec-reporter` ^1.0.0: NCMEC CyberTipline integration
- `crypto` (built-in): evidence encryption

### Performance

- Hash Matching: <10ms (database lookup)
- Age Estimation: 100-200ms per image
- Full Analysis: 200-500ms per image
- Video Analysis: real-time frame scanning

Accuracy:

- Hash matching: 99.9% (known CSAM)
- Age estimation: 92% accuracy (±2 years)
- Context analysis: 89% accuracy
- False positive rate: <0.01% (strict to prevent abuse)
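
If you want to validate the latency figures on your own hardware, a small timing harness is enough. This sketch uses only the documented `analyze()` call; the sample paths are placeholders and should point at benign, locally owned test files:

```javascript
// Rough latency check for shield.analyze() on a local sample set.
// The image paths are placeholders; use benign, locally owned test files.
const { performance } = require('perf_hooks');

async function benchmarkAnalyze(shield, paths) {
  const timings = [];
  for (const p of paths) {
    const start = performance.now();
    await shield.analyze(p);
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  const avg = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  const p95 = timings[Math.floor(timings.length * 0.95)];
  console.log(`avg ${avg.toFixed(1)}ms, p95 ${p95.toFixed(1)}ms over ${timings.length} files`);
}
```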

### Legal Requirements

- Mandatory Reporting: report all detected CSAM to NCMEC (18 U.S.C. § 2258A)
- Evidence Preservation: retain evidence for law enforcement (90+ days minimum)
- No Distribution: never distribute detected CSAM, even internally
- User Notification: do NOT notify the user of detection (notification can constitute obstruction)
- Law Enforcement Cooperation: full cooperation with investigations
- International Compliance: comply with local laws (IWF, INHOPE, etc.)
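
As a worked example of the 90-day retention floor, a small guard can refuse evidence deletion before the window has elapsed. The `caseRecord` shape here is hypothetical, and the actual retention policy belongs to your legal team:

```javascript
// Guard: block evidence deletion before the minimum retention window.
// 90 days matches the statutory floor cited above; many teams retain far
// longer. The "caseRecord" shape is hypothetical.
const MIN_RETENTION_MS = 90 * 24 * 60 * 60 * 1000;

function canDeleteEvidence(caseRecord, now = Date.now()) {
  const preservedAt = new Date(caseRecord.preservedAt).getTime();
  return now - preservedAt >= MIN_RETENTION_MS;
}
```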

### Use Cases

Social media platforms
Messaging applications
File sharing services
Cloud storage providers
Dating applications
Gaming platforms with UGC
Forum and community sites
Any platform allowing user uploads

### False Positives

Problem: legitimate content flagged as CSAM.

Solution:

- Review the context analysis results.
- Check for family, medical, or educational context.
- Manual review by trained staff ONLY (a routing sketch follows this list).
- Document false positives for model improvement.
- NEVER automatically ignore a detection; always review.
- Account for legitimate use cases in detection logic.
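
A minimal routing sketch under those rules: anything short of an exact hash match goes to a trained human queue rather than being auto-actioned or auto-ignored. The `reviewQueue` interface is hypothetical; the `analysis` shape follows the `analyze()` output in the API section above:

```javascript
// Route detections: exact hash matches act automatically; everything else
// goes to trained reviewers. "reviewQueue" is a hypothetical interface.
async function routeDetection(analysis, reviewQueue) {
  if (analysis.detectionType === 'hash_match' && analysis.confidence >= 0.95) {
    return; // the automatic pipeline already handled it
  }
  // Never auto-ignore: every non-hash detection gets a human decision.
  await reviewQueue.enqueue({
    caseId: analysis.evidence && analysis.evidence.caseId,
    threat: analysis.threat,
    confidence: analysis.confidence,
    context: analysis.details && analysis.details.context,
  });
}
```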

### Missing Known CSAM

Problem: hash databases not catching known content.

Solution:

- Verify database updates are running hourly (a scheduling sketch follows this list).
- Check that all hash databases are enabled.
- Ensure the proper API keys are configured.
- Test the hash generation process.
- Verify network connectivity to the update servers.
- Contact the database providers for troubleshooting.
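
A scheduling sketch for the hourly-update check, using only the documented `updateHashDatabases()` call; `alertOps` is a placeholder for your own paging or alerting integration:

```javascript
// Keep hash databases fresh and surface failed updates loudly.
// "alertOps" is a placeholder for your paging/alerting integration.
function scheduleHashUpdates(shield, alertOps) {
  setInterval(async () => {
    try {
      await shield.updateHashDatabases();
      console.log(`Hash databases updated at ${new Date().toISOString()}`);
    } catch (err) {
      // A stale database means known CSAM can slip through: page someone.
      alertOps(`Hash database update failed: ${err.message}`);
    }
  }, 60 * 60 * 1000); // hourly, matching the configured updateFrequency
}
```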

### NCMEC Reporting Failures

Problem: reports not submitting to NCMEC.

Solution:

- Verify API credentials.
- Check network connectivity.
- Queue reports for retry (a retry sketch follows this list).
- Fall back to manual submission if automatic reporting fails.
- Contact NCMEC technical support.
- Keep local evidence regardless of submission status.
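
A retry sketch for failed submissions, wrapping the documented `reportToNCMEC()` call with exponential backoff. The in-memory queue is illustrative only; in production back it with durable storage, and note that evidence is preserved locally regardless of submission outcome:

```javascript
// Retry failed NCMEC submissions with exponential backoff.
// The in-memory array is illustrative; use durable storage in production.
const pendingManualReports = [];

async function submitWithRetry(shield, report, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await shield.reportToNCMEC(report);
    } catch (err) {
      console.error(`NCMEC submission attempt ${attempt} failed:`, err.message);
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
    }
  }
  // Escalate to manual submission; local evidence is already preserved.
  pendingManualReports.push(report);
  return null;
}
```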

### Age Estimation Inaccuracy

Problem: age estimation giving unreliable results.

Solution:

- Use it as one signal, not the sole determinant (a multi-signal sketch follows this list).
- Combine it with the other detection methods.
- Lower the confidence threshold in favor of safety.
- Update age estimation models regularly.
- Consider edge cases (subjects appearing older or younger than they are).
- When in doubt, err on the side of caution.
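
A multi-signal sketch along those lines, combining the age, anatomical, and context fields from the `analyze()` result rather than acting on age alone. The weights and thresholds are illustrative, not calibrated policy values:

```javascript
// Treat age estimation as one weighted signal among several.
// Weights and thresholds below are illustrative, not calibrated values.
function combineSignals({ ageEstimation, anatomicalDetection, context }) {
  let score = 0;
  if (ageEstimation.isMinor) score += 0.4 * ageEstimation.confidence;
  if (anatomicalDetection.inappropriate) score += 0.4;
  if (context && !context.isLegitimate) score += 0.2;
  // Err on the side of caution: a mid-range score still means human review.
  if (score >= 0.7) return 'block_and_report';
  if (score >= 0.3) return 'manual_review';
  return 'allow';
}
```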

### Evidence Storage Issues

Problem: evidence not being preserved correctly.

Solution:

- Verify the encryption keys are configured (a preflight sketch follows this list).
- Check storage permissions and free space.
- Test the evidence retrieval process.
- Implement redundant storage.
- Verify backups regularly.
- Consult your legal team on retention requirements.
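
A preflight sketch for the storage checks, verifying the encryption key and a writable evidence path before the system goes live. The env var name and default path mirror the configuration above but should be treated as assumptions:

```javascript
// Preflight: fail fast if evidence storage cannot possibly work.
// The env var and default path mirror the configuration above.
const fs = require('fs');
const path = require('path');

function verifyEvidenceStorage({ storagePath = '/secure/evidence/' } = {}) {
  if (!process.env.EVIDENCE_ENCRYPTION_KEY) {
    throw new Error('EVIDENCE_ENCRYPTION_KEY is not configured');
  }
  fs.mkdirSync(storagePath, { recursive: true });
  // Round-trip a marker file to prove the volume is writable.
  const probe = path.join(storagePath, '.write-probe');
  fs.writeFileSync(probe, String(Date.now()));
  fs.unlinkSync(probe);
  console.log('Evidence storage preflight passed');
}
```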

### Integration Example

```javascript
// ⚠️ CRITICAL SYSTEM INTEGRATION
const fs = require('fs');
const express = require('express');
const multer = require('multer');
const CSAMShield = require('@raghulpasupathi/csam-shield');

const app = express();
const upload = multer({ dest: '/secure/temp/' });
const shield = new CSAMShield({
  mode: 'maximum-protection',
  ncmecApiKey: process.env.NCMEC_API_KEY
});

// Critical: pre-upload hash check (assumes auth middleware has populated req.user)
app.post('/api/upload', upload.single('file'), async (req, res) => {
  const tempPath = req.file.path;

  try {
    // Generate hash immediately
    const contentHash = await shield.generateSecureHash(tempPath);

    // Check against known CSAM databases FIRST
    const hashCheck = await shield.checkHash(contentHash);

    if (hashCheck.isKnownCSAM) {
      // CRITICAL: Known CSAM detected
      console.error('🚨 CRITICAL: Known CSAM hash matched');

      // Preserve evidence
      await shield.preserveEvidence({
        contentHash,
        userId: req.user.id,
        ip: req.ip,
        uploadAttempt: true,
        timestamp: new Date()
      });

      // Automatic NCMEC report
      await shield.reportToNCMEC({
        type: 'known_csam_upload',
        hash: contentHash,
        user: req.user,
        ip: req.ip
      });

      // Suspend user immediately
      await shield.suspendUser(req.user.id, {
        reason: 'CSAM_UPLOAD',
        permanent: true
      });

      // Delete file securely
      await shield.secureDelete(tempPath);

      // DO NOT reveal reason to user
      return res.status(400).json({
        success: false,
        error: 'Upload failed. Please contact support.'
      });
    }

    // Perform full analysis
    const analysis = await shield.analyze(tempPath);

    if (analysis.threat === 'CRITICAL') {
      // New CSAM detected
      console.error('🚨 CRITICAL: Potential CSAM detected');

      // Execute emergency protocol
      await shield.executeEmergencyProtocol({
        content: tempPath,
        user: req.user,
        analysis: analysis
      });

      // DO NOT reveal reason to user
      return res.status(400).json({
        success: false,
        error: 'Upload failed. Please contact support.'
      });
    }

    // Content passed all checks: hand off to the platform's own storage layer
    const url = await uploadToStorage(tempPath); // platform-specific helper

    res.json({
      success: true,
      url: url
    });
  } catch (error) {
    console.error('CSAM Shield error:', error);

    // Fail closed - reject upload
    res.status(500).json({
      success: false,
      error: 'Upload failed. Please try again.'
    });
  } finally {
    // Always clean up temp file
    if (fs.existsSync(tempPath)) {
      await shield.secureDelete(tempPath);
    }
  }
});

// Background monitoring of existing content
async function scanExistingContent() {
  console.log('Starting periodic content scan...');

  const contentBatch = await getContentForScanning(1000); // platform-specific helper

  for (const content of contentBatch) {
    try {
      const hash = await shield.generateSecureHash(content.url);
      const check = await shield.checkHash(hash);

      if (check.isKnownCSAM) {
        console.error(`🚨 CRITICAL: Known CSAM found in existing content: ${content.id}`);

        // Execute emergency protocol
        await shield.executeEmergencyProtocol({
          contentId: content.id,
          userId: content.userId,
          discoveryMethod: 'periodic_scan'
        });
      }
    } catch (error) {
      console.error(`Error scanning content ${content.id}:`, error);
    }
  }
}

// Run hourly scans
setInterval(scanExistingContent, 60 * 60 * 1000);

// Admin dashboard (RESTRICTED ACCESS)
app.get('/admin/csam/dashboard', requireSecurityClearance, async (req, res) => {
  const stats = await shield.getStats({
    period: '30d',
    includeReports: true
  });

  res.json({
    success: true,
    stats: stats,
    warning: 'RESTRICTED: Security clearance required'
  });
});

// Compliance reporting (LEGAL TEAM ONLY)
app.get('/legal/csam/compliance-report', requireLegalAccess, async (req, res) => {
  const report = await shield.generateComplianceReport({
    period: req.query.period || 'monthly',
    format: 'legal'
  });

  res.json({
    success: true,
    report: report
  });
});
```

### Best Practices (CRITICAL)

- Zero Tolerance: no exceptions; immediate action on detection
- Report Everything: when in doubt, report to NCMEC
- Preserve Evidence: secure storage for law enforcement
- Staff Protection: mental health support and limited exposure
- Never Distribute: do not share detected content, even internally
- Legal Compliance: follow all mandatory reporting laws
- User Privacy: balance detection with legitimate user privacy
- Regular Updates: keep hash databases current (hourly)
- Audit Everything: complete logging for legal proceedings
- Encryption: encrypt all evidence and sensitive data
- Access Control: strict role-based access to these systems
- Cooperation: full cooperation with law enforcement

### Emergency Contacts

- NCMEC CyberTipline: 1-800-843-5678 / report.cybertip.org
- FBI IC3: ic3.gov
- Interpol: interpol.int/Crimes/Crimes-against-children
- IWF (UK): iwf.org.uk
- INHOPE: inhope.org

### Mental Health Resources (for staff)

Working with CSAM detection is traumatic. Provide:

- Regular counseling services
- Rotation schedules
- Debriefing sessions
- Time off after exposure
- Peer support groups
- 24/7 crisis support
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: raghulpasupathi
- Version: 1.0.0
## Source health
- Status: healthy
- Item download looks usable.
- Yavira can redirect you to the upstream package for this item.
- Health scope: item
- Reason: direct_download_ok
- Checked at: 2026-04-29T15:37:04.700Z
- Expires at: 2026-05-06T15:37:04.700Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/csam-shield)
- [Send to Agent page](https://openagent3.xyz/skills/csam-shield/agent)
- [JSON manifest](https://openagent3.xyz/skills/csam-shield/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/csam-shield/agent.md)
- [Download page](https://openagent3.xyz/downloads/csam-shield)