# Send SpecVibe to your agent
The item is currently unstable or timing out, so use the source page and any available docs to guide the install.
## Fast path
- Open the source page via Review source status.
- If you can obtain the package, extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the source page and extracted files.
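The extraction step can be scripted once a real package downloads. A minimal Python sketch, using an in-memory stand-in archive because the paths and package contents here are illustrative:

```python
import io
import pathlib
import tempfile
import zipfile

# Build a tiny stand-in ZIP; a real run would open the downloaded file instead.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("SKILL.md", "# SpecVibe\n")

# Extract into a folder the agent can access (here a temp dir for the demo).
with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp) / "skills" / "specvibe"
    with zipfile.ZipFile(buf) as zf:
        zf.extractall(target)  # creates the target folder as needed
    extracted = sorted(p.name for p in target.iterdir())
```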
## Suggested prompts
### New install

```text
I tried to install a skill package from Yavira, but the item is currently unstable or timing out. Inspect the source page and any extracted docs, then tell me what you can confirm and any manual steps still required. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
### Upgrade existing

```text
I tried to upgrade a skill package from Yavira, but the item is currently unstable or timing out. Compare the source page and any extracted docs with my current installation, then summarize what changed and what manual follow-up I still need. Then review README.md for any prerequisites, environment setup, or post-install checks.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "specvibe",
    "name": "SpecVibe",
    "source": "tencent",
    "type": "skill",
    "category": "效率提升",
    "sourceUrl": "https://clawhub.ai/badideal-2046/specvibe",
    "canonicalUrl": "https://clawhub.ai/badideal-2046/specvibe",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/specvibe",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=specvibe",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "CONTRIBUTING.md",
      "README.md",
      "SKILL.md",
      "references/00-specvibe.md",
      "references/01-schema-and-types.md",
      "references/02-backend.md"
    ],
    "downloadMode": "manual_only",
    "sourceHealth": {
      "source": "tencent",
      "slug": "specvibe",
      "status": "unstable",
      "reason": "timeout",
      "recommendedAction": "retry_later",
      "checkedAt": "2026-05-01T06:59:51.250Z",
      "expiresAt": "2026-05-01T18:59:51.250Z",
      "httpStatus": null,
      "finalUrl": null,
      "contentType": null,
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=specvibe",
        "error": "Timed out after 5000ms",
        "slug": "specvibe"
      },
      "scope": "item",
      "summary": "Item is unstable.",
      "detail": "This item is timing out or returning errors right now. Review the source page and try again later.",
      "primaryActionLabel": "Review source status",
      "primaryActionHref": "https://clawhub.ai/badideal-2046/specvibe"
    },
    "validation": {
      "installChecklist": [
        "Wait for the source to recover or retry later.",
        "Review SKILL.md only after the download returns a real package.",
        "Treat this source as transient until the upstream errors clear."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/specvibe",
    "downloadUrl": "https://openagent3.xyz/downloads/specvibe",
    "agentUrl": "https://openagent3.xyz/skills/specvibe/agent",
    "manifestUrl": "https://openagent3.xyz/skills/specvibe/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/specvibe/agent.md"
  }
}
```
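Agents consuming this manifest can gate their behavior on the `sourceHealth` block. A minimal sketch using the field names above (the embedded JSON is abridged from the full manifest):

```python
import json

# Abridged copy of the manifest's install block; field names match the
# machine-readable fields above.
manifest = json.loads("""
{
  "install": {
    "downloadMode": "manual_only",
    "sourceHealth": {
      "status": "unstable",
      "reason": "timeout",
      "recommendedAction": "retry_later"
    }
  }
}
""")

install = manifest["install"]
health = install["sourceHealth"]

# Only attempt an automatic fetch when the source reports healthy and the
# manifest does not force manual handling.
auto_fetch_ok = health["status"] == "ok" and install["downloadMode"] != "manual_only"
retry_later = health["recommendedAction"] == "retry_later"
```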
## Documentation

### SpecVibe: The AI-Native Development Framework

This skill provides a universal, seven-stage framework for developing production-ready, AI-native applications. It enforces a "Specification-as-Source-of-Truth" mindset: every aspect of the project is defined, testable, secure, and documented before and during implementation, following 2026 community best practices from Google, GitHub, and Thoughtworks.

### Core Philosophy

- **Intent is the Source of Truth:** The specification (`spec.md`) is the primary artifact. Code is the last-mile implementation of that intent.
- **Human-AI Collaboration:** Follow the Delegate/Review/Own model at every stage to maximize efficiency and maintain quality.
- **Iterate in Small, Validated Chunks:** Break down work into the smallest possible units, test them, and commit frequently. Never let the AI generate large, monolithic blocks of code.
- **Automate Everything:** Use tests, linters, CI/CD, and automated documentation to build a robust quality assurance system.

### The Seven Stages of AI-Native Development

Follow these stages sequentially. Each stage has a Quality Gate—a set of questions you must answer before proceeding—and a clear Delegate/Review/Own model for human-AI collaboration.

| Stage | Focus | Key Activities | Reference Guides |
|---|---|---|---|
| 1. Specify | User Journey & Requirements | Create `spec.md` defining user stories, goals, and non-functional requirements. | references/00-specvibe.md |
| 2. Plan | Technical Architecture | Create `PLAN.md`, select tech stack, define architecture, and break down the spec into tasks. | references/02-backend.md, references/03-frontend.md |
| 3. Test | Behavior-Driven Definition | Write failing unit, integration, and E2E tests based on the spec and plan. | references/05-testing.md |
| 4. Implement | Code Generation & Refinement | Write (or generate) code to make the tests pass, following a chunked iteration strategy. | references/08-ai-collaboration.md |
| 5. Review | Quality & Security Assurance | Conduct automated and human code reviews, focusing on security, logic, and maintainability. | references/04-security.md |
| 6. Document | Knowledge Capture | Automatically generate and manually refine user and developer documentation. | references/09-documentation.md |
| 7. Deploy | CI/CD & Observability | Containerize, set up CI/CD pipelines, and implement full observability. | references/06-devops.md, references/07-error-handling.md |

### Stage 1: Specify - The Intent

**Goal:** Define what to build and why in a structured `spec.md`.

- **Delegate:** Ask the AI to interview you about the project goals and generate a draft `spec.md` using `templates/spec-template.md`.
- **Review:** Check whether the spec accurately captures all user stories, edge cases, and success metrics.
- **Own:** The final approval of the user requirements and business goals.

### Quality Gate 1: Specification Review

- Does `spec.md` clearly define the user, their problem, and the proposed solution?
- Are non-functional requirements (performance, security, accessibility) listed?
- Is the scope well-defined and unambiguous for an AI to understand?

### Stage 2: Plan - The Blueprint

**Goal:** Translate the `spec.md` into a concrete technical plan.

- **Delegate:** Feed `spec.md` to the AI and ask it to generate a `PLAN.md` detailing the architecture, data models (using references/01-schema-and-types.md), API contracts (using `templates/openapi-template.yaml`), and a task breakdown.
- **Review:** Assess the proposed tech stack, architecture, and task list for feasibility and alignment with best practices.
- **Own:** The final architectural decisions and technology choices.
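As one illustration of the kind of data model a plan might pin down. The `UserStory` type below is hypothetical, not taken from references/01-schema-and-types.md:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class UserStory:
    """Hypothetical planning model; real types would live in PLAN.md."""
    story_id: str
    title: str
    acceptance_criteria: List[str] = field(default_factory=list)

    def is_testable(self) -> bool:
        # A story with no acceptance criteria cannot drive Stage 3 tests.
        return len(self.acceptance_criteria) > 0

story = UserStory(
    story_id="US-1",
    title="User can sign in",
    acceptance_criteria=["valid credentials grant a session"],
)
```

Freezing the dataclass keeps planned models immutable, so tasks derived from them stay stable as the plan is reviewed.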

### Quality Gate 2: Plan Review

- Is the chosen architecture appropriate for the project's scale and requirements?
- Is the API contract complete and consistent with the data models?
- Are the tasks small, independent, and logically sequenced?

### Stage 3: Test - The Safety Net

**Goal:** Define the application's behavior through a comprehensive, failing test suite.

- **Delegate:** Ask the AI to generate a full suite of tests (unit, integration, E2E) based on `spec.md` and `PLAN.md`. Refer to references/05-testing.md.
- **Review:** Ensure tests cover all user stories, API endpoints, and critical business logic. Check for meaningful assertions.
- **Own:** The definition of "done" for each feature, as represented by the tests.

### Quality Gate 3: Test Suite Review

- Does every feature in the spec have corresponding tests?
- Do all tests currently fail for the correct reasons?

### Stage 4: Implement - The Engine Room

**Goal:** Write clean, efficient code that makes all tests pass.

- **Delegate:** Instruct the AI to implement one task at a time, feeding it the relevant spec, plan, and failing test. Use the "chunked iteration" strategy from references/08-ai-collaboration.md.
- **Review:** After each small chunk, review the generated code for correctness and style. Do not wait for the entire feature to be complete.
- **Own:** The responsibility for committing each validated chunk of code to version control.
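A sketch of one such chunk: implementing just enough of a hypothetical `slugify` helper to turn its Stage 3 test green, after which the chunk is committed:

```python
import re

# Stage 4 sketch: implement only enough to make the failing test pass,
# then commit. The slugify helper is illustrative.
def slugify(title: str) -> str:
    # Lowercase, then collapse runs of non-alphanumerics into single dashes.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_lowercases_and_dashes():
    assert slugify("Hello World") == "hello-world"

test_slugify_lowercases_and_dashes()  # green: this chunk is done
```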

### Quality Gate 4: Implementation Review

- Do all tests for the implemented task now pass?
- Is the code clean, readable, and consistent with the project's style guide?
- Has the change been committed to Git with a clear message?

### Stage 5: Review - The Quality Shield

**Goal:** Ensure the implemented code is secure, robust, and maintainable.

- **Delegate:** Automate security scans (SAST, DAST, dependency checking) in CI. Use an AI agent to perform a preliminary code review based on references/04-security.md (OWASP 2025).
- **Review:** A human developer must perform a final review, focusing on logic, architecture, and subtle bugs that AI might miss.
- **Own:** The final approval (LGTM) to merge the code into the main branch.
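The merge gate itself reduces to a conjunction of every automated result plus the human sign-off. A toy sketch with illustrative check names, not a real CI integration:

```python
# Stage 5 sketch: merging requires all automated checks AND human approval.
automated_checks = {"sast": True, "dast": True, "dependency_audit": True, "lint": True}
human_approved = True

merge_allowed = all(automated_checks.values()) and human_approved

# Flip any single check and the gate closes, regardless of approval.
automated_checks["sast"] = False
merge_blocked = not (all(automated_checks.values()) and human_approved)
```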

### Quality Gate 5: Code Review

- Does the code pass all automated security and quality checks?
- Has a human engineer reviewed and approved the changes?

### Stage 6: Document - The Knowledge Base

**Goal:** Create clear, comprehensive documentation for both users and developers.

- **Delegate:** Use AI to generate initial drafts of API documentation from the OpenAPI spec, and user guides from the `spec.md`. Refer to references/09-documentation.md.
- **Review:** Edit the AI-generated content for clarity, accuracy, and tone. Add diagrams and examples.
- **Own:** The final, published documentation that serves as the official source of information.
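A toy sketch of the draft-generation step, rendering markdown from an abridged OpenAPI-style structure (the endpoint shown is hypothetical):

```python
# Stage 6 sketch: mechanically draft an API reference from the contract,
# leaving a human to edit for clarity and add examples.
spec = {
    "paths": {
        "/stories": {"get": {"summary": "List user stories"}},
    }
}

lines = ["# API Reference"]
for path, operations in spec["paths"].items():
    for method, op in operations.items():
        lines.append(f"## {method.upper()} {path}")
        lines.append(op.get("summary", ""))
api_doc = "\n".join(lines)
```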

### Quality Gate 6: Documentation Review

- Is the API documentation accurate and complete?
- Is the user guide easy for a non-technical person to understand?

### Stage 7: Deploy - The Launchpad

**Goal:** Automate deployment and ensure the application is observable and reliable in production.

- **Delegate:** Ask the AI to generate Dockerfiles, CI/CD pipeline configurations (e.g., GitHub Actions), and infrastructure-as-code scripts. Refer to references/06-devops.md.
- **Review:** Verify the deployment scripts, container configurations, and monitoring setup (references/07-error-handling.md).
- **Own:** The production environment and the ultimate responsibility for uptime and reliability.

### Quality Gate 7: Production Readiness Review

- Can the application be deployed and rolled back with a single command?
- Is comprehensive, structured logging (OpenTelemetry) in place?
- Are alerting and monitoring configured for key performance indicators?
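Structured logging can be sketched without the full OpenTelemetry stack. A stdlib-only stand-in that emits one JSON object per log line (field names are illustrative):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("specvibe.deploy")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False  # keep the demo output in our stream only

logger.info("deployment started")
log_record = json.loads(stream.getvalue().strip())
```

JSON-lines output like this is what makes log aggregation and alerting on key indicators mechanical rather than regex-driven.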
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: badideal-2046
- Version: 1.0.0
## Source health
- Status: unstable
- Item is unstable.
- This item is timing out or returning errors right now. Review the source page and try again later.
- Health scope: item
- Reason: timeout
- Checked at: 2026-05-01T06:59:51.250Z
- Expires at: 2026-05-01T18:59:51.250Z
- Recommended action: Review source status
## Links
- [Detail page](https://openagent3.xyz/skills/specvibe)
- [Send to Agent page](https://openagent3.xyz/skills/specvibe/agent)
- [JSON manifest](https://openagent3.xyz/skills/specvibe/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/specvibe/agent.md)
- [Download page](https://openagent3.xyz/downloads/specvibe)