# Send Virtual Reading Group to your agent
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
## Fast path
- Download the package from Yavira.
- Extract it into a folder your agent can access.
- Paste one of the prompts below and point your agent at the extracted folder.
## Suggested prompts
### New install

```text
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
```
### Upgrade existing

```text
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
```
## Machine-readable fields
```json
{
  "schemaVersion": "1.0",
  "item": {
    "slug": "virtual-reading-group",
    "name": "Virtual Reading Group",
    "source": "tencent",
    "type": "skill",
    "category": "AI 智能",
    "sourceUrl": "https://clawhub.ai/IsonaEi/virtual-reading-group",
    "canonicalUrl": "https://clawhub.ai/IsonaEi/virtual-reading-group",
    "targetPlatform": "OpenClaw"
  },
  "install": {
    "downloadUrl": "/downloads/virtual-reading-group",
    "sourceDownloadUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=virtual-reading-group",
    "sourcePlatform": "tencent",
    "targetPlatform": "OpenClaw",
    "packageFormat": "ZIP package",
    "primaryDoc": "SKILL.md",
    "includedAssets": [
      "SKILL.md",
      "assets/synthesis-template.md",
      "references/default-personas.md",
      "references/paper-notes-template.md",
      "references/workflow.md"
    ],
    "downloadMode": "redirect",
    "sourceHealth": {
      "source": "tencent",
      "status": "healthy",
      "reason": "direct_download_ok",
      "recommendedAction": "download",
      "checkedAt": "2026-04-30T16:55:25.780Z",
      "expiresAt": "2026-05-07T16:55:25.780Z",
      "httpStatus": 200,
      "finalUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
      "contentType": "application/zip",
      "probeMethod": "head",
      "details": {
        "probeUrl": "https://wry-manatee-359.convex.site/api/v1/download?slug=network",
        "contentDisposition": "attachment; filename=\"network-1.0.0.zip\"",
        "redirectLocation": null,
        "bodySnippet": null
      },
      "scope": "source",
      "summary": "Source download looks usable.",
      "detail": "Yavira can redirect you to the upstream package for this source.",
      "primaryActionLabel": "Download for OpenClaw",
      "primaryActionHref": "/downloads/virtual-reading-group"
    },
    "validation": {
      "installChecklist": [
        "Use the Yavira download entry.",
        "Review SKILL.md after the package is downloaded.",
        "Confirm the extracted package contains the expected setup assets."
      ],
      "postInstallChecks": [
        "Confirm the extracted package includes the expected docs or setup files.",
        "Validate the skill or prompts are available in your target agent workspace.",
        "Capture any manual follow-up steps the agent could not complete."
      ]
    }
  },
  "links": {
    "detailUrl": "https://openagent3.xyz/skills/virtual-reading-group",
    "downloadUrl": "https://openagent3.xyz/downloads/virtual-reading-group",
    "agentUrl": "https://openagent3.xyz/skills/virtual-reading-group/agent",
    "manifestUrl": "https://openagent3.xyz/skills/virtual-reading-group/agent.json",
    "briefUrl": "https://openagent3.xyz/skills/virtual-reading-group/agent.md"
  }
}
```
## Documentation

### Virtual Reading Group

Orchestrate parallel expert agents to read papers, discuss findings, challenge each other's interpretations, and synthesize an integrated discussion document with traceable citations.

### Quick Start

Minimum inputs required:

- Research question — the lens through which papers are analyzed
- Paper list — paths to PDFs/text files, or paper descriptions for web lookup
- Output directory — where all outputs are written

Optional inputs:

- Custom expert personas (default: see references/default-personas.md)
- Custom junior researcher persona
- Language preference (default: English)
- Number of experts (default: auto-calculated from paper count)

### Workflow Overview

The skill runs 4 sequential phases. Each phase must complete before the next begins.

| Phase | Agents | Input | Output |
|---|---|---|---|
| 1. Paper Reading | N experts (parallel) | Papers + research question | `{AuthorYear}_notes.md`, `{Expert}_session_summary.md` |
| 2. Junior Discussion | 1 junior researcher | All Phase 1 outputs | `{Junior}_discussion.md` |
| 3. Expert Responses | N experts (parallel) | Phase 2 output + other experts' summaries | `{Expert}_response_to_{Junior}.md` |
| 4. Synthesis | 1 synthesizer | All previous outputs | `Integrated_Discussion_Summary.md` |

For detailed prompts and phase specifications: Read references/workflow.md.
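
The phase table above implies a simple barrier structure: parallel fan-out in Phases 1 and 3, single agents in Phases 2 and 4, each phase waiting on the previous one. A minimal sketch of that structure, where `spawn_agent` is a hypothetical stand-in for your platform's actual sub-agent mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

def spawn_agent(label: str, prompt: str) -> str:
    """Hypothetical stand-in for your platform's sub-agent call."""
    return f"{label}: done"

def run_reading_group(experts, research_question):
    # Phase 1: experts read in parallel; wait for all before Phase 2.
    with ThreadPoolExecutor() as pool:
        notes = list(pool.map(
            lambda e: spawn_agent(f"expert-reader-{e}", research_question),
            experts))
    # Phase 2: a single junior agent reads all Phase 1 outputs.
    discussion = spawn_agent("junior-discussion", "\n".join(notes))
    # Phase 3: experts respond in parallel to the junior's questions.
    with ThreadPoolExecutor() as pool:
        responses = list(pool.map(
            lambda e: spawn_agent(f"expert-response-{e}", discussion),
            experts))
    # Phase 4: one synthesizer integrates everything by theme.
    return spawn_agent("synthesis", "\n".join(notes + [discussion] + responses))
```

Each `with` block acts as the barrier: it does not exit until every agent in that phase has returned.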

### Orchestration Procedure

⚠️ Important: The prompts below are abbreviated summaries. For full prompt templates that produce quality output, use references/workflow.md. The pseudocode blocks show orchestration structure — adapt to your actual sub-agent spawning mechanism.

### 1. Validate Inputs

- Confirm research question is specified
- Confirm paper list is non-empty
- Confirm output directory exists or create it
- Load personas from user input or references/default-personas.md
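
The validation steps above can be sketched as a small helper; the function name and signature are illustrative, not part of the package:

```python
from pathlib import Path

def validate_inputs(research_question: str, papers: list, output_dir: str) -> Path:
    """Check the minimum inputs before any agents are spawned."""
    if not research_question or not research_question.strip():
        raise ValueError("A research question is required")
    if not papers:
        raise ValueError("The paper list must be non-empty")
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)  # create the output dir if missing
    return out
```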

### 2. Calculate Expert Assignment

Determine the number of experts and paper batches:

    if paper_count <= 4:
        num_experts = 1
    elif paper_count <= 10:
        num_experts = 2
    elif paper_count <= 20:
        num_experts = min(4, ceil(paper_count / 5))
    else:
        num_experts = min(8, ceil(paper_count / 5))

Distribute papers evenly across experts (max 5 per expert).

⚠️ Context contamination warning: assigning more than 5 papers per expert degrades note quality — later papers in the batch get shallower treatment as context fills up. Prefer 3-5 papers per agent for best results.
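
A runnable version of the assignment logic, including the distribution step (`plan_experts` and `assign_papers` are illustrative names, not part of the package):

```python
from math import ceil

def plan_experts(paper_count: int) -> int:
    """Expert count, following the thresholds above."""
    if paper_count <= 4:
        return 1
    if paper_count <= 10:
        return 2
    if paper_count <= 20:
        return min(4, ceil(paper_count / 5))
    return min(8, ceil(paper_count / 5))

def assign_papers(papers):
    """Round-robin slicing gives each expert an even share."""
    n = plan_experts(len(papers))
    return [papers[i::n] for i in range(n)]
```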

### 3. Execute Phase 1 — Paper Reading (Parallel)

For each expert, spawn a sub-agent with:

- Label: `expert-reader-{expert_name}`
- Model: opus (or sonnet for budget)
- Core instructions:
  - Read assigned papers through the research question lens
  - Write notes using references/paper-notes-template.md
  - Save as `{output_dir}/{AuthorYear}_notes.md`
  - Write a session summary with cross-cutting themes
  - Critical: quote specific passages with section labels — all claims must be traceable

📄 Full prompt template: see references/workflow.md → Phase 1

Wait for all Phase 1 agents to complete before proceeding.
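
As a concrete illustration, the per-expert brief could be assembled like this; `phase1_prompt` and its exact wording are hypothetical, and the full template lives in references/workflow.md:

```python
def phase1_prompt(expert, papers, research_question, language="English"):
    """Assemble an abbreviated Phase 1 brief (illustrative only)."""
    paper_list = "\n".join(f"- {p}" for p in papers)
    return (
        f"You are {expert}. Language: {language}\n"
        f"Research question: {research_question}\n"
        f"Read the assigned papers through the research-question lens:\n"
        f"{paper_list}\n"
        "Write notes per references/paper-notes-template.md, save each as "
        "{AuthorYear}_notes.md, then write a session summary of cross-cutting "
        "themes. Quote specific passages with section labels; every claim "
        "must be traceable."
    )
```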

### 4. Execute Phase 2 — Junior Discussion (Single Agent)

Spawn a single agent with:

- Label: `junior-discussion`
- Model: opus (required — needs strong reasoning)
- Core instructions:
  - Read all Phase 1 outputs (notes + summaries)
  - For each paper: summarize claims and pose challenging questions to each expert
  - Generate Grand Questions: 3 unsolved problems, 2 testable hypotheses, 2 methodological gaps
  - Reference specific passages — be intellectually provocative

📄 Full prompt template: see references/workflow.md → Phase 2

Wait for Phase 2 to complete before proceeding.

### 5. Execute Phase 3 — Expert Responses (Parallel)

For each expert, spawn a sub-agent with:

- Label: `expert-response-{expert_name}`
- Model: opus (recommended)
- Core instructions:
  - Read the junior's discussion + other experts' summaries + own notes
  - Respond to each question directed at them (150-300 words per response)
  - Reference specific paper passages and engage with the other experts' perspectives
  - Respond to Grand Questions from their domain expertise
  - Be collegial but intellectually rigorous — disagree where warranted

📄 Full prompt template: see references/workflow.md → Phase 3

Wait for all Phase 3 agents to complete before proceeding.

### 6. Execute Phase 4 — Synthesis (Single Agent)

Spawn a single agent with:

- Label: `synthesis`
- Model: opus (required — complex reasoning)
- Core instructions:
  - Read ALL files from Phases 1-3
  - Follow the assets/synthesis-template.md structure
  - Organize by THEME, not by paper or speaker
  - Attribute every claim: [Expert_A]/[Expert_B]/[Junior] + (PaperCode, §Section)
  - Include: Points of Consensus, Points of Disagreement, Open Questions
  - Synthesize, don't summarize — find the intellectual threads

📄 Full prompt template: see references/workflow.md → Phase 4

### 7. Report Completion

List all generated files and provide a brief summary of the discussion themes.

### Deeper Discussion

If the user wants experts to expand on specific points:

- Spawn new expert response agent(s) with targeted follow-up questions
- Re-run Phase 4 synthesis including the additional responses

### Second Round

For a full second round (new questions, new responses):

- Rename Phase 2-4 outputs with a round suffix (e.g., `Chen_discussion_r1.md`)
- Re-run Phase 2 with instructions to build on the previous round
- Continue through Phases 3-4
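
The renaming step can be automated with a small helper; `suffix_round` is an illustrative name, and the glob patterns assume the file naming conventions below:

```python
from pathlib import Path

def suffix_round(output_dir, round_num=1):
    """Rename Phase 2-4 outputs with a round suffix,
    e.g. Chen_discussion.md -> Chen_discussion_r1.md."""
    patterns = ["*_discussion.md", "*_response_to_*.md",
                "Integrated_Discussion_Summary.md"]
    renamed = []
    for pattern in patterns:
        for f in sorted(Path(output_dir).glob(pattern)):
            target = f.with_name(f"{f.stem}_r{round_num}{f.suffix}")
            f.rename(target)
            renamed.append(target.name)
    return sorted(renamed)
```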

### Recovery from Partial Run

If a phase fails:

- Check error handling in references/workflow.md
- Retry failed agent(s) individually
- Continue from the last successful phase (outputs are saved incrementally)

### File Naming Conventions

| File Type | Pattern | Example |
|---|---|---|
| Paper notes | `{FirstAuthorLastName}{Year}_notes.md` | `Chen2024_notes.md` |
| Expert summary | `{ExpertLastName}_session_summary.md` | `Lin_session_summary.md` |
| Junior discussion | `{JuniorLastName}_discussion.md` | `Chen_discussion.md` |
| Expert response | `{ExpertLastName}_response_to_{JuniorLastName}.md` | `Lin_response_to_Chen.md` |
| Synthesis | `Integrated_Discussion_Summary.md` | — |
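
These patterns are mechanical enough to generate in code; the helper names here are illustrative:

```python
def note_filename(first_author_last: str, year: int) -> str:
    """Paper notes: {FirstAuthorLastName}{Year}_notes.md"""
    return f"{first_author_last}{year}_notes.md"

def response_filename(expert_last: str, junior_last: str) -> str:
    """Expert response: {ExpertLastName}_response_to_{JuniorLastName}.md"""
    return f"{expert_last}_response_to_{junior_last}.md"
```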

### Citation Requirements

Enforce in all agent prompts:

- Every factual claim must reference a paper
- Use format: (AuthorYear, §Section) or (AuthorYear, p.X)
- Direct quotes must include section/page
- Discussion claims must attribute speaker: [Expert_A], [Expert_B], [Junior]
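
The citation format can be checked mechanically when reviewing agent output. One possible validator; the regex is an assumption about how agents format citations, not something the package ships:

```python
import re

# Matches (AuthorYear, §Section) or (AuthorYear, p.X),
# e.g. (Chen2024, §3.2) or (Chen2024, p.7).
CITATION = re.compile(r"\([A-Z][A-Za-z-]+\d{4},\s*(?:§[\w.]+|p\.\d+)\)")

def has_citation(claim: str) -> bool:
    """True if the claim carries at least one well-formed citation."""
    return bool(CITATION.search(claim))
```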

### ⚠️ Anti-Fabrication Rule (Critical)

Never fabricate citations. If an agent cannot find the exact passage in the source text:

- Leave the field blank or write `<!-- source not found -->`
- Do NOT paraphrase and present it as a quote
- Do NOT infer what the paper "probably says"

Fabricated citations are worse than missing citations — they corrupt the knowledge base silently. Accuracy > Coverage.

### No Source = No Notes

If a paper has no PDF or markdown source available:

- Write a placeholder note with status 📭 Unread
- Leave all content sections blank
- Do NOT attempt to write notes from memory or web search results

Only write substantive notes when the actual source document is accessible.

### Scaling Guidelines

| Papers | Experts | Batches | Estimated Time |
|---|---|---|---|
| 1-6 | 1 | 1 | 15-20 min |
| 7-12 | 2 | 2 | 20-30 min |
| 13-24 | 3-4 | 3-4 | 30-45 min |
| 25-50 | 4-8 | 5-8 | 45-90 min |

### Custom Personas

Replace default personas by providing:

    Expert A: Dr. [Name], [Role]. Background in [X].
    Emphasizes [methodology/perspective]. Skeptical of [Y].
    Tone: [collegial/rigorous/provocative].

    Expert B: Dr. [Name], [Role]. Background in [X].
    ...

See references/default-personas.md for complete templates.

### Language

Pass the language parameter when invoking the orchestration:

- All agent prompts include a `Language: {language}` instruction
- Agents read papers and write outputs in the specified language
- Default: English

Example: "Run the reading group in Japanese" → adds Language: Japanese to all phase prompts.

### Model Selection

Model choice significantly impacts output quality and cost:

| Configuration | Phases | Quality | Cost | Use When |
|---|---|---|---|---|
| Full opus | All phases use opus | Highest | $$$ | Publication-quality analysis, complex papers |
| Mixed | Phase 1: sonnet, Phases 2-4: opus | High | $$ | Good balance — reading is less reasoning-intensive |
| Budget | All phases use sonnet | Medium | $ | Quick exploration, simpler papers |

Recommendations:

- Phase 2 (Junior Discussion) benefits most from opus — it requires synthesizing multiple papers and generating non-obvious questions
- Phase 4 (Synthesis) also benefits from opus — thematic organization requires complex reasoning
- Phase 1 (Reading) can use sonnet if papers aren't highly technical
- Phase 3 (Responses) can use sonnet if questions are straightforward
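
One way to encode the three configurations is a phase-to-model lookup; the map and `model_for` helper are hypothetical conveniences, not part of the package:

```python
# Hypothetical phase-to-model maps for the three configurations above.
MODEL_CONFIGS = {
    "full_opus": {1: "opus", 2: "opus", 3: "opus", 4: "opus"},
    "mixed":     {1: "sonnet", 2: "opus", 3: "opus", 4: "opus"},
    "budget":    {1: "sonnet", 2: "sonnet", 3: "sonnet", 4: "sonnet"},
}

def model_for(config: str, phase: int) -> str:
    """Look up which model to request when spawning a phase's agents."""
    return MODEL_CONFIGS[config][phase]
```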

### Integration

This skill is standalone but works well with paper collection workflows:

- literature-manager or similar skills: use these to gather and organize papers first, then pass the collection to virtual-reading-group
- PDF extraction tools: pre-extract text from PDFs if agents have trouble reading them directly

### References

- references/workflow.md — detailed phase specifications and full prompt templates
- references/default-personas.md — ready-to-use expert and junior researcher personas
- references/paper-notes-template.md — template for individual paper notes

### Assets

- assets/synthesis-template.md — structure for the final integrated discussion summary
## Trust
- Source: tencent
- Verification: Indexed source record
- Publisher: IsonaEi
- Version: 1.1.0
## Source health
- Status: healthy
- Source download looks usable.
- Yavira can redirect you to the upstream package for this source.
- Health scope: source
- Reason: direct_download_ok
- Checked at: 2026-04-30T16:55:25.780Z
- Expires at: 2026-05-07T16:55:25.780Z
- Recommended action: Download for OpenClaw
## Links
- [Detail page](https://openagent3.xyz/skills/virtual-reading-group)
- [Send to Agent page](https://openagent3.xyz/skills/virtual-reading-group/agent)
- [JSON manifest](https://openagent3.xyz/skills/virtual-reading-group/agent.json)
- [Markdown brief](https://openagent3.xyz/skills/virtual-reading-group/agent.md)
- [Download page](https://openagent3.xyz/downloads/virtual-reading-group)