Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Design and implement adaptive testing systems using Item Response Theory (IRT). Use when working with computerized adaptive tests (CAT), psychometric assessm...
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Design computerized adaptive tests that measure ability efficiently and accurately using Item Response Theory.
Adaptive tests adjust difficulty in real time based on student responses. A correct answer → a harder next question; an incorrect answer → an easier one. The result: accurate ability estimates in roughly 50% fewer questions than fixed-length tests. Key advantage: traditional tests waste time on questions that are too easy or too hard. Adaptive tests spend time where measurement matters most: near the student's ability level.
| You need to... | See |
| --- | --- |
| Understand IRT models and parameters | IRT Fundamentals |
| Design a new adaptive test | Test Design Workflow |
| Choose item selection algorithm | Item Selection |
| Decide when to stop the test | Stopping Rules |
| Calibrate new questions | references/calibration.md |
| Implement CAT algorithm | references/implementation.md |
Most adaptive tests use the 3PL model. Each question has three parameters:
- a (discrimination): how well the question differentiates ability levels. Higher = steeper curve. Typical range: 0.5 to 2.5.
- b (difficulty): the ability level at the inflection point of the curve (where P(correct) = 0.5 when there is no guessing). Range: -3 to +3 (standardized scale).
- c (guessing): the probability of a correct response by guessing alone. Usually 0.2 to 0.25 for multiple choice.

Probability of a correct response:

P(correct | ability, a, b, c) = c + (1 - c) / (1 + e^(-a(ability - b)))

Simpler models:
- 2PL: set c = 0 (no guessing parameter)
- 1PL (Rasch): set c = 0 and a = 1 for all items (only difficulty varies)

Use 3PL for high-stakes tests. Use 2PL/1PL when sample size is small (<500 responses per item).
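As a quick illustration, the response function above can be written as a small Python helper. This is a sketch: NumPy is used only for the exponential, and the parameter values in the example are made up.

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3PL probability of a correct response at ability theta.

    a = discrimination, b = difficulty, c = pseudo-guessing.
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Example: an average student (theta = 0) on a moderately hard,
# moderately discriminating multiple-choice item.
print(p_correct(theta=0.0, a=1.2, b=0.5, c=0.2))  # ~0.48
```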
Information measures how precisely an item estimates ability at a given level. Peak information occurs when ability ≈ difficulty (the b parameter). The standard error (SE) is the reciprocal of the square root of the accumulated information: SE = 1 / sqrt(Information). Goal of CAT: maximize information (minimize SE) at the student's true ability level.
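A minimal sketch of these two quantities, using the standard 3PL item-information formula; the function names are ours, not from the package.

```python
import numpy as np

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * ((p - c) ** 2 / (1.0 - c) ** 2) * ((1.0 - p) / p)

def standard_error(test_information):
    """SE of the ability estimate: SE = 1 / sqrt(total information)."""
    return 1.0 / np.sqrt(test_information)

# Information peaks near theta = b: compare an on-target vs. off-target item.
print(item_information(0.0, a=1.5, b=0.0, c=0.2))   # ~0.38 (on target)
print(item_information(0.0, a=1.5, b=2.0, c=0.2))   # ~0.02 (far off target)
```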
- Purpose: placement, diagnostic, certification, progress monitoring?
- Content domain: single skill or multidimensional?
- Target population: what ability range (-3 to +3)?
- Constraints: time limit, minimum/maximum length, content balance
Minimum bank size: 10× the average test length. For a 20-item CAT, you need ≥200 calibrated items. Distribution targets:
- Difficulty (b): spread across the expected ability range
- Discrimination (a): target 1.0 to 2.0 (high discrimination)
- Exposure: no item used more than 20% of the time

Content balancing: if testing math, ensure geometry, algebra, etc. are proportionally represented.
Pick one from each category:
- Item selection (see below): Maximum Information; Randomesque (MFI + exposure control); Content balancing
- Ability estimation: Maximum Likelihood Estimation (MLE); Expected A Posteriori (EAP), better for extreme scores; Weighted Likelihood (WLE)
- Stopping rule (see below): Fixed length; Standard error threshold; Information threshold
Before going live, simulate 1000+ test sessions with known abilities. Check:
- Average test length
- SE at different ability levels
- Item exposure rates
- Content balance adherence

Adjust if needed.
Rule: Select the item with the highest information at the current ability estimate. Pros: optimal precision, shortest tests. Cons: overuses the "best" items, poor security. Use when: pilot testing, low-stakes practice.
Rule: Select from the top N items by information (e.g., top 5) and choose randomly from that set. Pros: balances precision and security. Cons: slightly longer tests than pure MFI. Use when: operational tests; the default choice.
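A sketch of the randomesque rule, reusing item_information from the IRT fundamentals snippet; the item bank is assumed to be a list of (a, b, c) tuples, which is an illustrative format, not a prescribed one.

```python
import numpy as np

def select_randomesque(theta, bank, administered, top_n=5, rng=None):
    """Pick randomly among the top_n most informative unadministered items
    at the current ability estimate (randomesque exposure control)."""
    rng = rng or np.random.default_rng()
    candidates = [i for i in range(len(bank)) if i not in administered]
    infos = np.array([item_information(theta, *bank[i]) for i in candidates])
    top = np.argsort(infos)[-top_n:]          # positions within `candidates`
    return candidates[int(rng.choice(top))]
```

With top_n = 1 this collapses to pure maximum information; larger values trade a little precision for lower item exposure.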
Rule: Start with high-discrimination items (high a) and use mid-discrimination items later. Pros: fast initial ability estimate. Cons: complex to implement. Use when: very large item banks, research settings.
Rule: Track content area usage and prioritize underrepresented areas when selecting the next item. Implementation: weight information by content constraint satisfaction. Use when: blueprint requirements, multidimensional tests.
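One simple way to weight information by content constraint satisfaction is to boost areas that are behind their blueprint share. The heuristic below is illustrative, not a standard algorithm, and the names are made up.

```python
def blueprint_weight(area, targets, asked_areas):
    """Return a multiplier >= 1 that favors under-represented content areas."""
    total = max(len(asked_areas), 1)
    observed_share = asked_areas.count(area) / total
    deficit = max(0.0, targets[area] - observed_share)
    return 1.0 + deficit

# Example: algebra should be 40% of the test but is only 20% so far.
targets = {"algebra": 0.40, "geometry": 0.30, "statistics": 0.30}
asked = ["geometry", "statistics", "geometry", "statistics", "algebra"]
print(blueprint_weight("algebra", targets, asked))   # 1.2
# Multiply each candidate item's information by this weight before selection.
```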
Stop after N items (e.g., 20 questions). Pros: predictable time, simple. Cons: may over- or under-test some students. Use when: time limits matter or a simple implementation is needed.
Stop when SE < target (e.g., SE < 0.3). Pros: consistent precision across ability levels. Cons: variable test length (harder to schedule). Typical targets: low-stakes SE < 0.4; medium-stakes SE < 0.3; high-stakes SE < 0.25. Use when: precision matters more than time.
Stop when (SE < target) OR (length ≥ max) OR (length ≥ min AND the ability estimate is stable). Use when: production systems (the safest approach).
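A combined stopping rule might look like the sketch below; the default thresholds are illustrative, not recommendations from the package docs.

```python
def should_stop(se, n_items, theta_history,
                se_target=0.3, min_items=8, max_items=25, stable_delta=0.05):
    """Combined rule: precision reached, hard maximum hit, or the ability
    estimate has stabilized after a minimum test length."""
    if se < se_target:
        return True
    if n_items >= max_items:
        return True
    stable = (len(theta_history) >= 2 and
              abs(theta_history[-1] - theta_history[-2]) < stable_delta)
    return n_items >= min_items and stable
```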
Options: population mean (θ = 0), or prior information (e.g., grade level, a previous test). The first question is of medium difficulty; estimate from there. Never start at the extremes (-3 or +3).
All correct or all incorrect: MLE fails (the estimate diverges). Use EAP or a Bayesian prior to regularize. Rapid changes: if the ability estimate jumps by more than 1.0, consider a response anomaly (cheating, guessing).
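A grid-based EAP estimator with a normal prior handles all-correct and all-incorrect patterns where MLE diverges. This is a minimal sketch, assuming items are (a, b, c) tuples and responses are coded 0/1.

```python
import numpy as np

def eap_estimate(responses, items, prior_mean=0.0, prior_sd=1.0, grid=None):
    """EAP ability estimate (posterior mean) and posterior SD as the SE."""
    if grid is None:
        grid = np.linspace(-4.0, 4.0, 81)
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    likelihood = np.ones_like(grid)
    for (a, b, c), x in zip(items, responses):
        p = c + (1.0 - c) / (1.0 + np.exp(-a * (grid - b)))
        likelihood *= p if x == 1 else (1.0 - p)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    theta_hat = float(np.sum(grid * posterior))
    se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * posterior)))
    return theta_hat, se
```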
Track how often each item is used and flag items used more than 20% of the time. Consider: randomesque selection (above), the Sympson-Hetter method (advanced), or periodic item bank refresh.
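A minimal exposure audit, assuming you log the list of administered item ids per completed session; that logging format is hypothetical.

```python
from collections import Counter

def flag_overexposed(administered_log, threshold=0.20):
    """Return {item_id: exposure_rate} for items used in more than
    `threshold` of all sessions."""
    if not administered_log:
        return {}
    n_sessions = len(administered_log)
    counts = Counter(item for session in administered_log for item in session)
    return {item: count / n_sessions
            for item, count in counts.items()
            if count / n_sessions > threshold}
```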
If testing multiple skills (e.g., algebra + geometry), use separate ability estimates per dimension. Select items to balance information across dimensions. Warning: MIRT requires larger item banks and more complex calibration.
| Mistake | Consequence | Fix |
| --- | --- | --- |
| Too few items in bank | High exposure, security risk | Aim for 10× the average test length |
| Poorly distributed difficulties | Accurate only in a narrow ability range | Spread items across -2 to +2 difficulty |
| Ignoring content balance | May skip important topics | Build content constraints into item selection |
| Using MLE for all-incorrect responses | Returns -∞ | Use EAP or cap estimates at -3/+3 |
| No exposure control | Same items on every test | Use randomesque or Sympson-Hetter |
| Need | File |
| --- | --- |
| Calibrate new items (collect data, estimate parameters) | references/calibration.md |
| Implement CAT algorithm (code patterns, libraries) | references/implementation.md |
Setup:
- Item bank: 300 questions, b from -2 (basic) to +2 (advanced)
- Target: SE < 0.35 or max 25 questions
- Content: 40% algebra, 30% geometry, 30% statistics
- Algorithm: randomesque (top 5), EAP estimation

Flow:
1. Start at θ = 0 (grade-level average)
2. Select item: b ≈ 0, content area needed
3. Student answers → update ability estimate (EAP)
4. Select next: maximize information at the new θ, respect content balance, randomesque from top 5
5. Stop when SE < 0.35 or 25 questions reached
6. Report: ability estimate + placement recommendation

Result: average of 18 questions; 95% of students placed within ±0.5 grade levels of their true ability.
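Stitching the earlier sketches together, one simulated session for a test like this might look as follows. It reuses p_correct, select_randomesque, eap_estimate, and should_stop from the snippets above, draws responses from a made-up item bank, and omits content balancing to stay short.

```python
import numpy as np

def run_cat_session(bank, true_theta, se_target=0.35, max_items=25, rng=None):
    """Simulate one adaptive session: responses are sampled from the 3PL
    model at the simulated student's true ability."""
    rng = rng or np.random.default_rng()
    administered, responses = [], []
    theta, se = 0.0, 1.0                      # start at the population mean
    theta_history = [theta]
    while not should_stop(se, len(administered), theta_history,
                          se_target=se_target, max_items=max_items):
        idx = select_randomesque(theta, bank, administered, rng=rng)
        a, b, c = bank[idx]
        responses.append(int(rng.random() < p_correct(true_theta, a, b, c)))
        administered.append(idx)
        theta, se = eap_estimate(responses, [bank[i] for i in administered])
        theta_history.append(theta)
    return theta, se, len(administered)

# Sanity-check length and precision against a made-up 300-item bank.
rng = np.random.default_rng(0)
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2, 2), 0.2) for _ in range(300)]
theta_hat, se, n_items = run_cat_session(bank, true_theta=0.7, rng=rng)
print(theta_hat, se, n_items)
```

Running many such sessions with known abilities is exactly the pre-launch simulation step described in the Test Design Workflow.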
- Lord, F. M. (1980). Applications of Item Response Theory to Practical Testing Problems
- Wainer, H. (2000). Computerized Adaptive Testing: A Primer (2nd ed.)
- van der Linden, W. J., & Glas, C. A. W. (2010). Elements of Adaptive Testing

IRT packages:
- Python: mirt, girth, catsim
- R: mirt, TAM, catR
- Production: custom implementation or AdaptiveTest.io