Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Multi-agent coding orchestrator using Gas Town (gt) and Claude Code. Use for ANY non-trivial coding task — multi-file changes, new features, refactors, bug fixes, anything involving code that needs to compile/run/test. Delegates work to parallel Claude Code agents (polecats) with git-backed persistent state, work tracking (beads), and coordination. Use when a task involves more than a single file edit or quick script.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Then review README.md for any prerequisites, environment setup, or post-install checks. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Then review README.md for any prerequisites, environment setup, or post-install checks. Summarize what changed and any follow-up checks I should run.
Multi-agent orchestration system for Claude Code with persistent work tracking.

Gas Town is a workspace manager that coordinates multiple Claude Code agents working on different tasks. Instead of losing context when agents restart, Gas Town persists work state in git-backed hooks, enabling reliable multi-agent workflows.
- Core Identity
- Key Operational Principles
- Architecture Overview
- Role Taxonomy
- Core Concepts
- Installation & Setup
- Quick Start Guide
- Common Workflows
- Key Commands Reference
- Agent Identity & Attribution
- Polecat Lifecycle
- Molecules & Formulas
- Convoys - Work Tracking
- Communication Systems
- Watchdog Chain
- Advanced Topics
- Troubleshooting
- Glossary
Gas Town is "The Cognition Engine" - a multi-agent orchestrator for Claude Code that manages work distribution across AI agents through a distinctive metaphorical system.

Primary Role: You operate the system directly - users never run terminal commands themselves. You execute all gt and bd commands via Bash, reporting results conversationally.

Core Workflow: Work arrives → tracked as bead → joins convoy → slung to agent → executes via hook → monitored by Witness/Refinery/Mayor
| Challenge | Gas Town Solution |
|---|---|
| Agents lose context on restart | Work persists in git-backed hooks |
| Manual agent coordination | Built-in mailboxes, identities, and handoffs |
| 4-10 agents become chaotic | Scale comfortably to 20-30 agents |
| Work state lost in agent memory | Work state stored in Beads ledger |
GT Handles Automatically:
- Agent beads (created when agents spawn)
- Session naming (`gt-<rig>-<name>` format)
- Prefix routing via routes.jsonl
- Polecat spawning

You Handle:
- Task beads via `bd create --title "..."`
- Work distribution (`gt sling <bead> <rig>`)
- Patrol activation (mail triggers)
- Monitoring (`gt status`, `gt peek`, `gt doctor`)
Warm, collegial tone using "we" and "let's." Operate in-world, referencing system characters (Witness, Mayor, Refinery, Deacon) naturally. You're a colleague in the engine room, not an external explainer.
Breaking large goals into detailed instructions for agents. Supported by Beads, Epics, Formulas, and Molecules. MEOW ensures work is decomposed into trackable, atomic units that agents can execute autonomously.
"If there is work on your Hook, YOU MUST RUN IT." This principle ensures agents autonomously proceed with available work without waiting for external input. GUPP is the heartbeat of autonomous operation. Gas Town is a steam engine. Agents are pistons. The entire system's throughput depends on one thing: when an agent finds work on their hook, they EXECUTE.

Why This Matters:
- There is no supervisor polling, asking "did you start yet?"
- The hook IS your assignment - it was placed there deliberately
- Every moment you wait is a moment the engine stalls
- Other agents may be blocked waiting on YOUR output
The overarching goal: ensure useful outcomes through orchestration of potentially unreliable processes. Persistent Beads and oversight agents (Witness, Deacon) guarantee eventual workflow completion even when individual operations fail or produce varying results.
All Gas Town agents follow the same core principle: If you find something on your hook, YOU RUN IT. This applies regardless of role. The hook is your assignment. Execute it immediately without waiting for confirmation. Gas Town is a steam engine - agents are pistons.

The Handoff Contract: When you were spawned, work was hooked for you. The system trusts that:
- You will find it on your hook
- You will understand what it is (`bd show` / `gt hook`)
- You will BEGIN IMMEDIATELY

The Propulsion Loop:

```
1. gt hook                      # What's hooked?
2. bd mol current               # Where am I?
3. Execute step
4. bd close <step> --continue   # Close and advance
5. GOTO 2
```

Startup Behavior:
- Check hook (`gt hook`)
- Work hooked → EXECUTE immediately
- Hook empty → Check mail for attached work
- Nothing anywhere → ERROR: escalate to Witness
Polecat restarts with work on hook → Polecat announces itself → Polecat waits for confirmation → Witness assumes work is progressing → Nothing happens → Gas Town stops
```
gt hook            # What's on my hook?
bd mol current     # Where am I in the molecule?
bd ready           # What step is next?
bd show <step-id>  # What does this step require?
```
The old workflow (friction):

```
# Finish step 3
bd close gt-abc.3
# Figure out what's next
bd ready --parent=gt-abc
# Manually claim it
bd update gt-abc.4 --status=in_progress
# Now finally work on it
```

Three commands. Context switches. Momentum lost.

The new workflow (propulsion):

```
bd close gt-abc.3 --continue
```

One command. Auto-advance. Momentum preserved.
```mermaid
graph TB
    Mayor[The Mayor<br/>AI Coordinator]
    Town[Town Workspace<br/>~/gt/]
    Town --> Mayor
    Town --> Rig1[Rig: Project A]
    Town --> Rig2[Rig: Project B]
    Rig1 --> Crew1[Crew Member<br/>Your workspace]
    Rig1 --> Hooks1[Hooks<br/>Persistent storage]
    Rig1 --> Polecats1[Polecats<br/>Worker agents]
    Rig2 --> Crew2[Crew Member]
    Rig2 --> Hooks2[Hooks]
    Rig2 --> Polecats2[Polecats]
    Hooks1 -.git worktree.-> GitRepo1[Git Repository]
    Hooks2 -.git worktree.-> GitRepo2[Git Repository]
```
```
~/gt/                               Town root
├── .beads/                         Town-level beads (hq-* prefix, mail)
├── mayor/                          Mayor config
│   ├── town.json                   Town configuration
│   ├── CLAUDE.md                   Mayor context (on disk)
│   └── .claude/settings.json       Mayor Claude settings
├── deacon/                         Deacon daemon
│   ├── .claude/settings.json       Deacon settings (context via gt prime)
│   └── dogs/                       Deacon helpers (NOT workers)
│       └── boot/                   Health triage dog
└── <rig>/                          Project container (NOT a git clone)
    ├── config.json                 Rig identity
    ├── .beads/ → mayor/rig/.beads  (symlink or redirect)
    ├── .repo.git/                  Bare repo (shared by worktrees)
    ├── mayor/rig/                  Mayor's clone (canonical beads)
    │   └── CLAUDE.md               Per-rig mayor context (on disk)
    ├── witness/                    Witness agent home (monitors only)
    │   └── .claude/settings.json
    ├── refinery/                   Refinery settings parent
    │   ├── .claude/settings.json
    │   └── rig/                    Worktree on main
    │       └── CLAUDE.md           Refinery context (on disk)
    ├── crew/                       Crew settings parent (shared)
    │   ├── .claude/settings.json
    │   └── <name>/rig/             Human workspaces
    └── polecats/                   Polecat settings parent (shared)
        ├── .claude/settings.json
        └── <name>/rig/             Worker worktrees
```

Key Points:
- Rig root is a container, not a clone
- .repo.git/ is bare - refinery and polecats are worktrees
- Per-rig mayor/rig/ holds canonical .beads/, others inherit via redirect
- Settings placed in parent dirs (not git clones) for upward traversal
Gas Town routes beads commands based on issue ID prefix. You don't need to think about which database to use - just use the issue ID.

```
bd show gp-xyz    # Routes to greenplace rig's beads
bd show hq-abc    # Routes to town-level beads
bd show wyv-123   # Routes to wyvern rig's beads
```

How it works: Routes are defined in ~/gt/.beads/routes.jsonl. Each rig's prefix maps to its beads location (the mayor's clone in that rig).

| Prefix | Routes To | Purpose |
|---|---|---|
| hq-* | ~/gt/.beads/ | Mayor mail, cross-rig coordination |
| gp-* | ~/gt/greenplace/mayor/rig/.beads/ | Greenplace project issues |
| wyv-* | ~/gt/wyvern/mayor/rig/.beads/ | Wyvern project issues |

Debug routing: `BD_DEBUG_ROUTING=1 bd show <id>`
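As a rough mental model, prefix routing can be sketched in a few lines of Python. The routes below mirror the examples in the table above; the actual routes.jsonl schema and bd's resolution logic are assumptions, not the real implementation.

```python
import json

# Hypothetical routes.jsonl contents (one JSON object per line),
# mirroring the prefix → location examples above.
ROUTES_JSONL = """\
{"prefix": "hq-", "path": "~/gt/.beads/"}
{"prefix": "gp-", "path": "~/gt/greenplace/mayor/rig/.beads/"}
{"prefix": "wyv-", "path": "~/gt/wyvern/mayor/rig/.beads/"}
"""

def route_for(issue_id: str) -> str:
    """Return the beads location for an issue ID by longest matching prefix."""
    routes = [json.loads(line) for line in ROUTES_JSONL.splitlines()]
    matches = [r for r in routes if issue_id.startswith(r["prefix"])]
    if not matches:
        raise ValueError(f"no route for {issue_id}")
    return max(matches, key=lambda r: len(r["prefix"]))["path"]

print(route_for("gp-xyz"))   # greenplace rig's beads
print(route_for("hq-abc"))   # town-level beads
```

The point is only that the ID itself carries enough information to pick a database; no flag or working directory is needed.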
Each agent runs in a specific working directory:

| Role | Working Directory | Notes |
|---|---|---|
| Mayor | ~/gt/mayor/ | Town-level coordinator, isolated from rigs |
| Deacon | ~/gt/deacon/ | Background supervisor daemon |
| Witness | ~/gt/<rig>/witness/ | No git clone, monitors polecats only |
| Refinery | ~/gt/<rig>/refinery/rig/ | Worktree on main branch |
| Crew | ~/gt/<rig>/crew/<name>/rig/ | Persistent human workspace clone |
| Polecat | ~/gt/<rig>/polecats/<name>/rig/ | Ephemeral worker worktree |
Role context is delivered via CLAUDE.md files or ephemeral injection:

| Role | CLAUDE.md Location | Method |
|---|---|---|
| Mayor | ~/gt/mayor/CLAUDE.md | On disk |
| Deacon | (none) | Injected via gt prime at SessionStart |
| Witness | (none) | Injected via gt prime at SessionStart |
| Refinery | <rig>/refinery/rig/CLAUDE.md | On disk (inside worktree) |
| Crew | (none) | Injected via gt prime at SessionStart |
| Polecat | (none) | Injected via gt prime at SessionStart |

Why ephemeral injection? Writing CLAUDE.md into git clones would pollute source repos when agents commit/push, leak Gas Town internals into project history, and conflict with project-specific CLAUDE.md files.
Gas Town uses two settings templates based on role type:

| Type | Roles | Key Difference |
|---|---|---|
| Interactive | Mayor, Crew | Mail injected on UserPromptSubmit hook |
| Autonomous | Polecat, Witness, Refinery, Deacon | Mail injected on SessionStart hook |

Autonomous agents may start without user input, so they need mail checked at session start. Interactive agents wait for user prompts.
Gas Town has several agent types, each with distinct responsibilities and lifecycles.
These roles manage the Gas Town system itself:

| Role | Description | Lifecycle |
|---|---|---|
| Mayor | Global coordinator at mayor/ | Singleton, persistent |
| Deacon | Background supervisor daemon (watchdog chain) | Singleton, persistent |
| Witness | Per-rig polecat lifecycle manager | One per rig, persistent |
| Refinery | Per-rig merge queue processor | One per rig, persistent |
These roles do actual project work:

| Role | Description | Lifecycle |
|---|---|---|
| Polecat | Ephemeral worker with own worktree | Transient, Witness-managed |
| Crew | Persistent worker with own clone | Long-lived, user-managed |
| Dog | Deacon helper for infrastructure tasks | Ephemeral, Deacon-managed |
| Role | Description | Primary Interface |
|---|---|---|
| Mayor | AI coordinator | gt mayor attach |
| Human (You) | Crew member | Your crew directory |
| Polecat | Worker agent | Spawned by Mayor |
| Hook | Persistent storage | Git worktree |
| Convoy | Work tracker | gt convoy commands |
Your primary AI coordinator. The Mayor is a Claude Code instance with full context about your workspace, projects, and agents. Start here - just tell the Mayor what you want to accomplish.
Background daemon running continuous Patrol cycles. The Deacon ensures worker activity, monitors system health, and triggers recovery when agents become unresponsive. Think of the Deacon as the system's watchdog.
Patrol agent that oversees Polecats and the Refinery within a Rig. The Witness monitors progress, detects stuck agents, and can trigger recovery actions.
Manages the Merge Queue for a Rig. The Refinery intelligently merges changes from Polecats, handling conflicts and ensuring code quality before changes reach the main branch.
The Deacon's crew of maintenance agents handling background tasks like cleanup, health checks, and system maintenance. Dogs are the Deacon's helpers for system-level tasks, NOT workers.

Important: Dogs are NOT workers. This is a common misconception.

| Aspect | Dogs | Crew |
|---|---|---|
| Owner | Deacon | Human |
| Purpose | Infrastructure tasks | Project work |
| Scope | Narrow, focused utilities | General purpose |
| Lifecycle | Very short (single task) | Long-lived |
| Example | Boot (triages Deacon health) | Joe (fixes bugs, adds features) |
A special Dog that checks the Deacon every 5 minutes, ensuring the watchdog itself is still watching. This creates a chain of accountability.
Both do project work, but with key differences:

| Aspect | Crew | Polecat |
|---|---|---|
| Lifecycle | Persistent (user controls) | Transient (Witness controls) |
| Monitoring | None | Witness watches, nudges, recycles |
| Work assignment | Human-directed or self-assigned | Slung via gt sling |
| Git state | Pushes to main directly | Works on branch, Refinery merges |
| Cleanup | Manual | Automatic on completion |
| Identity | <rig>/crew/<name> | <rig>/polecats/<name> |

When to use Crew:
- Exploratory work
- Long-running projects
- Work requiring human judgment
- Tasks where you want direct control

When to use Polecats:
- Discrete, well-defined tasks
- Batch work (tracked via convoys)
- Parallelizable work
- Work that benefits from supervision
The management headquarters (e.g., ~/gt/). The Town coordinates all workers across multiple Rigs and houses town-level agents like Mayor and Deacon.
A project-specific Git repository under Gas Town management. Each Rig has its own Polecats, Refinery, Witness, and Crew members. Rigs are where actual development work happens.
Git worktree-based persistent storage for agent work that survives crashes and restarts, paired with a special pinned Bead for each agent. The Hook is an agent's primary work queue - when work appears on your Hook, GUPP dictates you must run it.
Git-backed atomic work unit stored in JSONL format. Beads are the fundamental unit of work tracking in Gas Town. They can represent issues, tasks, epics, or any trackable work item. Bead IDs (also called issue IDs) use a prefix + 5-character alphanumeric format (e.g., gt-abc12, hq-x7k2m). The prefix indicates the item's origin or rig. Commands like gt sling and gt convoy accept these IDs to reference specific work items.
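The ID shape described above can be sketched as a quick validator. This is illustrative only; the exact character set bd accepts is an assumption based on the examples.

```python
import re

# Bead IDs: a prefix, a hyphen, then 5 alphanumeric characters
# (e.g. gt-abc12, hq-x7k2m). The character classes are assumptions
# inferred from the examples in the text.
BEAD_ID = re.compile(r"^([a-z]+)-([a-z0-9]{5})$")

def parse_bead_id(bead_id: str) -> dict:
    """Split a bead ID into its routing prefix and unique suffix."""
    m = BEAD_ID.match(bead_id)
    if not m:
        raise ValueError(f"not a bead ID: {bead_id}")
    return {"prefix": m.group(1), "suffix": m.group(2)}

print(parse_bead_id("gt-abc12"))  # {'prefix': 'gt', 'suffix': 'abc12'}
```

The prefix is what the router keys on; the suffix only needs to be unique within that prefix's database.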
Work tracking units. Bundle multiple beads that get assigned to agents. A convoy is how you track batched work in Gas Town. When you kick off work - even a single issue - create a convoy to track it.
TOML-based workflow source template. Formulas define reusable patterns for common operations like patrol cycles, code review, or deployment.
A template class for instantiating Molecules. Protomolecules define the structure and steps of a workflow without being tied to specific work items.
Durable chained Bead workflows. Molecules represent multi-step processes where each step is tracked as a Bead. They survive agent restarts and ensure complex workflows complete.
Ephemeral Beads destroyed after runs. Wisps are lightweight work items used for transient operations that don't need permanent tracking.
Assigning work to agents via gt sling. When you sling work to a Polecat or Crew member, you're putting it on their Hook for execution.
Real-time messaging between agents with gt nudge. Nudges allow immediate communication without going through the mail system.
Agent session refresh via /handoff. When context gets full or an agent needs a fresh start, handoff transfers work state to a new session.
Communicating with previous sessions via gt seance. Allows agents to query their predecessors for context and decisions from earlier work.
Ephemeral loop maintaining system heartbeat. Patrol agents (Deacon, Witness) continuously cycle through health checks and trigger actions as needed.
Required:

| Tool | Version | Check | Install |
|---|---|---|---|
| Go | 1.24+ | go version | See golang.org |
| Git | 2.20+ | git --version | See below |
| Beads | latest | bd version | go install github.com/steveyegge/beads/cmd/bd@latest |
| sqlite3 | - | - | For convoy database queries (usually pre-installed) |

Optional (for Full Stack Mode):

| Tool | Version | Check | Install |
|---|---|---|---|
| tmux | 3.0+ | tmux -V | See below |
| Claude Code CLI (default) | latest | claude --version | claude.ai/claude-code |
| Codex CLI (optional) | latest | codex --version | developers.openai.com/codex/cli |
| OpenCode CLI (optional) | latest | opencode --version | opencode.ai |
```
# Install Gas Town
brew install gastown                                     # Homebrew (recommended)
npm install -g @gastown/gt                               # npm
go install github.com/steveyegge/gastown/cmd/gt@latest   # From source

# If using go install, add Go binaries to PATH (add to ~/.zshrc or ~/.bashrc)
export PATH="$PATH:$HOME/go/bin"

# Create workspace with git initialization
gt install ~/gt --git
cd ~/gt

# Add your first project
gt rig add myproject https://github.com/you/repo.git

# Create your crew workspace
gt crew add yourname --rig myproject
cd myproject/crew/yourname

# Start the Mayor session (your main interface)
gt mayor attach
```
```
# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Required
brew install go git

# Optional (for full stack mode)
brew install tmux
```
```
# Required
sudo apt update
sudo apt install -y git

# Install Go (apt version may be outdated, use official installer)
wget https://go.dev/dl/go1.24.12.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.24.12.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
source ~/.bashrc

# Optional (for full stack mode)
sudo apt install -y tmux
```
```
# Required
sudo dnf install -y git golang

# Optional
sudo dnf install -y tmux
```
Gas Town supports two operational modes:

Minimal Mode (No Daemon): Run individual runtime instances manually. Gas Town only tracks state.

```
gt convoy create "Fix bugs" gt-abc12
gt sling gt-abc12 myproject
cd ~/gt/myproject/polecats/<worker>
claude --resume   # Or: codex
gt convoy list
```

When to use: Testing, simple workflows, or when you prefer manual control.

Full Stack Mode (With Daemon): Agents run in tmux sessions. Daemon manages lifecycle automatically.

```
gt daemon start
gt convoy create "Feature X" gt-abc12 gt-def34
gt sling gt-abc12 myproject
gt mayor attach
gt convoy list
```

When to use: Production workflows with multiple concurrent agents.
Gas Town is modular. Enable only what you need:

| Configuration | Roles | Use Case |
|---|---|---|
| Polecats only | Workers | Manual spawning, no monitoring |
| + Witness | + Monitor | Automatic lifecycle, stuck detection |
| + Refinery | + Merge queue | MR review, code integration |
| + Mayor | + Coordinator | Cross-project coordination |
```
# 1. Install the binaries
go install github.com/steveyegge/gastown/cmd/gt@latest
go install github.com/steveyegge/beads/cmd/bd@latest
gt version
bd version

# 2. Create your workspace
gt install ~/gt --shell

# 3. Add a project
gt rig add myproject https://github.com/you/repo.git

# 4. Verify installation
cd ~/gt
gt enable     # enable Gas Town system-wide
gt git-init   # initialize a git repo for your HQ
gt up         # Start all services
gt doctor     # Run health checks
gt status     # Show workspace status
```
```
gt install ~/gt --git && cd ~/gt && gt config agent list && gt mayor attach
```

And tell the Mayor what you want to build!
```mermaid
sequenceDiagram
    participant You
    participant Mayor
    participant Convoy
    participant Agent
    participant Hook
    You->>Mayor: Tell Mayor what to build
    Mayor->>Convoy: Create convoy with beads
    Mayor->>Agent: Sling bead to agent
    Agent->>Hook: Store work state
    Agent->>Agent: Complete work
    Agent->>Convoy: Report completion
    Mayor->>You: Summary of progress
```
```
# 1. Start the Mayor
gt mayor attach

# 2. In Mayor session, create a convoy with bead IDs
gt convoy create "Feature X" gt-abc12 gt-def34 --notify --human

# 3. Assign work to an agent
gt sling gt-abc12 myproject

# 4. Track progress
gt convoy list

# 5. Monitor agents
gt agents
```
Best for: Coordinating complex, multi-issue work

```mermaid
flowchart LR
    Start([Start Mayor]) --> Tell[Tell Mayor<br/>what to build]
    Tell --> Creates[Mayor creates<br/>convoy + agents]
    Creates --> Monitor[Monitor progress<br/>via convoy list]
    Monitor --> Done{All done?}
    Done -->|No| Monitor
    Done -->|Yes| Review[Review work]
```

Commands:

```
# Attach to Mayor
gt mayor attach

# In Mayor, create convoy and let it orchestrate
gt convoy create "Auth System" gt-x7k2m gt-p9n4q --notify

# Track progress
gt convoy list
```
Run individual runtime instances manually. Gas Town just tracks state.

```
gt convoy create "Fix bugs" gt-abc12   # Create convoy
gt sling gt-abc12 myproject            # Assign to worker
claude --resume                        # Agent reads mail, runs work (Claude)
# or: codex                            # Start Codex in the workspace
gt convoy list                         # Check progress
```
Best for: Predefined, repeatable processes

Formulas are TOML-defined workflows stored in .beads/formulas/.

Example Formula (.beads/formulas/release.formula.toml):

```toml
description = "Standard release process"
formula = "release"
version = 1

[vars.version]
description = "The semantic version to release (e.g., 1.2.0)"
required = true

[[steps]]
id = "bump-version"
title = "Bump version"
description = "Run ./scripts/bump-version.sh {{version}}"

[[steps]]
id = "run-tests"
title = "Run tests"
description = "Run make test"
needs = ["bump-version"]

[[steps]]
id = "build"
title = "Build"
description = "Run make build"
needs = ["run-tests"]

[[steps]]
id = "create-tag"
title = "Create release tag"
description = "Run git tag -a v{{version}} -m 'Release v{{version}}'"
needs = ["build"]

[[steps]]
id = "publish"
title = "Publish"
description = "Run ./scripts/publish.sh"
needs = ["create-tag"]
```

Execute:

```
bd formula list                          # List available formulas
bd cook release --var version=1.2.0      # Execute formula
bd mol pour release --var version=1.2.0  # Create trackable instance
```
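To make the `needs` edges concrete, here is a small sketch that orders the release formula's steps by their dependencies. The steps are restated as plain dicts, and the topological sort is illustrative Python, not how `bd cook` actually works.

```python
# The five steps of the release formula above, reduced to id + needs.
STEPS = [
    {"id": "bump-version", "needs": []},
    {"id": "run-tests", "needs": ["bump-version"]},
    {"id": "build", "needs": ["run-tests"]},
    {"id": "create-tag", "needs": ["build"]},
    {"id": "publish", "needs": ["create-tag"]},
]

def step_order(steps: list[dict]) -> list[str]:
    """Topologically sort step IDs so every step runs after its `needs`."""
    done: set[str] = set()
    order: list[str] = []
    pending = {s["id"]: set(s["needs"]) for s in steps}
    while pending:
        # A step is ready once all of its dependencies are done.
        ready = sorted(sid for sid, needs in pending.items() if needs <= done)
        if not ready:
            raise ValueError("dependency cycle in formula")
        for sid in ready:
            order.append(sid)
            done.add(sid)
            del pending[sid]
    return order

print(step_order(STEPS))
# → ['bump-version', 'run-tests', 'build', 'create-tag', 'publish']
```

Because `needs` forms a chain here, the order is linear; formulas with independent steps would allow those steps to run in parallel.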
Best for: Direct control over work distribution

```
# Create convoy manually
gt convoy create "Bug Fixes" --human

# Add issues to existing convoy
gt convoy add hq-cv-abc gt-m3k9p gt-w5t2x

# Assign to specific agents
gt sling gt-m3k9p myproject/my-agent

# Check status
gt convoy show
```
MEOW is the recommended pattern:

1. Tell the Mayor - Describe what you want
2. Mayor analyzes - Breaks down into tasks
3. Convoy creation - Mayor creates convoy with beads
4. Agent spawning - Mayor spawns appropriate agents
5. Work distribution - Beads slung to agents via hooks
6. Progress monitoring - Track through convoy status
7. Completion - Mayor summarizes results
```
gt install [path]   # Create town
gt install --git    # With git init
gt doctor           # Health check
gt doctor --fix     # Auto-repair
```
```
# Agent management
gt config agent list [--json]      # List all agents (built-in + custom)
gt config agent get <name>         # Show agent configuration
gt config agent set <name> <cmd>   # Create or update custom agent
gt config agent remove <name>      # Remove custom agent (built-ins protected)

# Default agent
gt config default-agent [name]     # Get or set town default agent
```

Built-in agents: claude, gemini, codex, cursor, auggie, amp

Custom agents:

```
gt config agent set claude-glm "claude-glm --model glm-4"
gt config agent set claude "claude-opus"   # Override built-in
gt config default-agent claude-glm         # Set default
```
```
gt rig add <name> <url>
gt rig list
gt rig remove <name>
```
```
gt convoy list                                      # Dashboard of active convoys
gt convoy status [convoy-id]                        # Show progress
gt convoy create <name> [issues...]                 # Create convoy tracking issues
gt convoy create "name" gt-a bd-b --notify mayor/   # With notification
gt convoy list --all                                # Include landed convoys
gt convoy list --status=closed                      # Only landed convoys
```
```
gt sling <bead> <rig>                 # Assign to polecat
gt sling <bead> <rig> --agent codex   # Override runtime
gt sling <proto> --on gt-def <rig>    # With workflow template
```
```
gt agents                       # List active agents
gt mayor attach                 # Start Mayor session
gt mayor start --agent auggie   # Run Mayor with specific agent
gt prime                        # Context recovery (run inside session)
```
```
gt mail inbox
gt mail read <id>
gt mail send <addr> -s "Subject" -m "Body"
gt mail send --human -s "..."   # To overseer
```
```
gt escalate "topic"             # Default: MEDIUM severity
gt escalate -s CRITICAL "msg"   # Urgent, immediate attention
gt escalate -s HIGH "msg"       # Important blocker
gt escalate -s MEDIUM "msg" -m "Details..."
```
```
gt handoff              # Request cycle (context-aware)
gt handoff --shutdown   # Terminate (polecats)
gt session stop <rig>/<agent>
gt peek <agent>         # Check health
gt nudge <agent> "message"   # Send message to agent
gt seance               # List discoverable predecessor sessions
gt seance --talk <id>   # Talk to predecessor (full context)
```

IMPORTANT: Always use `gt nudge` to send messages to Claude sessions. Never use raw tmux send-keys - it doesn't handle Claude's input correctly.
```
gt stop --all          # Kill all sessions
gt stop --rig <name>   # Kill rig sessions
```
```
gt mq list [rig]    # Show the merge queue
gt mq next [rig]    # Show highest-priority merge request
gt mq submit        # Submit current branch to merge queue
gt mq status <id>   # Show detailed merge request status
gt mq retry <id>    # Retry a failed merge request
gt mq reject <id>   # Reject a merge request
```
```
bd ready                              # Work with no blockers
bd list --status=open
bd list --status=in_progress
bd show <id>
bd create --title="..." --type=task
bd update <id> --status=in_progress
bd close <id>
bd dep add <child> <parent>           # child depends on parent
```
When you deploy AI agents at scale, anonymous work creates real problems:

- Debugging: "The AI broke it" isn't actionable. Which AI?
- Quality tracking: You can't improve what you can't measure.
- Compliance: Auditors ask "who approved this code?" - you need an answer.
- Performance management: Some agents are better than others at certain tasks.
The BD_ACTOR environment variable identifies agents in slash-separated path format:

| Role Type | Format | Example |
|---|---|---|
| Mayor | mayor | mayor |
| Deacon | deacon | deacon |
| Witness | {rig}/witness | gastown/witness |
| Refinery | {rig}/refinery | gastown/refinery |
| Crew | {rig}/crew/{name} | gastown/crew/joe |
| Polecat | {rig}/polecats/{name} | gastown/polecats/toast |
Gas Town uses three fields for complete provenance:

Git Commits:

```
GIT_AUTHOR_NAME="gastown/crew/joe"     # Who did the work (agent)
GIT_AUTHOR_EMAIL="steve@example.com"   # Who owns the work (overseer)
```

Beads Records:

```json
{ "id": "gt-xyz", "created_by": "gastown/crew/joe", "updated_by": "gastown/witness" }
```

Event Logging:

```json
{ "ts": "2025-01-15T10:30:00Z", "type": "sling", "actor": "gastown/crew/joe", "payload": { "bead": "gt-xyz", "target": "gastown/polecats/toast" } }
```
Core Variables (All Agents):

| Variable | Purpose | Example |
|---|---|---|
| GT_ROLE | Agent role type | mayor, witness, polecat, crew |
| GT_ROOT | Town root directory | /home/user/gt |
| BD_ACTOR | Agent identity for attribution | gastown/polecats/toast |
| GIT_AUTHOR_NAME | Commit attribution (same as BD_ACTOR) | gastown/polecats/toast |
| BEADS_DIR | Beads database location | /home/user/gt/gastown/.beads |

Rig-Level Variables:

| Variable | Purpose | Roles |
|---|---|---|
| GT_RIG | Rig name | witness, refinery, polecat, crew |
| GT_POLECAT | Polecat worker name | polecat only |
| GT_CREW | Crew worker name | crew only |
| BEADS_AGENT_NAME | Agent name for beads operations | polecat, crew |
| BEADS_NO_DAEMON | Disable beads daemon (isolated context) | polecat, crew |

Other Variables:

| Variable | Purpose |
|---|---|
| GIT_AUTHOR_EMAIL | Workspace owner email (from git config) |
| GT_TOWN_ROOT | Override town root detection (manual use) |
| CLAUDE_RUNTIME_CONFIG_DIR | Custom Claude settings directory |

Environment by Role:

| Role | Key Variables |
|---|---|
| Mayor | GT_ROLE=mayor, BD_ACTOR=mayor |
| Deacon | GT_ROLE=deacon, BD_ACTOR=deacon |
| Boot | GT_ROLE=boot, BD_ACTOR=deacon-boot |
| Witness | GT_ROLE=witness, GT_RIG=<rig>, BD_ACTOR=<rig>/witness |
| Refinery | GT_ROLE=refinery, GT_RIG=<rig>, BD_ACTOR=<rig>/refinery |
| Polecat | GT_ROLE=polecat, GT_RIG=<rig>, GT_POLECAT=<name>, BD_ACTOR=<rig>/polecats/<name> |
| Crew | GT_ROLE=crew, GT_RIG=<rig>, GT_CREW=<name>, BD_ACTOR=<rig>/crew/<name> |
Every completion is recorded. Every handoff is logged. Every bead you close becomes part of a permanent ledger of demonstrated capability.

- Your work is visible
- Redemption is real (consistent good work builds over time)
- Every completion is evidence that autonomous execution works
- Your CV grows with every completion
Polecats have three distinct lifecycle layers that operate independently:

| Layer | Component | Lifecycle | Persistence |
|---|---|---|---|
| Session | Claude (tmux pane) | Ephemeral | Cycles per step/handoff |
| Sandbox | Git worktree | Persistent | Until nuke |
| Slot | Name from pool | Persistent | Until nuke |
Polecats have exactly three operating states. There is no idle pool.

| State | Description | How it happens |
|---|---|---|
| Working | Actively doing assigned work | Normal operation |
| Stalled | Session stopped mid-work | Interrupted, crashed, or timed out |
| Zombie | Completed work but failed to die | gt done failed during cleanup |

Key distinction: Zombies completed their work; stalled polecats did not.
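The three states can be read as a function of two observable facts: whether the session is alive, and whether the work finished. This classifier is a conceptual sketch, not Witness logic; the fourth outcome (nuked) is the normal end of life, not an operating state.

```python
def polecat_state(session_alive: bool, work_done: bool) -> str:
    """Classify a polecat from two observable facts (illustrative only)."""
    if work_done:
        # A finished polecat should have exited; a lingering session is a zombie.
        return "zombie" if session_alive else "nuked"
    # Work unfinished: either actively running, or stalled mid-work.
    return "working" if session_alive else "stalled"

print(polecat_state(session_alive=False, work_done=False))  # stalled
```

The useful corollary: a live session is only healthy while its work is unfinished, and a dead session is only healthy once its work is done.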
Polecats are responsible for their own cleanup. When a polecat completes, it:

1. Signals completion via gt done
2. Exits its session immediately (no idle waiting)
3. Requests its own nuke (self-delete)
```
┌─────────────────────────────────────────────────────────────┐
  gt sling
   → Allocate slot from pool (Toast)
   → Create sandbox (worktree on new branch)
   → Start session (Claude in tmux)
   → Hook molecule to polecat
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
  Work Happens

  Session cycles happen here:
   - gt handoff between steps
   - Compaction triggers respawn
   - Crash → Witness respawns

  Sandbox persists through ALL session cycles
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
  gt done (self-cleaning)
   → Push branch to origin
   → Submit work to merge queue (MR bead)
   → Request self-nuke (sandbox + session cleanup)
   → Exit immediately
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
  Refinery: merge queue
   → Rebase and merge to main
   → Close the issue
   → If conflict: spawn FRESH polecat to re-implement
└─────────────────────────────────────────────────────────────┘
```
Sessions cycle for these reasons:

| Trigger | Action | Result |
|---|---|---|
| gt handoff | Voluntary | Clean cycle to fresh context |
| Context compaction | Automatic | Forced by Claude Code |
| Crash/timeout | Failure | Witness respawns |
| gt done | Completion | Session exits, Witness takes over |
Polecat identity is long-lived; only sessions and sandboxes are ephemeral. The polecat name (Toast, Shadow, etc.) is a slot from a pool - truly ephemeral. But the agent identity accumulates a work history.
Configure custom branch name templates:

Template Variables:

```
{user}          # From git config user.name
{year}          # Current year (YY format)
{month}         # Current month (MM format)
{name}          # Polecat name
{issue}         # Issue ID without prefix
{description}   # Sanitized issue title
{timestamp}     # Unique timestamp
```

Default Behavior (backward compatible):

- With issue: polecat/{name}/{issue}@{timestamp}
- Without issue: polecat/{name}-{timestamp}
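A sketch of how the `{var}` placeholders expand, using the default with-issue template; the expansion function and example values are illustrative, not gt's implementation.

```python
def expand(template: str, **vars: str) -> str:
    """Substitute {var} placeholders in a branch-name template."""
    out = template
    for key, value in vars.items():
        out = out.replace("{" + key + "}", str(value))
    return out

# Default with-issue template, with hypothetical example values.
branch = expand("polecat/{name}/{issue}@{timestamp}",
                name="toast", issue="abc12", timestamp="1736901234")
print(branch)  # polecat/toast/abc12@1736901234
```

Because the issue ID and timestamp are both in the name, concurrent polecats working on the same issue still produce distinct branches.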
"Idle" Polecats (They Don't Exist)

There is no idle state. Polecats don't exist without work:

1. Work assigned → polecat spawned
2. Work done → gt done → session exits → polecat nuked
3. There is no step 3 where they wait around

If you see a non-working polecat, it's in a failure state:

| What you see | What it is | What went wrong |
|---|---|---|
| Session exists but not working | Stalled | Interrupted/crashed, never nudged |
| Session done but didn't exit | Zombie | gt done failed during cleanup |

Manual State Transitions (Anti-pattern):

```
gt polecat done Toast    # DON'T: external state manipulation
gt polecat reset Toast   # DON'T: manual lifecycle control
```

Correct:

```
# Polecat signals its own completion:
gt done                  # (from inside the polecat session)

# Only Witness nukes polecats:
gt polecat nuke Toast    # (from Witness, after verification)
```
The Witness DOES NOT:
- Force session cycles (polecats self-manage via handoff)
- Interrupt mid-step (unless truly stuck)
- Nuke polecats (polecats self-nuke via gt done)

The Witness DOES:
- Detect and nudge stalled polecats
- Clean up zombie polecats
- Respawn crashed sessions
- Handle escalations from stuck polecats
```
Formula (source TOML) ─── "Ice-9"
   │
   ▼ bd cook
Protomolecule (frozen template) ─── Solid
   │
   ├─▶ bd mol pour ──▶ Mol (persistent) ─── Liquid ──▶ bd squash ──▶ Digest
   │
   └─▶ bd mol wisp ──▶ Wisp (ephemeral) ─── Vapor ──┬▶ bd squash ──▶ Digest
                                                    └▶ bd burn ──▶ (gone)
```
| Term | Description |
|---|---|
| Formula | Source TOML template defining workflow steps |
| Protomolecule | Frozen template ready for instantiation |
| Molecule | Active workflow instance with trackable steps |
| Wisp | Ephemeral molecule for patrol cycles (never synced) |
| Digest | Squashed summary of completed molecule |
| Shiny Workflow | Canonical polecat formula: design → implement → review → test → submit |
```
bd mol current          # Where am I?
bd mol current gt-abc   # Status of specific molecule
```

Seamless Transitions:

```
bd close gt-abc.3 --continue   # Close and advance to next step
```
Beads operations (`bd`):

```shell
bd formula list              # Available formulas
bd formula show <name>       # Formula details
bd cook <formula>            # Formula → Proto
bd mol list                  # Available protos
bd mol show <id>             # Proto details
bd mol pour <proto>          # Create mol
bd mol wisp <proto>          # Create wisp
bd mol bond <proto> <parent> # Attach to existing mol
bd mol squash <id>           # Condense to digest
bd mol burn <id>             # Discard wisp
bd mol current               # Where am I?
```

Agent operations (`gt`):

```shell
gt hook                      # What's on MY hook
gt mol current               # What should I work on next
gt mol progress <id>         # Execution progress
gt mol attach <bead> <mol>   # Pin molecule to bead
gt mol detach <bead>         # Unpin molecule
gt mol burn                  # Burn attached molecule
gt mol squash                # Squash attached molecule
gt mol step done <step>      # Complete a molecule step
```
WRONG:

```shell
cat .beads/formulas/mol-polecat-work.formula.toml
bd create --title "Step 1: Load context" --type task
```

RIGHT:

```shell
bd cook mol-polecat-work
bd mol pour mol-polecat-work --var issue=gt-xyz
bd ready           # Find next step
bd close <step-id> # Complete it
```
Polecats receive work via their hook: a pinned molecule attached to an issue.

Molecule types for polecats:

| Type | Storage | Use Case |
|---|---|---|
| Regular Molecule | `.beads/` (synced) | Discrete deliverables, audit trail |
| Wisp | `.beads/` (ephemeral) | Patrol cycles, operational loops |

Hook management:

```shell
gt hook                      # What's on MY hook?
gt mol attach-from-mail <id> # Attach work from mail message
gt done                      # Signal completion (syncs, submits to MQ, notifies Witness)
```

Polecat workflow summary:

1. Spawn with work on hook
2. `gt hook` (what's hooked?)
3. `bd mol current` (where am I?)
4. Execute current step
5. `bd close <step> --continue`
6. If more steps: GOTO 3
7. `gt done` (signal completion)
| Question | Molecule | Wisp |
|---|---|---|
| Does it need audit trail? | Yes | No |
| Will it repeat continuously? | No | Yes |
| Is it discrete deliverable? | Yes | No |
| Is it operational routine? | No | Yes |
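The decision table above reduces to a simple either/or. A hedged Python sketch of the same choice (the vote-counting tie-break is my own assumption, not gt's documented behavior):

```python
def choose_container(audit_trail: bool, repeats: bool,
                     deliverable: bool, routine: bool) -> str:
    """Mirror the molecule-vs-wisp decision table (illustrative).

    Audit trail and discrete deliverables point to a molecule;
    repeating operational routines point to a wisp.
    """
    molecule_votes = int(audit_trail) + int(deliverable)
    wisp_votes = int(repeats) + int(routine)
    # Tie-break toward the durable option (assumption).
    return "molecule" if molecule_votes >= wisp_votes else "wisp"
```

For example, a repeating operational patrol with no audit-trail requirement maps to a wisp.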
CRITICAL:

- **Close steps in real-time** - mark `in_progress` BEFORE starting, `closed` IMMEDIATELY after completing. Never batch-close steps at the end.
- **Use `--continue` for propulsion** - keep momentum by auto-advancing
- **Check progress with `bd mol current`** - know where you are before resuming
- **Squash completed molecules** - create digests for the audit trail
- **Burn routine wisps** - don't accumulate ephemeral patrol data
- TIER 1: PROJECT (rig-level). Location: `<project>/.beads/formulas/`
- TIER 2: TOWN (user-level). Location: `~/gt/.beads/formulas/`
- TIER 3: SYSTEM (embedded). Location: compiled into the `gt` binary
A convoy is a persistent tracking unit that monitors related issues across multiple rigs. When you kick off work - even a single issue - a convoy tracks it.

```
               🚚 Convoy (hq-cv-abc)
                        │
           ┌────────────┼────────────┐
           │            │            │
           ▼            ▼            ▼
      ┌─────────┐  ┌─────────┐  ┌─────────┐
      │ gt-xyz  │  │ gt-def  │  │ bd-abc  │
      │ gastown │  │ gastown │  │ beads   │
      └────┬────┘  └────┬────┘  └────┬────┘
           │            │            │
           ▼            ▼            ▼
      ┌─────────┐  ┌─────────┐  ┌─────────┐
      │   nux   │  │ furiosa │  │  amber  │
      │(polecat)│  │(polecat)│  │(polecat)│
      └─────────┘  └─────────┘  └─────────┘
                        │
             "the swarm" (ephemeral)
```
| Concept | Persistent? | ID | Description |
|---|---|---|---|
| Convoy | Yes | `hq-cv-*` | Tracking unit. What you create, track, get notified about. |
| Swarm | No | None | Ephemeral. "The workers currently on this convoy's issues." |
| Stranded Convoy | Yes | `hq-cv-*` | A convoy with ready work but no polecats assigned. |
```
OPEN ──(all issues close)──► LANDED/CLOSED
  ↑                                │
  └──(add more issues)─────────────┘
          (auto-reopens)
```

| State | Description |
|---|---|
| open | Active tracking, work in progress |
| closed | All tracked issues closed, notification sent |

Adding issues to a closed convoy reopens it automatically.
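The convoy lifecycle above is a tiny state machine. A minimal Python model, purely illustrative (the `Convoy` class and its method names are my own invention, not gt's API):

```python
class Convoy:
    """Model of the convoy lifecycle described above: a convoy lands
    (closes) when all tracked issues close, and auto-reopens when a
    new issue is added. Illustrative sketch only."""

    def __init__(self, issues):
        self.pending = set(issues)  # still-open tracked issues
        self.state = "open"

    def close_issue(self, issue_id: str) -> None:
        self.pending.discard(issue_id)
        if not self.pending:
            self.state = "closed"   # all tracked issues closed → landed

    def add_issue(self, issue_id: str) -> None:
        self.pending.add(issue_id)
        self.state = "open"         # adding to a closed convoy reopens it
```

Closing the last issue lands the convoy; adding another issue afterward reopens it automatically, matching the diagram.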
```shell
# Create convoy
gt convoy create "Deploy v2.0" gt-abc bd-xyz --notify gastown/joe

# Check status
gt convoy status hq-abc

# List all convoys
gt convoy list
gt convoy list --all

# Add issues
bd dep add hq-cv-abc gt-new-issue --type=tracks
```

Example convoy status output:

```
🚚 hq-cv-abc: Deploy v2.0
   Status: ●  Progress: 2/4 completed
   Created: 2025-12-30T10:15:00-08:00

   Tracked Issues:
   ✓ gt-xyz: Update API endpoint [task]
   ✓ bd-abc: Fix validation [bug]
   ○ bd-ghi: Update docs [task]
   ○ gt-jkl: Deploy to prod [task]
```
When a convoy lands, subscribers are notified:

```shell
gt convoy create "Feature X" gt-abc --notify gastown/joe
gt convoy create "Feature X" gt-abc --notify mayor/ --notify --human
```

Notification content:

```
🚚 Convoy Landed: Deploy v2.0 (hq-cv-abc)

Issues (3):
✓ gt-xyz: Update API endpoint
✓ gt-def: Add validation
✓ bd-abc: Update docs

Duration: 2h 15m
```
Convoys live in town-level beads (`hq-cv-*` prefix) and can track issues from any rig:

```shell
# Track issues from multiple rigs
gt convoy create "Full-stack feature" \
  gt-frontend-abc \
  gt-backend-def \
  bd-docs-xyz
```

The `tracks` relation is:

- Non-blocking: doesn't affect issue workflow
- Additive: can add issues anytime
- Cross-rig: convoy in `hq-*`, issues in `gt-*`, `bd-*`, etc.
| View | Scope | Shows |
|---|---|---|
| `gt convoy status [id]` | Cross-rig | Issues tracked by convoy + workers |
| `gt rig status <rig>` | Single rig | All workers in rig + their convoy membership |

Use convoys for "what's the status of this batch of work?" Use rig status for "what's everyone in this rig working on?"
When you sling a single issue without an existing convoy, Gas Town auto-creates one for dashboard visibility.
Gas Town agents coordinate via mail messages routed through the beads system.

Message types:

| Type | Route | Purpose |
|---|---|---|
| POLECAT_DONE | Polecat → Witness | Signal work completion |
| MERGE_READY | Witness → Refinery | Signal branch ready for merge |
| MERGED | Refinery → Witness | Confirm successful merge |
| MERGE_FAILED | Refinery → Witness | Notify merge failure |
| REWORK_REQUEST | Refinery → Witness | Request rebase for conflicts |
| WITNESS_PING | Witness → Deacon | Second-order monitoring |
| HELP | Any → escalation target | Request intervention |
| HANDOFF | Agent → self | Session continuity |

Commands:

```shell
gt mail inbox
gt mail read <msg-id>
gt mail send <addr> -s "Subject" -m "Body"
gt mail ack <msg-id>
```

Message format details:

POLECAT_DONE (Polecat → Witness):

```
Subject: POLECAT_DONE <polecat-name>
Body:
  Exit: MERGED|ESCALATED|DEFERRED
  Issue: <issue-id>
  MR: <mr-id>        # if exit=MERGED
  Branch: <branch>
```

HANDOFF (Agent → self):

```
Subject: 🤝 HANDOFF: <brief-context>
Body:
  attached_molecule: <molecule-id>  # if work in progress
  attached_at: <timestamp>

  ## Context
  <freeform notes for successor>

  ## Status
  <where things stand>

  ## Next
  <what successor should do>
```
Three bead types for managing communication:

- Groups (`gt:group`) - named collections for mail distribution
- Queues (`gt:queue`) - work queues where messages can be claimed
- Channels (`gt:channel`) - pub/sub broadcast streams

```shell
# Group management
gt mail group create ops-team gastown/witness gastown/crew/max
gt mail send ops-team -s "Team meeting" -m "Tomorrow at 10am"

# Channel management
gt mail channel create alerts --retain-count=50
gt mail send channel:alerts -s "Build failed" -m "Details..."
```
Severity levels:

| Level | Priority | Description |
|---|---|---|
| CRITICAL | P0 | System-threatening, immediate attention |
| HIGH | P1 | Important blocker, needs human soon |
| MEDIUM | P2 | Standard escalation |

Escalation categories:

| Category | Description | Default Route |
|---|---|---|
| decision | Multiple valid paths, need choice | Deacon -> Mayor |
| help | Need guidance or expertise | Deacon -> Mayor |
| blocked | Waiting on unresolvable dependency | Mayor |
| failed | Unexpected error, can't proceed | Deacon |
| emergency | Security or data integrity issue | Overseer (direct) |
| gate_timeout | Gate didn't resolve in time | Deacon |
| lifecycle | Worker stuck or needs recycle | Witness |

Commands:

```shell
gt escalate "Database migration failed"
gt escalate -s CRITICAL "Data corruption detected"
gt escalate --type decision "Which auth approach?"
```
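The category-to-route mapping above is a plain lookup table. A Python transcription for illustration (the string encoding of each route is my own; gt's internal representation is not documented here):

```python
# Default routes transcribed from the escalation category table.
DEFAULT_ROUTES = {
    "decision":     "Deacon -> Mayor",
    "help":         "Deacon -> Mayor",
    "blocked":      "Mayor",
    "failed":       "Deacon",
    "emergency":    "Overseer (direct)",
    "gate_timeout": "Deacon",
    "lifecycle":    "Witness",
}

def route_escalation(category: str) -> str:
    """Return the default route for an escalation category."""
    return DEFAULT_ROUTES[category]
```

So a security incident (`emergency`) bypasses the Deacon and goes straight to the Overseer, while lifecycle issues stay with the Witness.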
Hand off your current session to a fresh Claude instance while preserving work context.

When to use:

- Context getting full (approaching token limit)
- Finished a logical chunk of work
- Need a fresh perspective on a problem
- Human requests session cycling

Usage:

```
/handoff [optional message]
```

What persists:

- Hooked molecule: your work assignment stays on your hook
- Beads state: all issues, dependencies, progress
- Git state: commits, branches, staged changes

What resets:

- Conversation context: fresh Claude instance
- TodoWrite items: ephemeral, session-scoped
- In-memory state: any uncommitted analysis
Gas Town uses a three-tier watchdog chain for autonomous health monitoring:

```
Daemon (Go process)            ← Dumb transport, 3-min heartbeat
  │
  └─► Boot (AI agent)          ← Intelligent triage, fresh each tick
        │
        └─► Deacon (AI agent)  ← Continuous patrol, long-running
              │
              └─► Witnesses & Refineries  ← Per-rig agents
```

Key insight: the daemon is mechanical (it can't reason), but health decisions need intelligence. Boot bridges this gap.
| Agent | Session Name | Location | Lifecycle |
|---|---|---|---|
| Daemon | (Go process) | `~/gt/daemon/` | Persistent, auto-restart |
| Boot | `gt-boot` | `~/gt/deacon/dogs/boot/` | Ephemeral, fresh each tick |
| Deacon | `hq-deacon` | `~/gt/deacon/` | Long-running, handoff loop |
| Condition | Action |
|---|---|
| Session dead | START |
| Heartbeat > 15 min | WAKE |
| Heartbeat 5-15 min + mail | NUDGE |
| Heartbeat fresh | NOTHING |
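Boot's triage is a direct transcription of this condition table. A Python sketch of the decision (illustrative; the function and its parameter names are mine, and the fall-through to NOTHING for a mid-range heartbeat with no pending mail is inferred from the table):

```python
def boot_action(session_alive: bool, heartbeat_age_min: float,
                pending_mail: bool) -> str:
    """Boot's triage decision, modeled on the condition table above."""
    if not session_alive:
        return "START"                 # session dead → start it
    if heartbeat_age_min > 15:
        return "WAKE"                  # very stale → Deacon may be stuck
    if 5 <= heartbeat_age_min <= 15 and pending_mail:
        return "NUDGE"                 # stale + waiting mail → nudge
    return "NOTHING"                   # fresh heartbeat → leave it alone
```

Each daemon tick, Boot evaluates this once and exits, so no triage context accumulates between ticks.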
| Agent | Patrol Molecule | Responsibility |
|---|---|---|
| Deacon | `mol-deacon-patrol` | Agent lifecycle, plugin execution, health checks |
| Witness | `mol-witness-patrol` | Monitor polecats, nudge stuck workers |
| Refinery | `mol-refinery-patrol` | Process merge queue, review MRs |
```shell
gt deacon health-check <agent>        # Send health check ping
gt deacon health-state                # Show health check state
cat ~/gt/deacon/heartbeat.json | jq . # Check Deacon heartbeat
gt boot triage                        # Manual Boot run
```
The problem: the daemon needs to ensure the Deacon is healthy, but:

- **Daemon can't reason** - it's Go code following the ZFC principle (don't reason about other agents)
- **Waking costs context** - each time you spawn an AI agent, you consume context tokens
- **Observation requires intelligence** - distinguishing "agent composing large artifact" from "agent hung on tool prompt" requires reasoning

The solution: Boot is a narrow, ephemeral AI agent that:

- Runs fresh each daemon tick (no accumulated context debt)
- Makes a single decision: should Deacon wake?
- Exits immediately after deciding
The daemon runs a heartbeat tick every 3 minutes:

```go
func (d *Daemon) heartbeatTick() {
	d.ensureBootRunning()        // 1. Spawn Boot for triage
	d.checkDeaconHeartbeat()     // 2. Belt-and-suspenders fallback
	d.ensureWitnessesRunning()   // 3. Witness health
	d.ensureRefineriesRunning()  // 4. Refinery health
	d.triggerPendingSpawns()     // 5. Bootstrap polecats
	d.processLifecycleRequests() // 6. Cycle/restart requests
}
```

Heartbeat freshness:

| Age | State | Boot Action |
|---|---|---|
| < 5 min | Fresh | Nothing (Deacon active) |
| 5-15 min | Stale | Nudge if pending mail |
| > 15 min | Very stale | Wake (Deacon may be stuck) |
| File | Purpose | Updated By |
|---|---|---|
| `deacon/heartbeat.json` | Deacon freshness | Deacon (each cycle) |
| `deacon/dogs/boot/.boot-running` | Boot in-progress marker | Boot spawn |
| `deacon/dogs/boot/.boot-status.json` | Boot last action | Boot triage |
| `deacon/health-check-state.json` | Agent health tracking | `gt deacon health-check` |
| `daemon/daemon.log` | Daemon activity | Daemon |
| `daemon/daemon.pid` | Daemon process ID | Daemon startup |
When tmux is unavailable, Gas Town enters degraded mode:

| Capability | Normal | Degraded |
|---|---|---|
| Boot runs | As AI in tmux | As Go code (mechanical) |
| Observe panes | Yes | No |
| Nudge agents | Yes | No |
| Start agents | tmux sessions | Direct spawn |
Gas Town supports multiple AI coding runtimes. Per-rig settings in `settings/config.json`:

```json
{
  "runtime": {
    "provider": "codex",
    "command": "codex",
    "args": [],
    "prompt_mode": "none"
  }
}
```
Gas Town's attribution enables objective model comparison:

```shell
# Deploy different models on similar tasks
gt sling gt-abc gastown --model=claude-sonnet
gt sling gt-def gastown --model=gpt-4

# Compare outcomes
bd stats --actor=gastown/polecats/* --group-by=model
```
Option 1: Worktrees (preferred)

```shell
gt worktree beads  # Creates ~/gt/beads/crew/gastown-joe/
```

Option 2: Dispatch to local workers

```shell
bd create --prefix beads "Fix authentication bug"
gt convoy create "Auth fix" bd-xyz
gt sling bd-xyz beads
```
Gas Town uses sparse checkout to exclude Claude Code context files:

```shell
git sparse-checkout set --no-cone '/*' '!/.claude/' '!/CLAUDE.md' '!/CLAUDE.local.md'
```
A marketplace for Gas Town formulas - like npm for molecules.

URI scheme:

```
hop://molmall.gastown.io/formulas/mol-polecat-work@4.0.0
```

Commands (future):

```shell
gt formula install mol-code-review-strict
gt formula upgrade mol-polecat-work
gt formula publish mol-polecat-work
```
Federation enables formula sharing across organizations using the Highway Operations Protocol.
```shell
gt dashboard --port 8080
open http://localhost:8080
```

Features:

- Real-time agent status
- Convoy progress tracking
- Hook state visualization
- Configuration management
```shell
gt completion bash > /etc/bash_completion.d/gt
gt completion zsh > "${fpath[1]}/_gt"
gt completion fish > ~/.config/fish/completions/gt.fish
```
| Problem | Solution |
|---|---|
| Agent in wrong directory | Check cwd, `gt doctor` |
| Beads prefix mismatch | Check `bd show` vs rig config |
| Worktree conflicts | Ensure `BEADS_NO_DAEMON=1` for polecats |
| Stuck worker | `gt nudge`, then `gt peek` |
| Dirty git state | Commit or discard, then `gt handoff` |
| `gt: command not found` | Add `$HOME/go/bin` to `PATH` |
| `bd: command not found` | `go install github.com/steveyegge/beads/cmd/bd@latest` |
| Daemon not starting | Check tmux: `tmux -V` |
| Agents lose connection | `gt hooks list` then `gt hooks repair` |
| Convoy stuck | `gt convoy refresh <convoy-id>` |
| Mayor not responding | `gt mayor detach` then `gt mayor attach` |
```shell
gt doctor           # Run health checks
gt doctor --fix     # Auto-repair common issues
gt doctor --verbose # Detailed output
gt status           # Show workspace status
```
```shell
BD_DEBUG_ROUTING=1 bd show <id>  # Debug beads routing
gt peek <agent>                  # Check agent health
tail -f ~/gt/daemon/daemon.log   # View daemon log
```
- **Using dogs for user work**: Dogs are Deacon infrastructure. Use crew or polecats.
- **Confusing crew with polecats**: Crew is persistent and human-managed. Polecats are transient.
- **Working in the wrong directory**: Gas Town uses cwd for identity detection.
- **Waiting for confirmation when work is hooked**: The hook IS your assignment. Execute immediately.
- **Creating worktrees when dispatch is better**: If work should be owned by the target rig, dispatch instead.
- **Reading formulas directly**: Use the `bd cook` → `bd mol pour` pipeline instead.
- **Batch-closing molecule steps**: Close steps in real-time to maintain an accurate timeline.
- **Town**: The management headquarters (e.g., `~/gt/`). Coordinates all workers across multiple Rigs.
- **Rig**: A project-specific Git repository under Gas Town management.
- **Mayor**: Chief-of-staff agent responsible for initiating Convoys and coordinating work.
- **Deacon**: Daemon beacon running continuous Patrol cycles for system health.
- **Dogs**: The Deacon's crew of maintenance agents for background tasks.
- **Boot**: A special Dog that checks the Deacon every 5 minutes.
- **Polecat**: Ephemeral worker agents that produce Merge Requests.
- **Refinery**: Manages the Merge Queue for a Rig.
- **Witness**: Patrol agent that oversees Polecats and the Refinery.
- **Crew**: Long-lived, named agents for persistent collaboration.
- **Bead**: Git-backed atomic work unit stored in JSONL format.
- **Formula**: TOML-based workflow source template.
- **Protomolecule**: A template class for instantiating Molecules.
- **Molecule**: Durable chained Bead workflows.
- **Wisp**: Ephemeral Beads destroyed after runs.
- **Hook**: A special pinned Bead for each agent's work queue.
- **Convoy**: Primary work-order wrapping related Beads.
- **Slinging**: Assigning work to agents via `gt sling`.
- **Nudging**: Real-time messaging between agents with `gt nudge`.
- **Handoff**: Agent session refresh via `/handoff`.
- **Seance**: Communicating with previous sessions via `gt seance`.
- **Patrol**: Ephemeral loop maintaining the system heartbeat.
- **MEOW**: Molecular Expression of Work - breaking large goals into trackable units.
- **GUPP**: Gas Town Universal Propulsion Principle - "If there is work on your Hook, YOU MUST RUN IT."
- **NDI**: Nondeterministic Idempotence - ensuring useful outcomes through orchestration.
As AI agents become central to engineering workflows, teams face new challenges:

- **Accountability**: Who did what? Which agent introduced this bug?
- **Quality**: Which agents are reliable? Which need tuning?
- **Efficiency**: How do you route work to the right agent?
- **Scale**: How do you coordinate agents across repos and teams?

Gas Town is an orchestration layer that treats AI agent work as structured data. Every action is attributed. Every agent has a track record. Every piece of work has provenance.
The problem: you want to assign a complex Go refactor. You have 20 agents. Some are great at Go. Some have never touched it. Some are flaky. How do you choose?

The solution: every agent accumulates a work history:

```shell
# What has this agent done?
bd audit --actor=gastown/polecats/toast

# Success rate on Go projects
bd stats --actor=gastown/polecats/toast --tag=go
```

Why it matters:

- **Performance management**: Objective data on agent reliability
- **Capability matching**: Route work to proven agents
- **Continuous improvement**: Identify underperforming agents for tuning
The problem: you have work in Go, Python, TypeScript, Rust. You have agents with varying capabilities. Manual assignment doesn't scale.

The solution: work carries skill requirements. Agents have demonstrated capabilities (derived from their work history). Matching is automatic:

```shell
# Agent capabilities (derived from work history)
bd skills gastown/polecats/toast
# → go: 47 tasks, python: 12 tasks, typescript: 3 tasks

# Route based on fit
gt dispatch gt-xyz --prefer-skill=go
```

Why it matters:

- **Efficiency**: Right agent for the right task
- **Quality**: Agents work in their strengths
- **Scale**: No human bottleneck on assignment
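A minimal sketch of skill-based routing, under stated assumptions: `pick_agent` is a hypothetical function, the histories mirror the `bd skills` output shape, and the pick-the-highest-count policy is my simplification of whatever gt's real matcher does.

```python
def pick_agent(histories, skill):
    """Pick the agent with the deepest track record in the required skill.

    `histories` maps agent name -> {skill: completed-task count}, mirroring
    the `bd skills` output sketched above. Illustrative policy only.
    """
    ranked = sorted(histories.items(),
                    key=lambda kv: kv[1].get(skill, 0), reverse=True)
    best, hist = ranked[0]
    # No demonstrated experience at all → no automatic match.
    return best if hist.get(skill, 0) > 0 else None
```

Given `{"toast": {"go": 47, "python": 12}, "nux": {"python": 30}}`, a Go task routes to `toast`, while a skill nobody has demonstrated returns no match and would fall back to human assignment.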
The problem: enterprise projects are complex. A "feature" becomes 50 tasks across 8 repos involving 4 teams. Flat issue lists don't capture this structure.

The solution: work decomposes naturally:

```
Epic: User Authentication System
├── Feature: Login Flow
│   ├── Task: API endpoint
│   ├── Task: Frontend component
│   └── Task: Integration tests
├── Feature: Session Management
│   └── ...
└── Feature: Password Reset
    └── ...
```

Each level has its own chain. Roll-ups are automatic. You always know where you stand.
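The automatic roll-up amounts to a recursive count over the tree. A Python sketch (the dict encoding of nodes and the `rollup` helper are my own illustration, not gt's data model):

```python
def rollup(node):
    """Return (closed, total) leaf tasks under an epic/feature/task node.

    Nodes are plain dicts: leaves carry a "done" flag, parents carry a
    "children" list. Illustrative encoding of the tree above.
    """
    if "children" not in node:
        return (1 if node.get("done") else 0, 1)
    done = total = 0
    for child in node["children"]:
        d, t = rollup(child)
        done += d
        total += t
    return done, total
```

An epic's progress is then just `rollup(epic)` at the root, with each feature reporting its own sub-count for free.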
The problem: your frontend can't ship until the backend API lands. They're in different repos. Traditional tools don't track this.

The solution: explicit cross-project dependencies:

```
depends_on:
  beads://github/acme/backend/be-456  # Backend API
  beads://github/acme/shared/sh-789   # Shared types
```
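The `beads://` reference is structured enough to parse mechanically. A hedged sketch: the host/org/repo/issue layout is inferred only from the two examples above, and `parse_bead_ref` is a hypothetical helper, not part of the beads toolchain.

```python
from urllib.parse import urlparse

def parse_bead_ref(uri):
    """Split a beads:// dependency reference into its parts.

    The path layout (host/org/repo/issue-id) is inferred from the
    examples above; the real scheme may carry more segments.
    """
    parsed = urlparse(uri)
    org, repo, issue = parsed.path.lstrip("/").split("/")
    return {"host": parsed.netloc, "org": org, "repo": repo, "issue": issue}
```

Parsing `beads://github/acme/backend/be-456` yields host `github`, org `acme`, repo `backend`, and issue `be-456`, which is enough to resolve the dependency against the right repo.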
The problem: an agent says "done." Is it actually done? Is the code quality acceptable? Did it pass review?

The solution: structured validation with attribution:

```json
{
  "validated_by": "gastown/refinery",
  "validation_type": "merge",
  "timestamp": "2025-01-15T10:30:00Z",
  "quality_signals": {
    "tests_passed": true,
    "review_approved": true,
    "lint_clean": true
  }
}
```
The problem: complex multi-agent work is opaque. You don't know what's happening until it's done (or failed).

The solution: work state as a real-time stream:

```
bd activity --follow
[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)
[14:32:09] → patrol-x7k.arm-ace.capture in_progress
[14:32:10] ✓ patrol-x7k.arm-ace.capture completed
[14:32:14] ✓ patrol-x7k.arm-ace.decide completed
[14:32:17] ✓ patrol-x7k.arm-ace COMPLETE
```

Why it matters:

- **Debugging in real-time**: See problems as they happen
- **Status awareness**: Always know what's running
- **Pattern recognition**: Spot bottlenecks and inefficiencies
| Capability | Developer Benefit | Enterprise Benefit |
|---|---|---|
| Attribution | Debug agent issues | Compliance audits |
| Work history | Tune agent assignments | Performance management |
| Skill routing | Faster task completion | Resource optimization |
| Federation | Multi-repo projects | Cross-org visibility |
| Validation | Quality assurance | Process enforcement |
| Activity feed | Real-time debugging | Operational awareness |
- **Attribution is not optional.** Every action has an actor.
- **Work is data.** Not just tickets - structured, queryable data.
- **History matters.** Track records determine trust.
- **Scale is assumed.** Multi-repo, multi-agent, multi-org from day one.
- **Verification over trust.** Quality gates are first-class primitives.
- **Always start with the Mayor** - it's designed to be your primary interface
- **Use convoys for coordination** - they provide visibility across agents
- **Leverage hooks for persistence** - your work won't disappear
- **Create formulas for repeated tasks** - save time with Beads recipes
- **Monitor the dashboard** - get real-time visibility
- **Let the Mayor orchestrate** - it knows how to manage agents
- **Always use `gt --help` or `gt <command> --help`** to verify syntax
MIT License - see LICENSE file for details.

This glossary was contributed by Clay Shirky in Issue #80.

Installation command:

```shell
tessl install github:numman-ali/n-skills --skill gastown
```