Agent setup
Quick vocabulary — this page configures agent types (the dialect / binary, e.g. Claude or Codex). That’s distinct from agents (the persistent identity, like
polecat-onyx) and sessions (the runtime tmux pane). See Agents, sessions & agent types for the full mental model — TL;DR: an agent identity runs a session of an agent type.
Gemba spawns sessions as panes inside a terminal multiplexer
(tmux / iTerm2 / Terminal.app). Each pane runs a CLI binary the
operator has installed; Gemba doesn’t ship any agent runtime itself.
You list the agent types available in .gemba/agents.toml —
one [[agent]] stanza per dialect — and the SPA’s “Start session”
picker reads that file at boot.
Auto-seeded. Since gm-root.24, ratifying a project through `/new` writes a default `.gemba/agents.toml` with `claude` declared (intra-parallel, max 4). You only need to hand-edit the file when adding a second agent type or tuning the parallelism caps. The schema below is the contract; the seeded file is a minimal version of it. Fresh projects also seed `CLAUDE.md` and `AGENTS.md` with the Gemba runtime contract: Beads is authoritative for work state and decisions, and source analysis defaults to GitNexus when available.
This guide covers the most common agents people drop into Gemba.
For the full schema (including the [agent.container] stanza for
sandboxed runs), see
internal/adapter/native/agents/registry.go.
For the architectural comparison across Claude, Codex, and Gas Town
session integrations, see Session runtime integration patterns.
The schema in one screen
```toml
# .gemba/agents.toml — workspace-local. Not committed by default;
# each operator's machine can carry a different roster.

[[agent]]
name = "claude"               # operator-chosen identifier; lower-kebab-case
binary = "claude"             # exec.LookPath'd on the operator's PATH
args = []                     # fixed argv prefix (before any caller args)
model = "claude-opus-4-7"     # passed via --model when the binary accepts it
preamble = "claude_md"        # how the project + epic + bead context reaches the agent
hooks = "claude_code"         # which gemba-bridge hook profile gets installed
interaction_mode = "balanced" # which interaction_profile.md section gets injected
intra_parallel = true         # may a single session carry multiple beads at once?
max_parallel = 3              # if so, what's the per-session cap?
```

| Field | Required | Notes |
|---|---|---|
| `name` | ✓ | Unique within the registry; the SPA’s picker label. |
| `binary` | ✓ | Looked up via `exec.LookPath`. Missing binary makes the type unavailable (logged, not fatal). |
| `args` | | Empty list = no fixed prefix. |
| `model` | | Many CLIs accept `--model`; some don’t. Leave blank to take the agent’s own default. |
| `preamble` | ✓ | `claude_md` / `first_message` / `codex_exec` / `stdout_banner`. See § Preamble below. |
| `hooks` | ✓ | `claude_code` / `prompt_command` / `none`. See § Hooks below. |
| `interaction_mode` | | `dangerous` / `balanced` / `cautious`. Default `balanced`. |
| `intra_parallel` | | Default `false` (one bead per session). |
| `max_parallel` | | Required when `intra_parallel = true`; ignored otherwise. |
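As a concrete reading of the table, a minimal registry that sets only the required fields might look like this (the agent names are examples; optional fields fall back to their defaults):

```toml
# Minimal roster — required fields only; optional fields take their defaults
# (interaction_mode = "balanced", intra_parallel = false, no model override).
[[agent]]
name = "claude"
binary = "claude"
preamble = "claude_md"
hooks = "claude_code"

[[agent]]
name = "shell-only"
binary = "zsh"
preamble = "stdout_banner"
hooks = "prompt_command"
```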
Preamble strategies
How the project + epic + bead context reaches the agent at session start. The composed markdown is the same regardless of strategy — only the delivery channel differs.
| Value | Mechanism | When to use |
|---|---|---|
| `claude_md` | Appends a fenced block to `CLAUDE.md`, removed on session end. | Claude Code (it auto-reads `CLAUDE.md`). |
| `first_message` | Sends the preamble as the first user prompt. | Most other agent CLIs (Aider, Codex, Cursor’s chat). |
| `codex_exec` | Writes the preamble to a prompt file before spawn; `gemba-codex-driver` passes that file to `codex exec --json`. | OpenAI Codex CLI through the native driver. |
| `stdout_banner` | Prints a markdown banner to the terminal. | Shell-only (no agent — operator reads it). |
Hook profiles
Hooks let the agent’s lifecycle (file edits, prompt submission,
session end) flow back to Gemba so the SPA can render session state
and correlate bd mutations to the session that made them.
| Value | Installs | Compatibility |
|---|---|---|
| `claude_code` | `.claude/settings.local.json` Claude Code hook stanza. | Claude Code only. |
| `prompt_command` | A `$PROMPT_COMMAND` shellrc fragment. | Bash/zsh shell sessions. |
| `none` | Nothing — Gemba sees only spawn + exit signals. | Anything else. |
Pick `none` for an agent that doesn’t expose a hook surface;
sessions still work, but you don’t get the live progress badges.
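To make the `prompt_command` profile concrete, here is a hand-written sketch of the kind of shellrc fragment it could install. The `gemba-bridge report` subcommand and the hook function name are assumptions for illustration, not the fragment Gemba actually writes:

```shell
# Sketch only — illustrates the $PROMPT_COMMAND mechanism; the real fragment
# installed by gemba-bridge may differ in command name and payload.
__gemba_report() {
  local status=$?
  # Hypothetical reporting call: forward the last exit status to Gemba.
  gemba-bridge report --exit="$status" 2>/dev/null || true
}
# Prepend our hook, preserving any PROMPT_COMMAND the operator already has.
PROMPT_COMMAND="__gemba_report${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```

Bash evaluates `$PROMPT_COMMAND` before printing each prompt, which is what lets every manual command be attributed to the session; zsh has no `PROMPT_COMMAND` of its own, so presumably the installed fragment wires the equivalent `precmd` hook there.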
Per-agent recipes
Claude Code
The first-class path. Claude Code reads CLAUDE.md automatically
and exposes a Hooks API that gemba-bridge plugs straight into.
gemba install-bridge --agent=claude also registers the gemba MCP
server in .claude/settings.local.json. The seeded CLAUDE.md tells
Claude to use Beads/Gemba for milestones, epics, beads, design
decisions, dependencies, and evidence, and to prefer GitNexus/source
analysis for code-impact questions when the index is fresh.
Install: https://claude.com/claude-code (brew install claude-code).
```toml
[[agent]]
name = "claude"
binary = "claude"
args = []
model = "claude-opus-4-7"   # or claude-sonnet-4-6, claude-haiku-4-5
preamble = "claude_md"
hooks = "claude_code"
intra_parallel = true
max_parallel = 3
```

What you get: SessionStart preamble injection, PreToolUse safety
prompts surfaced in the SPA, automatic correlation of every `bd`
update back to the session, live “ready / working / prompting /
stalled” pill, transcript tab.
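For orientation, the hook stanza the `claude_code` profile installs into `.claude/settings.local.json` follows Claude Code’s hooks settings shape; the sketch below is illustrative, and the `gemba-bridge` subcommands shown are assumptions — `gemba install-bridge --agent=claude` writes the real stanza for you:

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "gemba-bridge hook session-start" }] }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write|Bash",
        "hooks": [{ "type": "command", "command": "gemba-bridge hook pre-tool-use" }]
      }
    ]
  }
}
```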
OpenAI Codex CLI
The codex CLI from openai/codex works best in Gemba through the
native gemba-codex-driver. The driver writes the composed bead
preamble to a prompt file, runs codex exec --json non-interactively,
exposes the session-scoped gemba-mcp server to Codex, reports
fallback working / bead-done through gemba-state, and closes the
bead when Codex exits successfully.
Codex also reads AGENTS.md in the worktree. Gemba seeds that file so
Codex knows that Beads is the source of truth for project work state
and that GitNexus/source analysis should be used for impact and module
questions when configured. The session-scoped MCP config supplied by
gemba-codex-driver is the live tool channel; AGENTS.md is the
durable reminder of what those tools mean. The /onboard deterministic
setup gate also writes `.codex/settings.local.json` with a gemba
MCP server entry for native Codex environments that honor workspace
settings.
Install: npm install -g @openai/codex (or per their README).
```toml
[[agent]]
name = "codex"
binary = "gemba-codex-driver"
args = [
  "--sandbox", "workspace-write",
  "--ask-for-approval", "never",
]
model = "gpt-5.4-mini"
preamble = "codex_exec"
hooks = "none"
interaction_mode = "balanced"
intra_parallel = true
max_parallel = 2
```

Notes:
- Gemba ships `gemba-codex-driver` beside the main `gemba` binary. The driver invokes the `codex` binary on PATH; override with `--codex-bin /path/to/codex` in `args` if needed.
- For unattended acceptance runs, `--ask-for-approval never` prevents approval prompts from deadlocking the run. For watched manual sessions, use `on-request` instead.
- `hooks = "none"` because there is no Codex equivalent of Claude Code’s hook API. Lifecycle still appears in the SPA because the driver emits `gemba-state` frames.
- Codex also receives a session-scoped MCP server named `gemba`. The prompt instructs Codex to call `report_state`, `ask_question`, `raise_blocker`, and `emit_skill_output` for cooperative semantic telemetry. These tool calls enrich the Status pane and escalation surfaces, but they are not a replacement for the driver’s fallback lifecycle frames.
- Set `OPENAI_API_KEY` in your shell env before launching `gemba serve` — child sessions inherit it.
Codex MCP telemetry is cooperative: the model must choose to call the
tool. The driver remains the source of hard lifecycle guarantees
because it owns the codex exec process, timeout, failure state,
bead close, dirty-worktree commit, and final bead-done fallback.
GitHub Copilot CLI
gh copilot is a one-shot suggest/explain command, not an
interactive coding agent. Gemba’s session model assumes a long-
running pane, so Copilot CLI is a poor fit for autonomous loops.
If you want a Copilot-shaped session anyway (operator drives, agent suggests inline), the closest workable shape is to wrap it in a shell:
```toml
[[agent]]
name = "copilot-shell"
binary = "zsh"
args = ["-l"]   # interactive shell, gh copilot called manually
preamble = "stdout_banner"
hooks = "prompt_command"
```

The operator runs `gh copilot suggest "..."` inside the pane as
needed; `gemba-bridge` correlates each command via `$PROMPT_COMMAND`.
For real agent-loop work, use Claude Code, Codex, or Aider instead.
Aider (any provider — OpenAI, Anthropic, local Ollama, …)
Aider is a strong “use someone else’s model” path. It supports OpenAI, Anthropic, and any OpenAI-compatible endpoint (Ollama, LiteLLM, OpenRouter), and it has an interactive REPL that takes a first-message prompt.
Install: pipx install aider-install && aider-install (or
pip install aider-chat).
OpenAI flavor (uses OPENAI_API_KEY):
```toml
[[agent]]
name = "aider-openai"
binary = "aider"
args = ["--model", "gpt-5", "--no-auto-commits", "--yes"]
preamble = "first_message"
hooks = "none"
```

Anthropic flavor (uses ANTHROPIC_API_KEY):
```toml
[[agent]]
name = "aider-anthropic"
binary = "aider"
args = ["--model", "anthropic/claude-sonnet-4-6", "--no-auto-commits", "--yes"]
preamble = "first_message"
hooks = "none"
```

Local-model flavor (Ollama, see § Ollama below):
```toml
[[agent]]
name = "aider-ollama"
binary = "aider"
args = [
  "--model", "ollama_chat/qwen2.5-coder:32b",
  "--no-auto-commits", "--yes",
]
preamble = "first_message"
hooks = "none"
```

Set `OLLAMA_API_BASE=http://127.0.0.1:11434` in your shell env
before launching `gemba serve` so Aider’s child process inherits it.
Ollama (raw ollama run — no agent loop)
ollama run <model> is a chat REPL — useful for interactive
exploration, but it doesn’t edit files or call tools, so it’s not a
coding agent in the sense Gemba’s dispatcher assumes. If you want
agentic behavior with a local model, drive Ollama through Aider
(above) instead.
For raw inspection sessions:
```toml
[[agent]]
name = "ollama-chat"
binary = "ollama"
args = ["run", "qwen2.5-coder:32b"]
preamble = "first_message"
hooks = "none"
```

Plain shell
Always useful — a shell session that gemba-bridge correlates so
your manual bd invocations show up in the right session row.
```toml
[[agent]]
name = "shell-only"
binary = "zsh"   # or bash, fish
args = ["-l"]
preamble = "stdout_banner"
hooks = "prompt_command"
```

Cursor / Continue / VS Code
These are IDE-resident agents — Gemba’s pane-spawn model can’t
host them. Run them in your editor as usual; if you want the work
they do to land in Gemba, have them invoke bd from a terminal
inside the IDE.
A complete example
A roster covering the four agents most people end up wanting:
```toml
# Claude Code — the first-class path.
[[agent]]
name = "claude"
binary = "claude"
args = []
model = "claude-opus-4-7"
preamble = "claude_md"
hooks = "claude_code"
intra_parallel = true
max_parallel = 3

# OpenAI Codex CLI — for hosted-model OpenAI work.
[[agent]]
name = "codex"
binary = "gemba-codex-driver"
args = ["--sandbox", "workspace-write", "--ask-for-approval", "never"]
model = "gpt-5.4-mini"
preamble = "codex_exec"
hooks = "none"
interaction_mode = "balanced"
intra_parallel = true
max_parallel = 2

# Aider against a local Ollama model — fully offline.
[[agent]]
name = "aider-ollama"
binary = "aider"
args = ["--model", "ollama_chat/qwen2.5-coder:32b", "--no-auto-commits", "--yes"]
preamble = "first_message"
hooks = "none"

# Plain shell — manual bd work, scripts, pair programming.
[[agent]]
name = "shell-only"
binary = "zsh"
args = ["-l"]
preamble = "stdout_banner"
hooks = "prompt_command"
```

Verifying the registry
After editing .gemba/agents.toml, restart gemba serve. The
banner prints the agents that loaded successfully and any that
were skipped because their binary wasn’t on PATH:
```
agents: registered claude (binary=/opt/homebrew/bin/claude)
agents: skipping codex — binary not found on PATH
```

Validation errors (duplicate names, unknown preamble/hooks
values, missing required fields) are reported all at once so you
can fix them in one pass.
Where the agents.toml lives
- Default: `<project-root>/.gemba/agents.toml`.
- Override: pass `--agents-registry <path>` to `gemba serve`.
- Missing: non-fatal — sessions just can’t be spawned until you drop one in. The SPA’s “Start session” picker shows an empty state with a link back to this guide.
See also
- Parallelism in Gemba — `intra_parallel` / `max_parallel` mechanics.
- Native adaptor reference — the spawn / pane / lifecycle implementation under `internal/adapter/native`.