Code Scout
Subscriptions and API bills add up. Most stacks route your repo through someone else's cloud.
Code Scout flips that.
Your code stays on your machine. Cloud providers only see what you send when you add your own keys.
How we're different
Running a local model means working in a narrow context window. Code Scout separates planning from execution so each role stays within a smaller, focused context.
Role 1 · Orchestrator
Handles strategy, tool selection, and step orchestration; runs local or hosted, depending on how you configure it.
Role 2 · Coder
Runs against your repo: typically the local model you point at for execution.
Orchestrator → Tools → Coder → Feedback
Execution loop
Every step emits real output (logs, diffs, test results) that becomes input for the next move.
Small context
Only carry forward what the next step needs.
Grounded runs
Tool output and files, not vague prose.
Stable on small models
A tight loop beats a giant blob of state.
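The loop above can be sketched in Rust (the language Code Scout is built in). This is an illustrative toy, not Code Scout's actual API: `Context`, `Step`, `next_step`, `execute`, and `CARRY_LIMIT` are all hypothetical names, and the orchestrator here walks a fixed plan instead of calling a model. The point it demonstrates is the shape of the loop: every step emits real output, and only a bounded window of that output is carried forward.

```rust
const CARRY_LIMIT: usize = 2; // keep the window small-model friendly

#[derive(Debug, Default)]
struct Context {
    carry: Vec<String>, // only what the next step needs
}

enum Step {
    Tool(&'static str),  // shell, git, tests
    Coder(&'static str), // a focused edit task for the execution model
    Done,
}

// Toy orchestrator: a fixed plan stands in for the planning model.
fn next_step(turn: usize) -> Step {
    match turn {
        0 => Step::Tool("git status"),
        1 => Step::Coder("fix failing test in parser.rs"),
        2 => Step::Tool("cargo test"),
        _ => Step::Done,
    }
}

// Each step produces real output (logs, diffs, test results).
fn execute(step: &Step) -> String {
    match step {
        Step::Tool(cmd) => format!("tool output for `{cmd}`"),
        Step::Coder(task) => format!("diff for `{task}`"),
        Step::Done => String::new(),
    }
}

fn run_loop() -> Context {
    let mut ctx = Context::default();
    let mut turn = 0;
    loop {
        let step = next_step(turn);
        if matches!(step, Step::Done) {
            break;
        }
        // Feedback: the step's output becomes input for the next move.
        ctx.carry.push(execute(&step));
        // Drop stale output so the context stays tight.
        if ctx.carry.len() > CARRY_LIMIT {
            ctx.carry.remove(0);
        }
        turn += 1;
    }
    ctx
}

fn main() {
    let ctx = run_loop();
    println!("final context: {:?}", ctx.carry);
}
```

The trimming step is the design choice that matters: by discarding everything except the last few results, the loop stays inside a small model's context window no matter how long the run gets.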
From shell, git, and repo context to benchmarking local and cloud models, all in one native Mac app.

Shell, files, and git in the same workbench, so the agent can act instead of just suggesting.

Project structure, conventions, and notes land in plain files the model can reuse.

Run the same prompts and tool loop against local weights and hosted endpoints to compare latency, output quality, and cost, then choose the right stack per task.
How it works
Local runs and tool round-trips can be slow or quiet. A heartbeat shows when the agent is running a tool, streaming output, or waiting on the model — so you're never staring at a frozen panel wondering whether it's thinking or hung.
Plans steps and picks tools — local or hosted, depending on your models and keys.
Ollama · OpenRouter · other
steps · tool choice
Infrastructure
Code Scout is a native Mac app (Tauri 2 + Rust): direct filesystem and shell access, fast to launch. That's why it feels like a real app, not a browser tab wrapped in Electron.
Beta
macOS beta (Apple Silicon builds). Free software. No app license fee. Join the list for rolling invites as capacity opens.
Ollama, LM Studio, llama.cpp, plus any OpenAI-compatible server you point Custom API at.
Add keys for major providers; use OpenRouter or others when you want a hosted orchestrator or heavier models. Hosted keys are never required for the core loop.
Discord for tips, models, and early feedback.
Early access
Code Scout is free software. One email for beta and launch updates only.
Rather chat live? Join the Discord channel.