Code Scout

Free

AI coding got expensive fast.

Subscriptions and API bills add up. Most stacks route your repo through someone else's cloud.

Code Scout flips that.

Your code stays on your machine. Cloud providers only see what you send when you add your own keys.

Join waitlist
Mac beta (Apple Silicon). Request access on the waitlist.
Discord — early builds & feedback

How we're different

A serious loop for small models

Running a local model means working in a narrow context window. Code Scout separates planning from execution so each role stays within a smaller, focused context.

Role 1

Orchestrator

Handles strategy, tool selection, and step sequencing. Runs on a local or hosted model, depending on how you configure it.

Role 2

Coder

Executes changes against your repo, typically using the local model you point it at.

Orchestrator → Tools → Coder → Feedback

Execution loop

Every step emits real output (logs, diffs, test results) that becomes input for the next move.

Run → Inspect → Fix → Repeat
  • Small context

    Only carry forward what the next step needs.

  • Grounded runs

    Tool output and files, not vague prose.

  • Stable on small models

    A tight loop beats a giant blob of state.
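The loop above can be sketched in a few lines. This is an illustrative model, not Code Scout's actual implementation; the function names (`run_step`, `fix`, `loop`) are hypothetical:

```python
# Hypothetical sketch of a run-inspect-fix loop that carries forward
# only the output the next step needs, keeping context small.

def run_step(action):
    """Stand-in for a real tool run (shell, tests): returns (ok, output)."""
    ok = "bug" not in action
    return ok, f"output of {action!r}"

def fix(action, output):
    """Stand-in for the coder model rewriting the action from tool output."""
    return action.replace("bug", "patch")

def loop(action, max_iters=5):
    context = []
    for _ in range(max_iters):
        ok, output = run_step(action)
        context = [output]            # small context: keep just the last output
        if ok:
            return action, context    # grounded: the run itself decides success
        action = fix(action, output)  # next move is fed real output, not prose
    return action, context
```

The key design choice the sketch shows: state is replaced, not accumulated, so a small model never has to attend to the whole history.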

Everything you need in a local agent workbench

From shell, git, and repo context to benchmarking local and cloud models, all in one native Mac app.

Real tool access

Shell, files, and git in the same workbench, so the agent can act instead of just suggesting.

Repo-aware context (.codescout)

Project structure, conventions, and notes land in plain files the model can reuse.
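As a hypothetical example of what those plain files might look like (the file names and layout here are illustrative, not Code Scout's documented format):

```markdown
<!-- .codescout/conventions.md (illustrative example) -->
# Project conventions
- Backend code lives in src-tauri/; UI code in src/
- Run the test suite before committing backend changes
- Prefer small, reviewable diffs
```

Because the notes are plain files in the repo, any model you point at the project can read them, and they travel with the repo under version control.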

Benchmark local & cloud models

Run the same prompts and tool loop against local weights and hosted endpoints to compare latency, output quality, and cost, then choose the right stack per task.
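One way to frame that comparison, as a sketch. The metric names, weights, and backend labels below are illustrative, not Code Scout's actual scoring:

```python
# Illustrative sketch: compare the same task run on different backends
# and pick a stack per task. Metrics and weighting are hypothetical.

def pick_backend(runs, max_latency_s=None):
    """runs: dicts with 'backend', 'latency_s', 'quality' (0-1), 'cost_usd'.
    Returns the backend with the best quality-minus-cost score
    among runs under the optional latency cap, or None."""
    candidates = [r for r in runs
                  if max_latency_s is None or r["latency_s"] <= max_latency_s]
    if not candidates:
        return None
    # Favor quality, penalize cost; local runs often have cost_usd == 0.
    return max(candidates, key=lambda r: r["quality"] - r["cost_usd"])["backend"]

runs = [
    {"backend": "local-llama",  "latency_s": 9.0, "quality": 0.7, "cost_usd": 0.0},
    {"backend": "hosted-large", "latency_s": 3.0, "quality": 0.9, "cost_usd": 0.4},
]
```

With no latency cap the free local run wins on quality-per-cost; cap latency at 5 seconds and the hosted model is the only viable pick. That is the "right stack per task" decision in miniature.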

How it works

One loop, with a visible heartbeat

Local runs and tool round-trips can be slow or quiet. A heartbeat shows when the agent is running a tool, streaming output, or waiting on the model — so you're never staring at a frozen panel wondering whether it's thinking or hung.
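The heartbeat can be modeled as a small state machine. The state names and transitions below are an illustrative sketch, not Code Scout's internals:

```python
from enum import Enum, auto

class AgentState(Enum):
    """Illustrative heartbeat states for the agent panel."""
    IDLE = auto()
    WAITING_ON_MODEL = auto()
    STREAMING_OUTPUT = auto()
    RUNNING_TOOL = auto()

# Allowed transitions: the UI always shows exactly one live state,
# so the panel is never silently "frozen" between two of them.
TRANSITIONS = {
    AgentState.IDLE:             {AgentState.WAITING_ON_MODEL},
    AgentState.WAITING_ON_MODEL: {AgentState.STREAMING_OUTPUT, AgentState.RUNNING_TOOL},
    AgentState.STREAMING_OUTPUT: {AgentState.RUNNING_TOOL, AgentState.IDLE},
    AgentState.RUNNING_TOOL:     {AgentState.WAITING_ON_MODEL, AgentState.IDLE},
}

def can_transition(a, b):
    return b in TRANSITIONS.get(a, set())
```

The point of making the states explicit is exactly the one in the paragraph above: "waiting on the model" and "hung" look identical without a heartbeat, and a state machine forces the UI to distinguish them.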

Step 1

Orchestrator

Plans steps and picks tools — local or hosted, depending on your models and keys.

Local or hosted: Ollama · OpenRouter · other. Drives steps and tool choice.

Infrastructure

Native shell, not a full browser stack

Tauri 2 · Vite · React · Rust

Code Scout is a native Mac app (Tauri 2 + Rust): direct filesystem and shell access, fast to launch, with no Electron browser stack underneath. That's why it feels like a real app, not a browser tab.

Beta

At a glance

Access

macOS beta (Apple Silicon builds). Free software. No app license fee. Join the list for rolling invites as capacity opens.

Local backends

Ollama, LM Studio, llama.cpp, plus any OpenAI-compatible server you point Custom API at.
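"OpenAI-compatible" concretely means the server accepts a chat-completions request shaped like the one below. The base URL is an example (Ollama's default port is 11434; adjust for your server), and the model name is a placeholder:

```python
import json

# Sketch of the request an OpenAI-compatible local server expects.
# base_url assumes Ollama's default port; point it at any compatible server.
base_url = "http://localhost:11434/v1"

def chat_request(model, user_msg):
    """Build the URL and JSON body for a chat-completions call."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,  # placeholder: whatever model your server serves
            "messages": [{"role": "user", "content": user_msg}],
        }),
    }
```

Anything that answers this shape of request, whether Ollama, LM Studio, llama.cpp's server, or a hosted endpoint, can sit behind Custom API.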

Cloud, on your terms

Add keys for major providers; use OpenRouter or others when you want a hosted orchestrator or heavier models. Cloud is never required for the core loop.

Community

Discord for tips, models, and early feedback.

Early access

Get early access

Code Scout is free software. One email for beta and launch updates only.

Mac beta (Apple Silicon)

This email is only for the Code Scout beta waitlist and launch updates — not newsletters or unrelated marketing.

Rather chat live? Join the Discord channel.