The Trimo approach: local execution in Docker sandboxes
How Trimo lets teams leverage existing developer machines to run AI coding agents in parallel — safely isolated in Docker containers.
The problem with AI coding agents today
AI coding agents are getting good. Claude Code can scaffold an entire feature. Codex can write and test code autonomously. Cursor can refactor across files. But there's a gap between what these agents can do in a single session and what it takes to actually ship software with them.
The gap is orchestration. When you want to run multiple agents in parallel — each working on a different feature, each on its own branch — you hit real problems:
- Where do they run? On your host machine, with access to everything? In the cloud, where you pay per minute and lose control of your code?
- How do you monitor them? Multiple terminal windows? Switching between tabs?
- How do you review their work? Git diff in the terminal? Spin up a dev server manually for each branch?
- How do you course-correct? Kill the process? Hope it reads your next message?
Trimo exists to solve these problems. Not by building yet another agent, but by building the orchestration layer that existing agents need.
Cloud control, local execution
Trimo's architecture has two parts:
- A cloud dashboard — the cockpit where you see all your pipelines, review diffs, write prompts, and monitor progress.
- A local daemon — a lightweight background process on your machine that receives commands from the dashboard and manages Docker containers.
The cloud handles the UI and state. Your machine handles the execution. This means:
- No cloud compute costs. Agents run on hardware you already own.
- Your code stays local. Source code never leaves your machine. The cloud only sees metadata — pipeline status, run events, prompts.
- Full Docker isolation. Each agent runs in its own container with its own filesystem, network, and process space. Nothing leaks to the host.
- Parallel execution is natural. Your machine can run multiple containers simultaneously. Each one is isolated from the others.
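The metadata-only split can be made concrete. A minimal sketch of what a run event synced to the cloud might contain, using hypothetical field names (the real payload shape is Trimo's internal detail); note that no file contents or diffs appear anywhere in it:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RunEvent:
    """Hypothetical run event: metadata only, never source code."""
    pipeline_id: str
    run_id: str
    status: str     # e.g. "running", "needs_attention", "done"
    message: str    # short human-readable progress note
    timestamp: float

event = RunEvent("pipe-42", "run-7", "running",
                 "agent committed 3 files", 1700000000.0)
payload = json.dumps(asdict(event))  # what the daemon would send upstream
```

The code itself stays on disk inside the container; only events like this cross the wire.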
Why Docker isolation matters
Most coding agents today run directly on the host machine. They read your files, write your files, and execute commands in your shell. This is fine for interactive use — you're watching, and you can hit Ctrl-C. But for autonomous runs, where no one is watching, host access is a liability.
What can go wrong without isolation
- Conflicting changes. Two agents editing the same file on the same filesystem. One overwrites the other's work.
- Environment pollution. An agent installs a dependency globally, changes a config file, or modifies your shell profile. Now your machine is in a state you didn't expect.
- Port conflicts. Two agents try to start dev servers on the same port.
- Runaway processes. An autonomous agent spawns a process that doesn't stop. Without container limits, it eats your resources.
- Security exposure. An agent with host access can read your SSH keys, environment variables, credentials files, and anything else on your machine.
What Docker gives you
Docker containers provide real, kernel-level isolation:
- Filesystem isolation. Each container has its own root filesystem. Agents can write anywhere inside the container without affecting your host or other containers.
- Process isolation. Processes in a container can't see or interact with processes outside it. A runaway agent is confined to its container.
- Network isolation. Each container has its own network namespace. No port conflicts. No accidental access to host services.
- Resource limits. CPU and memory limits prevent any single agent from starving the rest of your system.
- Clean teardown. Stop the container, and everything the agent did is gone. No residue on your host.
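Each of these isolation properties maps onto a standard `docker run` flag. A sketch of the invocation a daemon like Trimo's might compose — the flag values and image name are illustrative assumptions, not Trimo's actual defaults:

```python
def build_docker_cmd(image: str, name: str) -> list[str]:
    """Compose a docker run command with the isolation knobs described above."""
    return [
        "docker", "run",
        "--rm",                  # clean teardown: container filesystem discarded on stop
        "--name", name,
        "--network", "bridge",   # own network namespace; no host port conflicts
        "--memory", "4g",        # resource limit: cap RAM per agent
        "--cpus", "2",           # resource limit: cap CPU per agent
        "--pids-limit", "256",   # confine runaway process spawning
        image,
    ]

cmd = build_docker_cmd("trimo/base:latest", "pipeline-42")
```

Every flag here is stock Docker; no privileged mode and no host bind mounts means the container sees only its own root filesystem.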
This isn't theoretical. This is the same isolation that production systems have relied on for over a decade. Docker containers use Linux namespaces and cgroups — the same primitives that power every major cloud platform.
How Trimo uses containers
When you trigger a run in Trimo, here's what happens:
- The daemon starts a container from the base image, configured for your project.
- The container clones your repo and checks out the pipeline's branch.
- The agent runs inside the container with access to the codebase, git, and the tools it needs — but nothing else.
- Git operations are managed. The container includes safety wrappers around git and the GitHub CLI that enforce rules — auto-push on commit, branch protection, no force pushes.
- Events stream back to the dashboard through the daemon, so you can monitor progress in real time.
- When the run completes, the code is committed and pushed. The container can be recycled or stopped.
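The lifecycle above reduces to an ordered sequence of commands executed inside a fresh container. A simplified sketch — the paths, URLs, and raw-git commands here are illustrative; the real container routes git through its safety wrappers:

```python
def run_lifecycle(repo_url: str, branch: str, agent_cmd: str) -> list[str]:
    """Return the ordered steps a fresh container executes for one run (sketch)."""
    return [
        f"git clone {repo_url} /workspace",        # fresh checkout per container
        f"git -C /workspace checkout {branch}",    # pipeline's branch
        f"cd /workspace && {agent_cmd}",           # agent works on the codebase
        "git -C /workspace push origin HEAD",      # auto-push so work is never lost
    ]

steps = run_lifecycle("https://github.com/acme/app.git", "feature/login", "claude")
```

Because the clone lives inside the container, tearing the container down discards everything except what was pushed.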
Each pipeline gets its own branch. Each run in a pipeline builds on the previous run's commits. The result is a clean, linear git history that tells the story of how a feature was built — one prompt at a time.
Parallel execution on your hardware
A modern developer machine — even a laptop — can comfortably run several Docker containers in parallel. A Mac Mini with 32GB of RAM can handle a handful of agents working on different features simultaneously.
This is the key insight: most developer machines are underutilized. When you're reviewing code, writing a prompt, or in a meeting, your CPU and memory are sitting idle. Trimo lets you put those resources to work.
For teams that want to scale further, multiple machines can each run a daemon connected to the same cloud dashboard. A rack of Mac Minis becomes a fleet of agent execution nodes — all managed from one place, with no cloud compute bills.
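A back-of-the-envelope capacity check makes this concrete. Assuming roughly 4 GB of RAM and one CPU core per agent container, with 8 GB reserved for the host — all assumptions, not Trimo's measured footprint:

```python
def max_agents(total_ram_gb: int, total_cores: int,
               ram_per_agent_gb: int = 4, cores_per_agent: int = 1,
               host_reserve_gb: int = 8) -> int:
    """Estimate how many agent containers a machine can run in parallel."""
    by_ram = (total_ram_gb - host_reserve_gb) // ram_per_agent_gb
    by_cpu = total_cores // cores_per_agent
    return max(0, min(by_ram, by_cpu))

# A 32 GB, 10-core Mac Mini: bounded by RAM at 6 agents under these assumptions.
print(max_agents(32, 10))  # → 6
```

The binding constraint is usually memory, not CPU — agents spend most of their time waiting on the LLM, not computing.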
This stands in contrast to cloud sandbox approaches like Codex, where every agent session adds to your compute bill. Other local-execution tools like Conductor share the philosophy of running on your hardware, but differ in isolation model and orchestration capabilities.
The framework-agnostic approach
Trimo doesn't build agents. It orchestrates them. The base image is designed to work with any agent that can run in a terminal:
- Claude Code — runs natively in the container
- Codex CLI — same approach
- Local models — if it runs in a shell, it works
- Custom scripts — your own automation, wrapped in Trimo's lifecycle
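Because any terminal-runnable agent works, the per-pipeline agent configuration can reduce to a command string. A hypothetical sketch — the keys and commands are illustrative, not Trimo's actual schema:

```python
# Illustrative agent registry: any shell-runnable command qualifies.
AGENTS = {
    "claude-code": "claude",                       # Claude Code CLI
    "codex": "codex",                              # Codex CLI
    "local-model": "ollama run qwen2.5-coder",     # a locally hosted model
    "custom": "./scripts/my_agent.sh",             # your own automation
}

def agent_command(name: str) -> str:
    """Resolve an agent name to the command the container will execute."""
    return AGENTS[name]
```

The container doesn't care what the command is, only that it runs in a shell and exits when done.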
You bring your own LLM subscription. Trimo doesn't sell tokens, mark up API calls, or sit between you and your model provider. Your prompts stay in your workspace. They're never aggregated or used for training.
What the workflow looks like
Trimo's core loop has six stages:
- Discover — identify work to be done, either manually or through automated sources.
- Dispatch — write a prompt, pick an agent, trigger a run. Do this for multiple features at once.
- Monitor — see all your pipelines on a dashboard. Which are running, which need attention, which are done.
- Review — inspect diffs, spin up a QA environment to test changes live in the browser.
- Intervene — spot an issue? Write a corrective prompt. The agent runs again with full context.
- Continue — the next prompt in the pipeline builds on everything before it. The cycle repeats.
This loop isn't linear. At any moment, you might be monitoring three pipelines, reviewing one, and writing a prompt for another. The dashboard externalizes the cognitive load so you can manage it all without losing context.
Git as infrastructure
In Trimo, git isn't an afterthought — it's baked into every layer:
- Auto-commit and push. The agent's work is committed and pushed automatically. Work is never lost, even if the container crashes.
- Branch continuity. Each pipeline maintains a branch. Runs accumulate commits on that branch, building a linear history of the feature.
- PR creation. When a pipeline is complete, a pull request is opened on GitHub.
- Safety wrappers. Git operations inside the container go through wrappers that enforce rules — no force pushes, no pushes to protected branches, automatic push after every commit.
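These wrapper rules amount to validating git's argument list before execution. A minimal sketch of the rule set — the real wrappers are Trimo-internal and shell-level; the function name and protected-branch list here are assumptions:

```python
PROTECTED_BRANCHES = {"main", "master"}

def check_git_args(args: list[str]) -> None:
    """Raise if a git invocation violates the container's rules (sketch)."""
    if not args or args[0] != "push":
        return  # only push is restricted; other subcommands pass through
    if any(a in ("--force", "-f", "--force-with-lease") for a in args):
        raise PermissionError("force pushes are blocked")
    if any(a in PROTECTED_BRANCHES for a in args):
        raise PermissionError("pushes to protected branches are blocked")

check_git_args(["push", "origin", "feature/login"])  # allowed
```

A real wrapper would sit in front of the `git` binary on the container's PATH, run this check, then delegate to the genuine binary — and trigger the automatic push after each `commit`.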
The tradeoffs
No architecture is free of tradeoffs. Here's what you give up with local execution:
- Your machine needs to be on. Unlike cloud sandboxes, Trimo agents run on your hardware. If your machine sleeps, agents pause.
- Docker is required. You need Docker Desktop or Docker Engine installed. This is the only external dependency.
- Compute is bounded by your hardware. You can run as many agents as your machine can handle, but that number is finite. For most teams, this is plenty. For massive parallelism, you add more machines.
We think these tradeoffs are worth it. No cloud compute bills. No code leaving your machine. Real isolation. And the ability to leverage hardware that's already sitting on your desk.
Getting started
Trimo is built around a simple setup: install the CLI, start the daemon, and open the dashboard. The only prerequisite is Docker. You bring your own agent and LLM subscription.
The free tier covers what a solo developer needs to get started — pipelines, runs, pipeline continuity, diff views, and git-native workflow. Pro unlocks unlimited daemons, unlimited parallel runs, and advanced features. Team and Enterprise tiers add multi-user collaboration, shared workspaces, and prompt intelligence.