# Multi-Agent
This is where Banyan earns its keep. Your banyan.yaml doesn’t change — you add more servers, and Banyan distributes your containers across them.
## Architecture

```mermaid
graph TD
    Engine[fa:fa-server banyan-engine] -->|gRPC :50051| W1
    Engine -->|gRPC :50051| W2
    subgraph W1[Agent 1]
        Agent1[fa:fa-cube banyan-agent]
        C1{{fa:fa-box containers}}
    end
    subgraph W2[Agent 2]
        Agent2[fa:fa-cube banyan-agent]
        C2{{fa:fa-box containers}}
    end
```
The Engine orchestrates. Agents run containers. All communication happens over gRPC with public key authentication.
## Prerequisites

Install the appropriate binaries on each server. See Installation.

- **Engine node:** `banyan-engine`, `banyan-cli`, etcd (managed automatically by default)
- **Agent nodes:** `banyan-agent`, containerd, nerdctl, wireguard-tools
- **Deploy machine:** `banyan-cli` (can be the engine node or any other machine)
## 1. Start the Engine

On your Engine server (e.g., `192.168.1.10`):

```bash
sudo banyan-engine init
sudo systemctl enable --now banyan-engine
```

During init, Banyan generates a WireGuard keypair for the engine and creates the whitelisted keys directory at `/etc/banyan/whitelisted-keys/`. The wizard also asks for:

- **Etcd setup** — choose Managed (recommended) or External if you have your own etcd cluster.
The engine’s public key is displayed during init. Copy it — agents and CLI clients need it to set up encrypted control tunnels.
The Engine starts a gRPC server on port 50051 by default. To verify from another machine, first configure the CLI there:

```bash
sudo banyan-cli init
# The wizard asks for: engine host and gRPC port
# It generates a WireGuard keypair and displays the public key
```

```bash
# Copy the CLI's public key to the engine
sudo banyan-engine add-client --name deploy-machine --pubkey '<cli-public-key>'
```

Verify the connection:

```bash
banyan-cli engine
```

```
Engine
==================================================
 Status:  running
 Uptime:  2m
 CPU:     1.5% (4 cores)
 Memory:  0.2GB / 4.0GB
 Disk:    8.0GB / 50.0GB

Cluster Summary
--------------------------------------------------
 Agents:       0/0 connected
 Deployments:  0/0 running
 Containers:   0/0 healthy
 Tasks:        0 completed, 0 failed
```

## 2. Add Agents
On Agent 1 (`192.168.1.11`):

```bash
sudo banyan-agent init
sudo systemctl enable --now banyan-agent
```

The init wizard asks for:

- **Engine host** — IP or hostname of the engine server (e.g., `192.168.1.10`).
- **Engine gRPC port** — default `50051`.
- **Node name** — unique name for this agent (default: hostname).
- **Engine WireGuard public key** — the engine's public key from `banyan-engine init` (optional, enables encrypted control tunnel).
During init, Banyan generates a WireGuard keypair and displays the agent’s public key. Copy this key to the engine:
```bash
# On the engine machine
sudo banyan-engine add-client --name worker-1 --pubkey '<worker-1-public-key>'
```

On Agent 2 (`192.168.1.12`):

```bash
sudo banyan-agent init
sudo systemctl enable --now banyan-agent
```

Each Agent connects to the Engine via gRPC, registers, and starts a heartbeat.
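The register-then-heartbeat lifecycle is the core of how the engine knows which agents are alive. Banyan's actual gRPC messages aren't shown in this guide; the sketch below is purely conceptual, and every name in it (`Agent`, `register`, `heartbeat`, the dict-based "engine") is illustrative rather than Banyan's API:

```python
import time

class Agent:
    """Toy model of an agent's register-then-heartbeat lifecycle.

    In Banyan, both calls would be gRPC requests to the engine on
    port 50051; here the "engine" is just a dict we mutate locally.
    """

    def __init__(self, name, engine):
        self.name = name
        self.engine = engine

    def register(self):
        # One-time registration: the engine records the new agent
        self.engine.setdefault("agents", {})[self.name] = {"last_seen": time.time()}

    def heartbeat(self):
        # Sent periodically; a stale last_seen lets the engine mark
        # the agent as disconnected while its containers keep running
        self.engine["agents"][self.name]["last_seen"] = time.time()

engine = {}
agent = Agent("worker-1", engine)
agent.register()
agent.heartbeat()
```

The important property is the one the High availability section relies on: heartbeats only update state on the engine, so losing the engine never stops containers on the agents.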
## 3. Verify the cluster

```bash
banyan-cli agent
```

```
NAME      STATUS     CONTAINERS  CPU   MEM   TAGS
---------------------------------------------------------------------------
worker-1  connected  0           1.2%  5.0%
worker-2  connected  0           0.8%  4.5%
```

## 4. Deploy
The same manifest from the Quickstart works here without changes. Banyan distributes replicas across agents automatically.
```yaml
name: my-app

services:
  web:
    build: ./web
    ports:
      - "80:80"
    depends_on:
      - api

  api:
    build: ./api
    deploy:
      replicas: 3
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db.my-app.internal
      - DB_PORT=5432
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=banyan
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
```

```bash
banyan-cli up -f banyan.yaml
```

Banyan distributes 5 containers across 2 agents based on available resources — each task goes to the agent with the most available memory:
| Agent 1 | Agent 2 |
|---|---|
| my-app-web-0 | my-app-api-0 |
| my-app-api-1 | my-app-api-2 |
| my-app-db-0 | |
The manifest didn’t change. You went from one server to two — same YAML, more capacity. Banyan tracks CPU, memory, and disk on every agent and makes scheduling decisions accordingly. No manual pinning needed for most workloads.
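The most-available-memory rule described above is simple enough to sketch. Banyan's real scheduler also tracks CPU and disk, so treat this as a model, not the implementation: the data structures and the flat 256 MiB per-task reservation are assumptions made for the example.

```python
def pick_agent(free_mem):
    """Return the agent with the most free memory (dict: name -> MiB)."""
    return max(free_mem, key=free_mem.get)

def schedule(tasks, agents, task_mem=256):
    """Greedy placement: each task goes to the currently least-loaded agent.

    task_mem is an assumed flat reservation per task, only so the
    example has something to subtract; Banyan's real accounting differs.
    """
    placement = {}
    free = dict(agents)  # don't mutate the caller's view
    for task in tasks:
        target = pick_agent(free)
        placement[task] = target
        free[target] -= task_mem  # account for the placed task
    return placement

tasks = ["web-0", "api-0", "api-1", "api-2", "db-0"]
agents = {"worker-1": 2048, "worker-2": 2048}
placement = schedule(tasks, agents)
# With two identical agents this alternates: worker-1 gets web-0,
# api-1, db-0 and worker-2 gets api-0, api-2, matching the table above.
```

Note the greedy loop rebalances as it goes: placing a task reduces that agent's free memory, which is what pushes the next task to the other agent.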
## 5. Check containers on agents
From the CLI:
```bash
banyan-cli container
```

Or SSH into an agent and list running containers directly:

```bash
sudo nerdctl ps
```

## Deploying from a remote machine
You don't need to run `up` from the Engine node. Any machine with `banyan-cli` can deploy as long as it can reach the Engine's gRPC port:
```bash
# First configure the CLI (run once)
sudo banyan-cli init
# Enter the engine host and port — generates a keypair and displays the public key

# Copy the CLI's public key to the engine
sudo banyan-engine add-client --name deploy-machine --pubkey '<cli-public-key>'

# Deploy from anywhere
banyan-cli up -f banyan.yaml
```

After a machine reboot, run `sudo banyan-cli login` to re-establish the WireGuard tunnel. No prompts — it reads the saved config.
## Adding more agents

1. Install `banyan-agent`, containerd, nerdctl, and wireguard-tools on the new server.
2. Run `sudo banyan-agent init` (enter engine host, port, and node name).
3. Whitelist the agent's public key: `sudo banyan-engine add-client --name <name> --pubkey <key>`.
4. Run `sudo systemctl enable --now banyan-agent`.
The new agent appears in `banyan-cli agent` within seconds. Future deployments include it automatically.
That’s the point — scaling is adding a server, not editing a manifest.
## High availability
By default, the engine is a single process. If it goes down, agents keep running containers but no new deployments can be scheduled until it’s back.
For clusters where control plane downtime is not acceptable, you can run multiple engines. See High Availability.
## Firewall requirements
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| 50051 | TCP | Agents/CLI → Engine | gRPC (all control plane communication) |
| 50052 | TCP | Engine → Agents | gRPC (log streaming) |
| 5000 | TCP | Agents → Engine | OCI registry (image distribution) |
| 51820 | UDP | Agent ↔ Agent | WireGuard overlay (encrypted container traffic) |
| 51821 | UDP | Agents/CLI → Engine | WireGuard control tunnel (encrypted control plane) |
| 4789 | UDP | Agent ↔ Agent | VXLAN overlay (fallback if WireGuard unavailable) |
Agents communicate with each other over the overlay network (WireGuard or VXLAN) for cross-host container traffic. When the control tunnel is active, gRPC traffic (ports 50051/50052) flows inside the WireGuard tunnel and does not need to be exposed directly.
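Before debugging at the Banyan level, it is worth confirming raw TCP reachability of the ports in the table, especially on a fresh install where the control tunnel isn't up yet. A minimal stdlib check (the host and port in the comment are the example addresses used throughout this page):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unresolvable hosts
        return False

# e.g. from an agent node, check the engine's gRPC port:
#   port_open("192.168.1.10", 50051)
```

This only works for the TCP rows (50051, 50052, 5000). The WireGuard and VXLAN ports are UDP, where a connect test proves nothing; verify those via your firewall rules or by watching traffic with `tcpdump`.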