
Multi-Agent

This is where Banyan earns its keep. Your banyan.yaml doesn’t change — you add more servers, and Banyan distributes your containers across them.

Architecture

graph TD
    Engine[fa:fa-server banyan-engine] -->|gRPC :50051| W1
    Engine -->|gRPC :50051| W2

    subgraph W1[Agent 1]
        Agent1[fa:fa-cube banyan-agent]
        C1{{fa:fa-box containers}}
    end

    subgraph W2[Agent 2]
        Agent2[fa:fa-cube banyan-agent]
        C2{{fa:fa-box containers}}
    end

The Engine orchestrates. Agents run containers. All communication happens over gRPC with public key authentication.

Prerequisites

Install the appropriate binaries on each server. See Installation.

  • Engine node: banyan-engine, banyan-cli, etcd (managed automatically by default)
  • Agent nodes: banyan-agent, containerd, nerdctl, wireguard-tools
  • Deploy machine: banyan-cli (can be the engine node or any other machine)
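Before starting, it can help to confirm the required binaries are on the PATH of each node. A minimal sketch; the `check_bins` helper is illustrative, not part of Banyan:

```shell
# Illustrative helper: report which of the required binaries are present.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "ok: $bin"
    else
      echo "missing: $bin"
    fi
  done
}

# On an agent node, the prerequisites list above translates to:
check_bins banyan-agent containerd nerdctl wg
```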

1. Start the Engine

On your Engine server (e.g., 192.168.1.10):

Terminal window
sudo banyan-engine init
sudo systemctl enable --now banyan-engine

During init, Banyan generates a WireGuard keypair for the engine and creates the whitelisted keys directory at /etc/banyan/whitelisted-keys/. The wizard also asks for:

  • Etcd setup — choose Managed (recommended) or External if you have your own etcd cluster.

The engine’s public key is displayed during init. Copy it — agents and CLI clients need it to set up encrypted control tunnels.

The Engine starts a gRPC server on port 50051 by default. To connect from another machine, configure the CLI and whitelist its key:

Terminal window
sudo banyan-cli init
# The wizard asks for: engine host and gRPC port
# It generates a WireGuard keypair and displays the public key
# Copy the CLI's public key to the engine
sudo banyan-engine add-client --name deploy-machine --pubkey '<cli-public-key>'

Verify the connection:

Terminal window
banyan-cli engine
Engine
==================================================
Status: running
Uptime: 2m
CPU: 1.5% (4 cores)
Memory: 0.2GB / 4.0GB
Disk: 8.0GB / 50.0GB
Cluster Summary
--------------------------------------------------
Agents: 0/0 connected
Deployments: 0/0 running
Containers: 0/0 healthy
Tasks: 0 completed, 0 failed
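For scripting or monitoring, the status line can be pulled out of that output with standard tools. A minimal sketch, assuming the output format shown above; the `engine_status` helper is illustrative:

```shell
# Illustrative helper: extract the Status line from `banyan-cli engine`
# output, assuming the format shown above.
engine_status() {
  awk -F': ' '/^Status:/ {print $2}'
}

# Example usage (not run here):
# banyan-cli engine | engine_status
```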

2. Add Agents

On Agent 1 (192.168.1.11):

Terminal window
sudo banyan-agent init
sudo systemctl enable --now banyan-agent

The init wizard asks for:

  • Engine host — IP or hostname of the engine server (e.g., 192.168.1.10).
  • Engine gRPC port — default 50051.
  • Node name — unique name for this agent (default: hostname).
  • Engine WireGuard public key — the engine’s public key from banyan-engine init (optional, enables encrypted control tunnel).

During init, Banyan generates a WireGuard keypair and displays the agent’s public key. Copy this key to the engine:

Terminal window
# On the engine machine
sudo banyan-engine add-client --name worker-1 --pubkey '<worker-1-public-key>'

On Agent 2 (192.168.1.12):

Terminal window
sudo banyan-agent init
sudo systemctl enable --now banyan-agent

Each Agent connects to the Engine via gRPC, registers, and starts a heartbeat.
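If you script cluster bring-up, a small helper can poll until a new agent reports connected. This sketch uses only the `banyan-cli agent` command from this guide; the `wait_for_agent` helper itself is illustrative, not part of Banyan:

```shell
# Illustrative helper: poll `banyan-cli agent` until the named agent
# shows up as connected, or give up after roughly a minute.
wait_for_agent() {
  name="$1"
  for i in $(seq 1 30); do
    if banyan-cli agent | grep -q "^$name  *connected"; then
      echo "agent $name connected"
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for $name" >&2
  return 1
}
```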

3. Verify the cluster

Terminal window
banyan-cli agent
NAME      STATUS     CONTAINERS  CPU   MEM   TAGS
---------------------------------------------------------------------------
worker-1  connected  0           1.2%  5.0%
worker-2  connected  0           0.8%  4.5%

4. Deploy

The same manifest from the Quickstart works here without changes. Banyan distributes replicas across agents automatically.

name: my-app
services:
  web:
    build: ./web
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    build: ./api
    deploy:
      replicas: 3
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db.my-app.internal
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=banyan
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
Terminal window
banyan-cli up -f banyan.yaml

Banyan distributes 5 containers across 2 agents based on available resources — each task goes to the agent with the most available memory:

Agent 1       Agent 2
my-app-web-0  my-app-api-0
my-app-api-1  my-app-api-2
my-app-db-0

The manifest didn’t change. You went from one server to two — same YAML, more capacity. Banyan tracks CPU, memory, and disk on every agent and makes scheduling decisions accordingly. No manual pinning needed for most workloads.
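The placement rule above can be sketched in plain shell. The agent names and free-memory figures here are made up for illustration; Banyan tracks the real numbers internally:

```shell
# Illustrative sketch of the "most available memory" placement rule.
# stdin: one "<agent-name> <free-memory-MB>" pair per line.
pick_agent() {
  sort -k2 -n -r | head -n1 | awk '{print $1}'
}

printf 'worker-1 2048\nworker-2 3072\n' | pick_agent  # → worker-2
```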

5. Check containers on agents

From the CLI:

Terminal window
banyan-cli container

Or SSH into an agent and list running containers directly:

Terminal window
sudo nerdctl ps

Deploying from a remote machine

You don’t need to run up from the Engine node. Any machine with banyan-cli can deploy as long as it can reach the Engine’s gRPC port:

Terminal window
# First configure the CLI (run once)
sudo banyan-cli init
# Enter the engine host and port — generates a keypair and displays the public key
# Copy the CLI's public key to the engine
sudo banyan-engine add-client --name deploy-machine --pubkey '<cli-public-key>'
# Deploy from anywhere
banyan-cli up -f banyan.yaml

After a machine reboot, run sudo banyan-cli login to re-establish the WireGuard tunnel. No prompts — it reads the saved config.

Adding more agents

  1. Install banyan-agent, containerd, nerdctl, and wireguard-tools on the new server.
  2. Run sudo banyan-agent init (enter engine host, port, and node name).
  3. Whitelist the agent’s public key: sudo banyan-engine add-client --name <name> --pubkey <key>
  4. Run sudo systemctl enable --now banyan-agent

The new agent appears in banyan-cli agent within seconds. Future deployments include it automatically.
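For a batch of new servers, the steps above are easy to script. This sketch only prints the commands for each host rather than running them over SSH; the host names are placeholders:

```shell
# Illustrative dry run: print the enrollment steps from the list above
# for each new host, instead of executing them.
print_enroll_steps() {
  for host in "$@"; do
    echo "ssh $host 'sudo banyan-agent init'"
    echo "sudo banyan-engine add-client --name $host --pubkey '<$host-public-key>'"
    echo "ssh $host 'sudo systemctl enable --now banyan-agent'"
  done
}

print_enroll_steps worker-3 worker-4
```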

That’s the point — scaling is adding a server, not editing a manifest.

High availability

By default, the engine is a single process. If it goes down, agents keep running containers but no new deployments can be scheduled until it’s back.

For clusters where control plane downtime is not acceptable, you can run multiple engines. See High Availability.

Firewall requirements

Port   Protocol  Direction            Purpose
50051  TCP       Agents/CLI → Engine  gRPC (all control plane communication)
50052  TCP       Engine → Agents      gRPC (log streaming)
5000   TCP       Agents → Engine      OCI registry (image distribution)
51820  UDP       Agent ↔ Agent        WireGuard overlay (encrypted container traffic)
51821  UDP       Agents/CLI → Engine  WireGuard control tunnel (encrypted control plane)
4789   UDP       Agent ↔ Agent        VXLAN overlay (fallback if WireGuard unavailable)

Agents communicate with each other over the overlay network (WireGuard or VXLAN) for cross-host container traffic. When the control tunnel is active, gRPC traffic (ports 50051/50052) flows inside the WireGuard tunnel and does not need to be exposed directly.
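As one concrete example, the table translates into ufw rules like the following. ufw is an assumption here (any firewall works), and this sketch prints the commands instead of applying them. Adjust source restrictions to your network, and note that 50051/50052 may not need direct exposure when the control tunnel is active:

```shell
# Illustrative dry run: print a ufw rule for each port in the table above.
print_fw_rules() {
  for rule in 50051/tcp 50052/tcp 5000/tcp 51820/udp 51821/udp 4789/udp; do
    echo "sudo ufw allow $rule"
  done
}

print_fw_rules
```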