relayd agent

The Go agent owns the real deploy boundary: auth, state, workspaces, builds, and container swaps.

main.go already exposes a fairly complete control plane. It handles token and cookie auth, Unix socket access, sync sessions, deploy history, services, plugins, webhook-driven deploys, and the persistent runtime state behind the dashboard.

Common env
RELAY_ADDR=:8080
RELAY_DATA_DIR=./data
RELAY_SOCKET=./data/relay.sock
RELAY_CORS_ORIGINS=https://dashboard.example.com
RELAY_ENABLE_PLUGIN_MUTATIONS=false
RELAY_BASE_DOMAIN=preview.example.com
RELAY_DASHBOARD_HOST=admin.preview.example.com
Environment model

Most server knobs are about exposure, persistence, and rollout policy.

The agent stays fairly small on purpose. It uses environment variables for listen addresses, auth behavior, preview host generation, dashboard host routing, upload quotas, rollout timing, and language image defaults.

Core process settings

RELAY_ADDR controls the TCP listener, RELAY_DATA_DIR controls the persistent state root, and RELAY_SOCKET controls the local Unix socket path.

Version endpoint

GET /api/version reports relayd build metadata and station runtime version details so operators can spot stale binaries quickly.

Auth and browser policy

RELAY_TOKEN seeds API auth, RELAY_CORS_ORIGINS gates browser origins, and the dashboard stores the token in an HttpOnly relay_session cookie.

Mutation and webhook policy

RELAY_ENABLE_PLUGIN_MUTATIONS stays false by default, while RELAY_GITHUB_WEBHOOK_SECRET acts as the global webhook fallback when no per-app secret matches.

Preview and rollout behavior

RELAY_BASE_DOMAIN helps derive public preview hosts, RELAY_DASHBOARD_HOST gives the admin its own hostname behind the global proxy, and rollout timing is controlled by RELAY_ROLLOUT_READY_TIMEOUT_SECONDS and RELAY_ROLLOUT_DRAIN_SECONDS.

Upload limits

RELAY_MAX_UPLOAD_BYTES caps the size of any single file accepted by the sync protocol. Unset by default; when unset, relayd enforces no cap of its own, so the effective limit is whatever the OS allows.

Auth surface

The agent accepts header auth, cookie auth, and local socket auth.

The request path is intentionally explicit. relayd checks Authorization: Bearer first, then X-Relay-Token, then the dashboard cookie, and finally a token query parameter for streamed logs.

Bearer token

Authorization: Bearer <token> is accepted on the HTTP API.

Header token

X-Relay-Token is what relay.js sends for HTTP transport.

Dashboard cookie

POST /api/auth/session sets an HttpOnly relay_session cookie for the browser UI.

Socket auth

Requests over relay.sock are treated as local auth and rely on socket file permissions instead of a token.

Same-origin checks

State-changing browser requests require same-origin unless the origin is explicitly allowed by RELAY_CORS_ORIGINS.

Allowed headers

CORS allow headers include Authorization, Content-Type, and X-Relay-Token.

Auth endpoints
GET    /api/auth/session
POST   /api/auth/session
DELETE /api/auth/session

GET    /api/version

Authorization: Bearer <token>
X-Relay-Token: <token>
Operational shape

The data directory is still the center of gravity.

Relay keeps deploy history and runtime state boring on purpose. If you need to answer what happened, you can usually do it with files in data/ and a SQLite shell.

relay.db

Deploy records, app_state, sync sessions, app secrets, and project_services.

token.txt

Persisted API token when RELAY_TOKEN is not supplied explicitly.

logs/

Per-deploy log files that back relay logs and the dashboard.

workspaces/

Per app/env/branch repo and staging directories used by the sync protocol.

plugins/buildpacks/

Installed JSON buildpack plugins loaded ahead of built-in buildpacks.

relay.sock

Optional local API socket that mirrors the API without serving the UI.

Data layout
data/
  relay.db
  token.txt
  logs/
  workspaces/
    <app>__<env>__<branch>/
      repo/
      staging/
  plugins/
    buildpacks/
  relay.sock
Control surface

The server already exposes more than deploy and logs.

Beyond deploy and logs, main.go exposes config and control APIs, including companions, events, project inventory, and webhook-driven deploys.

/api/deploys

GET /api/deploys lists all deploy records. GET /api/deploys/<id> returns a single record including status, started_at, ended_at, and preview_url. POST /api/deploys/rollback queues a rollback to the previous image.

/api/apps/start | stop | restart

POST each endpoint with app, env, and branch to directly control the running container without triggering a new build.

/api/apps/config

Stores repo_url, mode, traffic_mode, host_port, service_port, public_host, engine, and webhook_secret for one app slot. engine accepts "docker" (default) or "vessel".

/api/server/config

Stores server-level routing settings such as base_domain and dashboard_host so the global proxy can route app hosts and the Relay admin host separately.

/api/apps/companions

Lists, creates, and deletes managed companion services. /api/apps/companions/restart restarts a named companion in place.

/api/plugins/buildpacks

GET lists installed buildpack plugins. POST installs a new one (requires RELAY_ENABLE_PLUGIN_MUTATIONS=true). DELETE /api/plugins/buildpacks/<name> removes it.

/api/events and /api/projects

Provide the dashboard with live project and deploy state via SSE plus a current inventory endpoint.

/api/webhooks/github

Matches repo_url against app_state rows, verifies GitHub signatures, and queues the same deploy path the CLI uses.

App config payload
{
  "app": "demo",
  "env": "preview",
  "branch": "main",
  "repo_url": "https://github.com/org/repo.git",
  "mode": "traefik",
  "traffic_mode": "edge",
  "host_port": 3005,
  "service_port": 3000,
  "public_host": "demo-main.preview.example.com",
  "engine": "docker",
  "webhook_secret": "super-secret"
}
Build and runtime image overrides
# Node
RELAY_NODE_IMAGE=node:22
RELAY_NODE_RUN_IMAGE=node:22-slim

# Go
RELAY_GO_IMAGE=golang:1.22
RELAY_GO_RUN_IMAGE=debian:bookworm-slim

# Python
RELAY_PY_IMAGE=python:3.12
RELAY_PY_RUN_IMAGE=python:3.12-slim

# Java
RELAY_JAVA_BUILD_IMAGE=eclipse-temurin:21-jdk
RELAY_JAVA_RUN_IMAGE=eclipse-temurin:21-jre

# .NET
RELAY_DOTNET_SDK_IMAGE=mcr.microsoft.com/dotnet/sdk:8.0
RELAY_DOTNET_ASPNET_IMAGE=mcr.microsoft.com/dotnet/aspnet:8.0

# Rust
RELAY_RUST_IMAGE=rust:1.78
RELAY_RUST_RUN_IMAGE=debian:bookworm-slim

# C/C++
RELAY_CC_IMAGE=gcc:13
RELAY_CC_RUN_IMAGE=debian:bookworm-slim

# Static (Nginx)
RELAY_NGINX_IMAGE=nginx:alpine
Runtime engine

Container builds and operations route through a pluggable backend.

ContainerRuntime covers 14 operations spanning build, run, remove, inspect, exec, network, volume, image, and log concerns. Docker is the default; Vessel (documented below as the station runtime) is available as a per-app alternative, and both backends can run on the same server at the same time.

ContainerRuntime interface

Covers RunDetached, Remove, IsRunning, ContainerIP, PublishedPort, Exec, NetworkConnect, EnsureNetwork, RemoveNetwork, RemoveVolume, Build, RemoveImage, ListImages, and LogStream.

DockerRuntime (default)

Implements all interface methods via the Docker CLI. DOCKER_BUILDKIT=1 is set for builds. relayd starts with DockerRuntime unless a per-app lane is switched to another engine.

VesselRuntime (vessel)

Windows-native container backend. Set engine to "vessel" in app config. Supports exec, companion services with stable bridge IPs and host-alias routing, and managed named volumes. Other lanes stay on Docker.

Station runtime (engine=vessel)

Relay docs call this runtime "station". In app config, engine still uses "vessel" for backward compatibility.

Station constraints

Station lanes are forced to port mode and edge traffic. Blue-green slots are not supported. Exec, companion services, named volumes, and /etc/hosts-based service name resolution all work.

Engine config
POST /api/apps/config
{
  "engine": "docker"  // default
}

POST /api/apps/config
{
  "engine": "vessel"  // Windows-native
}

// vessel enforces
// mode:         "port"
// traffic_mode: "edge"
// no blue-green slots
// companions + exec + volumes: ok