Core process settings
main.go already exposes a fairly complete control plane. It handles token and cookie auth, Unix socket access, sync sessions, deploy history, services, plugins, webhook-driven deploys, and the persistent runtime state behind the dashboard.
RELAY_ADDR=:8080
RELAY_DATA_DIR=./data
RELAY_SOCKET=./data/relay.sock
RELAY_CORS_ORIGINS=https://dashboard.example.com
RELAY_ENABLE_PLUGIN_MUTATIONS=false
RELAY_BASE_DOMAIN=preview.example.com
RELAY_DASHBOARD_HOST=admin.preview.example.com

The agent stays fairly small on purpose. It uses environment variables for listen addresses, auth behavior, preview host generation, dashboard host routing, upload quotas, rollout timing, and language image defaults.
RELAY_ADDR controls the TCP listener, RELAY_DATA_DIR controls the persistent state root, and RELAY_SOCKET controls the local Unix socket path.
GET /api/version reports relayd build metadata and station runtime version details so operators can spot stale binaries quickly.
RELAY_TOKEN seeds API auth, RELAY_CORS_ORIGINS gates browser origins, and the dashboard stores the token in an HttpOnly relay_session cookie.
RELAY_ENABLE_PLUGIN_MUTATIONS stays false by default, while RELAY_GITHUB_WEBHOOK_SECRET acts as the global webhook fallback when no per-app secret matches.
RELAY_BASE_DOMAIN helps derive public preview hosts, RELAY_DASHBOARD_HOST gives the admin its own hostname behind the global proxy, and rollout timing is controlled by RELAY_ROLLOUT_READY_TIMEOUT_SECONDS and RELAY_ROLLOUT_DRAIN_SECONDS.
RELAY_MAX_UPLOAD_BYTES caps the size of any single file accepted by the sync protocol. Unset by default, so the limit is whatever the OS allows.
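The preview-host derivation can be sketched in shell. The `<app>-<branch>.<base_domain>` naming scheme here is an assumption inferred from the sample values in these docs (e.g. `demo-main.preview.example.com`), not a confirmed implementation detail:

```shell
# Sketch: derive a public preview host from RELAY_BASE_DOMAIN.
# The app-branch.base_domain scheme is inferred from the docs' examples.
RELAY_BASE_DOMAIN=preview.example.com
app=demo
branch=main
public_host="${app}-${branch}.${RELAY_BASE_DOMAIN}"
echo "$public_host"   # demo-main.preview.example.com
```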
The request path is intentionally explicit. relayd checks Authorization: Bearer first, then X-Relay-Token, then the dashboard cookie, and finally a token query parameter for streamed logs.
Authorization: Bearer <token> is accepted on the HTTP API.
X-Relay-Token is what relay.js sends for HTTP transport.
POST /api/auth/session sets an HttpOnly relay_session cookie for the browser UI.
Requests over relay.sock are treated as local auth and rely on socket file permissions instead of a token.
State-changing browser requests require same-origin unless the origin is explicitly allowed by RELAY_CORS_ORIGINS.
CORS allow headers include Authorization, Content-Type, and X-Relay-Token.
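The credential precedence above can be modeled as "first non-empty source wins." The `resolve_token` function below is a local illustration of that ordering, not a relayd function:

```shell
# Sketch of the documented precedence: Bearer header, then X-Relay-Token,
# then the relay_session cookie, then a ?token= query parameter.
# resolve_token is illustrative only; it is not part of relayd.
resolve_token() {
  bearer="$1"; header="$2"; cookie="$3"; query="$4"
  for t in "$bearer" "$header" "$cookie" "$query"; do
    if [ -n "$t" ]; then
      echo "$t"
      return 0
    fi
  done
  return 1
}

resolve_token "" "tok-from-header" "tok-from-cookie" ""   # prints tok-from-header
```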
GET /api/auth/session
POST /api/auth/session
DELETE /api/auth/session
GET /api/version
Authorization: Bearer <token>
X-Relay-Token: <token>

Relay keeps deploy history and runtime state boring on purpose. If you need to answer what happened, you can usually do it with files in data/ and a SQLite shell.
Deploy records, app_state, sync sessions, app secrets, and project_services.
Persisted API token when RELAY_TOKEN is not supplied explicitly.
Per-deploy log files that back relay logs and the dashboard.
Per app/env/branch repo and staging directories used by the sync protocol.
Installed JSON buildpack plugins loaded ahead of built-in buildpacks.
Optional local API socket that mirrors the API without serving the UI.
data/
  relay.db
  token.txt
  logs/
  workspaces/
    <app>__<env>__<branch>/
      repo/
      staging/
  plugins/
    buildpacks/
  relay.sock

The docs now call out the config and control APIs already present in main.go, including companions, events, project inventory, and webhook-driven deploys.
GET /api/deploys lists all deploy records. GET /api/deploys/<id> returns a single record including status, started_at, ended_at, and preview_url. POST /api/deploys/rollback queues a rollback to the previous image.
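The "answer what happened with a SQLite shell" workflow can be sketched directly. The `deploys` table name and its columns below are assumptions based on the deploy-record fields these docs mention (status, started_at, ended_at, preview_url); the sketch builds a throwaway database so the query has something to run against:

```shell
# Assumed schema, modeled on the deploy-record fields in these docs.
# Against a real install you would point sqlite3 at data/relay.db instead.
DB="$(mktemp)"
sqlite3 "$DB" <<'SQL'
CREATE TABLE deploys (
  id INTEGER PRIMARY KEY,
  app TEXT,
  status TEXT,
  started_at TEXT,
  ended_at TEXT,
  preview_url TEXT
);
INSERT INTO deploys (app, status, started_at)
VALUES ('demo', 'succeeded', '2024-01-01T00:00:00Z');
SQL

sqlite3 "$DB" "SELECT app, status FROM deploys ORDER BY id DESC;"
```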
POST each endpoint with app, env, and branch to directly control the running container without triggering a new build.
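Using the field names from the sentence above, a request body for these lifecycle endpoints would look like this (values illustrative):

```
{
  "app": "demo",
  "env": "preview",
  "branch": "main"
}
```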
Stores repo_url, mode, traffic_mode, host_port, service_port, public_host, engine, and webhook_secret for one app slot. engine accepts "docker" (the default) or "vessel" (the station runtime).
Stores server-level routing settings such as base_domain and dashboard_host so the global proxy can route app hosts and the Relay admin host separately.
Lists, creates, and deletes managed companion services. /api/apps/companions/restart restarts a named companion in place.
GET lists installed buildpack plugins. POST installs a new one (requires RELAY_ENABLE_PLUGIN_MUTATIONS=true). DELETE /api/plugins/buildpacks/<name> removes it.
Provide the dashboard with live project and deploy state via SSE plus a current inventory endpoint.
Matches repo_url against app_state rows, verifies GitHub signatures, and queues the same deploy path the CLI uses.
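The signature check can be reproduced locally with openssl as a sanity check against a configured webhook secret. `X-Hub-Signature-256` is GitHub's standard webhook signature header; the payload and secret below are made up:

```shell
# Compute the value GitHub would send in X-Hub-Signature-256:
# "sha256=" + hex(HMAC-SHA256(secret, raw request body)).
secret='super-secret'
payload='{"ref":"refs/heads/main"}'
sig="sha256=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $2}')"
echo "$sig"
```

Comparing this value against the received header (with a constant-time comparison on the server side) is the standard way to verify the delivery.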
{
"app": "demo",
"env": "preview",
"branch": "main",
"repo_url": "https://github.com/org/repo.git",
"mode": "traefik",
"traffic_mode": "edge",
"host_port": 3005,
"service_port": 3000,
"public_host": "demo-main.preview.example.com",
"engine": "docker",
"webhook_secret": "super-secret"
}

# Node
RELAY_NODE_IMAGE=node:22
RELAY_NODE_RUN_IMAGE=node:22-slim
# Go
RELAY_GO_IMAGE=golang:1.22
RELAY_GO_RUN_IMAGE=debian:bookworm-slim
# Python
RELAY_PY_IMAGE=python:3.12
RELAY_PY_RUN_IMAGE=python:3.12-slim
# Java
RELAY_JAVA_BUILD_IMAGE=eclipse-temurin:21-jdk
RELAY_JAVA_RUN_IMAGE=eclipse-temurin:21-jre
# .NET
RELAY_DOTNET_SDK_IMAGE=mcr.microsoft.com/dotnet/sdk:8.0
RELAY_DOTNET_ASPNET_IMAGE=mcr.microsoft.com/dotnet/aspnet:8.0
# Rust
RELAY_RUST_IMAGE=rust:1.78
RELAY_RUST_RUN_IMAGE=debian:bookworm-slim
# C/C++
RELAY_CC_IMAGE=gcc:13
RELAY_CC_RUN_IMAGE=debian:bookworm-slim
# Static (Nginx)
RELAY_NGINX_IMAGE=nginx:alpine

ContainerRuntime covers 14 operations across build, run, remove, inspect, exec, network, volume, image, and log management. Docker is the default; Vessel is available as a per-app alternative, and both backends can run on the same server at the same time.
Covers RunDetached, Remove, IsRunning, ContainerIP, PublishedPort, Exec, NetworkConnect, EnsureNetwork, RemoveNetwork, RemoveVolume, Build, RemoveImage, ListImages, and LogStream.
Implements all interface methods via Docker CLI. DOCKER_BUILDKIT=1 is set for builds. relayd starts with DockerRuntime unless a per-app lane is switched.
Windows-native container backend. Set engine to "vessel" in app config. Supports exec, companion services with stable bridge IPs and host-alias routing, and managed named volumes. Other lanes stay on Docker.
Relay docs call this runtime "station". In app config, engine still uses "vessel" for backward compatibility.
Station lanes are forced to port mode and edge traffic. Blue-green slots are not supported. Exec, companion services, named volumes, and /etc/hosts-based service name resolution all work.
POST /api/apps/config
{
"engine": "docker" // default
}
POST /api/apps/config
{
"engine": "vessel" // Windows-native
}
// vessel enforces
// mode: "port"
// traffic_mode: "edge"
// no blue-green slots
// companions + exec + volumes: ok