Platform features

Deploy previews, rollouts, and services without giving up host control.

Relay keeps the path from local repo to running container short and inspectable, with differential sync, streamed logs, rollback-ready images, and framework-aware deploys.

Built for

Internal preview environments.

Self-hosted product teams.

Operators who care about where the container actually lives.

Core set

The core features focus on speed, visibility, and operational control.

Differential sync

Upload only changed files using workspace manifests instead of re-sending entire projects on every deploy.

Container rollouts

Build an image, start the next blue or green slot, switch traffic only after readiness passes, then drain the previous slot before cleanup.

Project services

Spin up companion Postgres, Redis, MySQL, or Mongo services from `relay.json`. On Docker lanes they share a Docker network. On Station lanes, Relay injects host aliases so the app resolves service names through `/etc/hosts` without a shared Docker network.
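A minimal `relay.json` services block might look like the following. The exact field names are assumptions here, not Relay's documented schema:

```json
{
  "services": [
    { "type": "postgres", "name": "db", "version": "16" },
    { "type": "redis", "name": "cache" }
  ]
}
```

On a Docker lane the app would reach these as `db` and `cache` over the shared network; on a Station lane the same names resolve through injected `/etc/hosts` aliases.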

Framework detection

Built-in buildpacks cover mainstream app types including Sprint UI. Server-installed plugins still handle extra framework shapes when needed.
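Buildpack detection of this kind typically keys off marker files in the workspace. The sketch below is a guess at the shape of that heuristic, not Relay's actual detection logic:

```go
package main

import "fmt"

// detectFramework is an illustrative file-presence heuristic.
// The marker files and framework labels are assumptions; Relay's
// built-in buildpacks and plugins may use different signals.
func detectFramework(files map[string]bool) string {
	switch {
	case files["next.config.js"] || files["next.config.mjs"]:
		return "nextjs"
	case files["vite.config.ts"] || files["vite.config.js"]:
		return "vite"
	case files["go.mod"]:
		return "go"
	case files["Cargo.toml"]:
		return "rust"
	default:
		// A server-installed plugin would get a chance here.
		return "unknown"
	}
}

func main() {
	fmt.Println(detectFramework(map[string]bool{"go.mod": true}))
}
```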

Preview routing

Run in direct port mode or through Relay's built-in edge plus Caddy proxy stack when your base domain, dashboard host, and public host settings are configured.
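Configuration for the proxy path might look like the fragment below. These variable names are hypothetical stand-ins for the base domain, dashboard host, and public host settings; check the actual Relay configuration reference for the real keys:

```shell
# Hypothetical names -- illustrative only.
RELAY_BASE_DOMAIN=previews.example.com
RELAY_DASHBOARD_HOST=relay.example.com
RELAY_PUBLIC_HOST=203.0.113.10
```

When these settings are absent, deploys fall back to direct port mode.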

Pluggable runtime

Container build, run, stop, network, volume, and log operations all route through a ContainerRuntime interface. Docker is the default backend. Station is available per-app as an alternative runtime.
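The shape of that abstraction can be sketched as a Go interface. The method names and signatures below are assumptions based on the operations listed above, not Relay's actual API:

```go
package main

import (
	"context"
	"fmt"
	"io"
)

// ContainerRuntime sketches the interface described above; the
// real Relay interface likely differs in names and signatures.
type ContainerRuntime interface {
	Build(ctx context.Context, dir, image string) error
	Run(ctx context.Context, image, name string) (id string, err error)
	Stop(ctx context.Context, id string) error
	Logs(ctx context.Context, id string) (io.ReadCloser, error)
}

// fakeRuntime stands in for the Docker or Station backends: the
// deploy path depends only on the interface, never the backend.
type fakeRuntime struct{}

func (fakeRuntime) Build(ctx context.Context, dir, image string) error { return nil }
func (fakeRuntime) Run(ctx context.Context, image, name string) (string, error) {
	return "ctr-" + name, nil
}
func (fakeRuntime) Stop(ctx context.Context, id string) error                  { return nil }
func (fakeRuntime) Logs(ctx context.Context, id string) (io.ReadCloser, error) { return nil, nil }

func deploy(rt ContainerRuntime, image, slot string) (string, error) {
	if err := rt.Build(context.Background(), ".", image); err != nil {
		return "", err
	}
	return rt.Run(context.Background(), image, slot)
}

func main() {
	id, _ := deploy(fakeRuntime{}, "app:v2", "app-blue")
	fmt.Println(id)
}
```

Swapping Docker for Station then means providing a second implementation of the same interface, selected per app.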

Operational controls

Restart, stop, stream logs, inspect deploy history, and manage app config from the same control surface.

Traffic shape

Relay ships blue-green slot swaps. It does not ship weighted canary routing.

The rollout path in relayd starts the next slot, waits for readiness, flips traffic to that slot, and drains the previous slot for a configurable window. In edge proxy mode, traffic handling can be set to either edge or session mode.
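The sequence can be sketched as follows. The helper names and the alternation logic are illustrative assumptions, not relayd's implementation:

```go
package main

import (
	"fmt"
	"time"
)

// rollout sketches the blue-green sequence: start the candidate
// slot, gate on readiness, flip traffic, then drain the old slot.
// The ready callback and drain duration stand in for the real
// readiness probe and RELAY_ROLLOUT_DRAIN_SECONDS window.
func rollout(active string, ready func(slot string) bool, drain time.Duration) (string, error) {
	next := "green"
	if active == "green" {
		next = "blue"
	}
	fmt.Println("starting slot", next)
	if !ready(next) {
		// Readiness gate failed: traffic never moves.
		return active, fmt.Errorf("slot %s never became ready", next)
	}
	fmt.Println("switching traffic to", next)
	// Keep the previous slot alive for the drain window
	// before it is cleaned up.
	time.Sleep(drain)
	fmt.Println("cleaning up slot", active)
	return next, nil
}

func main() {
	alwaysReady := func(string) bool { return true }
	next, _ := rollout("blue", alwaysReady, 10*time.Millisecond)
	fmt.Println("active slot:", next)
}
```

Note that a failed readiness check leaves the old slot active, which is what makes the scheme rollback-friendly.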

Blue-green slots

App containers are named per slot and alternate between blue and green on each successful rollout.

Readiness gate

Traffic only moves after the candidate slot is reachable on its service port.

Drain window

The previous slot is kept alive for `RELAY_ROLLOUT_DRAIN_SECONDS` before cleanup.

Edge traffic mode

`traffic_mode=edge` sends all clients to the active slot immediately after the switch.

Session traffic mode

`traffic_mode=session` sets a cookie and can keep a client pinned to either the active or standby slot during the drain window.

Not weighted canary

There is no percentage-based split, weighted routing, or automatic progressive rollout controller in the current implementation.

Operator framing

Use blue-green wording for the current Relay rollout system.

Use session-pinned standby checks when you need to verify the new slot before the old one drains away.

Avoid calling it canary unless you also explain that there is no weighted traffic split today.

Runtime support is now broader, but still explicit.

Next.js
Vite
Expo web
Sprint UI
Node
Go
.NET
Flask
FastAPI
Java
Rust
C / C++
WASM static

Production posture matters more than feature count.

Token auth grants full deploy access; treat every token as a production credential.

Browser access should be gated by explicit `RELAY_CORS_ORIGINS` values.

Plugin install and remove are disabled by default and require a deliberate server flag.

That is the right shape for a tool that builds and runs Docker workloads on the host.