API and Realtime
HTTP REST, public gRPC, WebSocket, and room realtime events share the same service and permission model.
SyncTV is a single-binary service that combines HTTP APIs, public gRPC, management gRPC, realtime room state, media providers, media proxying, livestreaming, and cluster coordination. Durable state lives in PostgreSQL. Shared ephemeral state and distributed coordination use Redis. Local runtime files live under data_dir.
Authentication
Password, passkey/WebAuthn, email codes, OAuth2, user-level 2FA, and JWT tokens make up the login layer.
Media
Providers resolve external media, the proxy performs controlled forwarding, and the slice cache stores only Range slices.
Horizontal Scaling
Multi-node deployments use Redis pub/sub, Redis Streams, discovery, and leader election for coordination.
| Surface | Default port or path | Purpose | Production guidance |
|---|---|---|---|
| HTTP REST | server.port=8080 | Client API, health checks, OpenAPI UI | Expose through a reverse proxy or Ingress |
| Public gRPC | server.port=8080 | gRPC API for clients or SDKs | Use a separate Kubernetes Service and Ingress |
| WebSocket | server.port=8080 | Room realtime events, playback sync, chat | Configure connection limits and shutdown drain |
| Management gRPC | Unix socket or management.port=50052 | CLI, administration, operational control plane | Prefer Unix socket; TCP requires a token |
| Metrics | metrics.port=9090 | Prometheus metrics | Keep private and authenticated |
| RTMP | livestream.rtmp_port=1935 | Livestream publishing | Expose only when livestreaming is used |
| STUN UDP | webrtc.stun_port=3478 | WebRTC NAT assistance | Expose only when the built-in STUN server is used |
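The listener settings above can be sketched in one place, assuming a YAML configuration file where the dotted keys map to nested sections (the file format and nesting are assumptions; the keys come from the table):

```yaml
server:
  port: 8080        # HTTP REST, public gRPC, and WebSocket share this port
management:
  port: 50052       # management gRPC over TCP; prefer the Unix socket in production
metrics:
  port: 9090        # Prometheus metrics; keep private and authenticated
livestream:
  rtmp_port: 1935   # expose only when livestreaming is used
webrtc:
  stun_port: 3478   # expose only when the built-in STUN server is used
```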
HTTP REST and public gRPC share the main port, but Kubernetes deployments should still use separate Services and Ingresses. This lets the gRPC Ingress set nginx.ingress.kubernetes.io/backend-protocol: "GRPC" independently.
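For example, a dedicated gRPC Ingress of roughly this shape (name and host are illustrative) carries the backend-protocol annotation without touching the REST Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: synctv-grpc            # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: grpc.example.com   # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: synctv-grpc   # a Service dedicated to the gRPC port
                port:
                  number: 8080
```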
PostgreSQL is the durable system of record. Users, rooms, permissions, provider instances, user preferences, audit data, and business records live there.
Production requirements:
- Startup runs embedded SQLx migrations automatically.
- Maintain backup and restore procedures.
- Keep connection pool sizing within database capacity.
- Treat database migrations as rollback-sensitive.
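Connection settings are not documented in this section; purely as a hypothetical illustration of pool sizing (both key names are assumptions):

```yaml
database:                                        # hypothetical section name
  url: postgres://synctv:secret@db:5432/synctv   # hypothetical key; illustrative credentials
  max_connections: 20                            # hypothetical key; keep within database capacity
```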
Redis stores shared ephemeral state and distributed coordination data. Single-node deployments can run without Redis, but production and multi-replica deployments should configure it.
Redis backs rate limiting and brute-force protection, the token blacklist, OAuth2 state, WebAuthn challenges, short-lived email verification state, the L2 cache, cluster pub/sub, node registration, and event catch-up.
Redis rarely needs long-term backups, but restarting or clearing it disrupts in-flight authentication flows, rate-limit windows, and short-term cluster state.
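A hedged sketch (the connection key name is an assumption; redis.key_prefix is the documented setting referenced under cluster mode below):

```yaml
redis:
  url: redis://redis:6379   # hypothetical key name for the connection string
  key_prefix: synctv        # documented setting; relevant for multi-replica deployments
```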
data_dir is the root directory for runtime-owned local files.
Common contents include the management Unix socket, file logs, HLS files, and file-backed proxy slice cache.
Static input files such as *_file secrets and TLS certificates are not rebased under data_dir.
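As a sketch (the path is illustrative; data_dir is the documented setting):

```yaml
data_dir: /var/lib/synctv   # runtime-owned: management socket, file logs, HLS files, slice cache
```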
The media proxy performs controlled forwarding of request headers such as User-Agent, Referer, Range, or authentication headers.
Cluster mode is for multi-replica deployments. When cluster.enabled=true, Redis and server.cluster_secret are required.
Key settings:
| Setting | Purpose |
|---|---|
| cluster.discovery_mode | Node discovery: redis, static, or k8s_dns |
| cluster.leader_election_mode | Background task leader election: redis or k8s_lease |
| server.advertise_host | Address other nodes use to reach this node |
| server.cluster_secret | Authentication for inter-node gRPC calls |
| cluster.catchup_window_secs | Redis Stream event catch-up window after short disconnects |
For multi-replica deployments, also confirm redis.key_prefix and server.shutdown_drain_timeout_seconds (see the sketch below).
For the full runtime design, discovery modes, leader election, and clustered livestreaming boundaries, see Cluster Configuration.
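Taken together, a minimal multi-replica sketch, assuming a YAML configuration file where the dotted keys nest (the nesting and all values are illustrative; the keys are the documented ones above):

```yaml
server:
  advertise_host: node-1.internal      # address other nodes use to reach this node
  cluster_secret: change-me            # authentication for inter-node gRPC calls
  shutdown_drain_timeout_seconds: 30   # drain window during rolling restarts (value illustrative)
cluster:
  enabled: true                        # requires Redis and server.cluster_secret
  discovery_mode: redis                # redis, static, or k8s_dns
  leader_election_mode: redis          # redis or k8s_lease
  catchup_window_secs: 60              # Redis Stream catch-up after short disconnects (value illustrative)
redis:
  key_prefix: synctv                   # keep consistent across replicas
```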
The livestream path is separate from the on-demand proxy path. RTMP is the publishing entrypoint; HTTP-FLV serves low-latency playback; HLS remuxing produces playlists and segments into the selected memory, file, or oss backend. In multi-replica deployments, the publisher owner is exposed through a shared registry; non-publisher nodes can read HLS segments through the HLS gRPC proxy, or read them directly when using a shared filesystem or OSS backend.
For the RTMP/StreamHub/HTTP-FLV/HLS pipeline, backend selection, and clustered livestreaming failure boundaries, see Livestream Configuration.
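A hedged sketch of the relevant settings (only livestream.rtmp_port is documented above; the backend key name and values are assumptions for illustration):

```yaml
livestream:
  rtmp_port: 1935    # documented publishing entrypoint
  hls_backend: oss   # hypothetical key; the documented backends are memory, file, and oss
```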
Minimal production topology:
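A sketch only, with illustrative image names, ports, credentials, and mount paths: the single binary plus PostgreSQL, with Redis optional but recommended, fronted by a reverse proxy.

```yaml
# docker-compose sketch; everything concrete here is an assumption
services:
  synctv:
    image: synctv/synctv:latest          # hypothetical image name
    ports:
      - "8080:8080"                      # HTTP REST, public gRPC, WebSocket
    volumes:
      - synctv-data:/data                # data_dir mount; container path illustrative
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pg-data:/var/lib/postgresql/data
  redis:
    image: redis:7                       # optional for single-node, recommended in production
volumes:
  synctv-data:
  pg-data:
```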
Kubernetes multi-replica topology:
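A hedged fragment (names, replica count, and configuration mapping are assumptions): several replicas with cluster mode enabled, Redis reachable by every pod, and separate Services and Ingresses for REST and gRPC as described above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synctv
spec:
  replicas: 3
  selector:
    matchLabels: { app: synctv }
  template:
    metadata:
      labels: { app: synctv }
    spec:
      containers:
        - name: synctv
          image: synctv/synctv:latest   # hypothetical image name
          ports:
            - containerPort: 8080       # REST + gRPC + WebSocket
          # cluster.enabled=true, discovery via k8s_dns, and leader election via
          # k8s_lease are set through the config file or environment
          # (the exact mapping is an assumption)
```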