Architecture Overview

SyncTV is a single-binary service that combines HTTP APIs, public gRPC, management gRPC, realtime room state, media providers, media proxying, livestreaming, and cluster coordination. Durable state lives in PostgreSQL. Shared ephemeral state and distributed coordination use Redis. Local runtime files live under data_dir.

SyncTV architecture overview showing clients, ingress, the SyncTV single binary, PostgreSQL, Redis, external providers, and HLS storage.
SyncTV’s main boundaries: one service process hosts business APIs, realtime collaboration, the management plane, media proxying, and livestreaming. Durable state goes to PostgreSQL, and cross-node ephemeral state goes to Redis.

API and Realtime

HTTP REST, public gRPC, WebSocket, and room realtime events share the same service and permission model.

Authentication

Password, passkey/WebAuthn, email codes, OAuth2, user-level 2FA, and JWT tokens make up the login layer.

Media

Providers resolve external media, the proxy performs controlled forwarding, and the slice cache stores only Range slices.

Horizontal Scaling

Multi-node deployments use Redis pub/sub, Redis Streams, discovery, and leader election for coordination.

| Surface | Default port or path | Purpose | Production guidance |
| --- | --- | --- | --- |
| HTTP REST | server.port=8080 | Client API, health checks, OpenAPI UI | Expose through a reverse proxy or Ingress |
| Public gRPC | server.port=8080 | gRPC API for clients or SDKs | Use a separate Kubernetes Service and Ingress |
| WebSocket | server.port=8080 | Room realtime events, playback sync, chat | Configure connection limits and shutdown drain |
| Management gRPC | Unix socket or management.port=50052 | CLI, administration, operational control plane | Prefer Unix socket; TCP requires a token |
| Metrics | metrics.port=9090 | Prometheus metrics | Keep private and authenticated |
| RTMP | livestream.rtmp_port=1935 | Livestream publishing | Expose only when livestreaming is used |
| STUN UDP | webrtc.stun_port=3478 | WebRTC NAT assistance | Expose only when the built-in STUN server is used |
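As a quick reference, the defaults above can be collected into one config sketch. This is a minimal sketch that assumes a YAML config file whose nesting mirrors the dotted setting names; consult the shipped configuration reference for the authoritative layout.

```yaml
# Minimal port-layout sketch. Assumes YAML keys mirror the dotted setting
# names in the table above; verify against the real config reference.
server:
  port: 8080          # HTTP REST, public gRPC, and WebSocket share this port
management:
  port: 50052         # management gRPC over TCP; prefer the Unix socket instead
metrics:
  port: 9090          # Prometheus metrics; keep private and authenticated
livestream:
  rtmp_port: 1935     # expose only when livestreaming is used
webrtc:
  stun_port: 3478     # expose only when the built-in STUN server is used
```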

HTTP REST and public gRPC share the main port, but Kubernetes deployments should still use separate Services and Ingresses. This lets the gRPC Ingress set nginx.ingress.kubernetes.io/backend-protocol: "GRPC" independently.
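For illustration, the gRPC Ingress could look like the sketch below. The Service name and host are placeholders, and gRPC through ingress-nginx typically also requires TLS on the host.

```yaml
# Hypothetical gRPC Ingress for ingress-nginx; names and host are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: synctv-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: synctv-grpc   # separate Service from the HTTP one
                port:
                  number: 8080
```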

PostgreSQL is the durable system of record. Users, rooms, permissions, provider instances, user preferences, audit data, and business records live there.

Production requirements:

  • Startup runs embedded SQLx migrations automatically.
  • Maintain backup and restore procedures.
  • Keep connection pool sizing within database capacity.
  • Treat database migrations as rollback-sensitive.

Request flow:

  1. A client calls SyncTV through HTTP or gRPC.
  2. The API layer performs authentication, rate limiting, permission checks, and request parsing.
  3. Services read and write PostgreSQL, using Redis and L1 caches when configured.
  4. Realtime changes are pushed over WebSocket; in cluster mode, Redis distributes them to other nodes.

Media flow:

  1. A provider resolves a media URL and decides whether the client should fetch directly or through the proxy.
  2. The provider explicitly selects upstream headers such as User-Agent, Referer, Range, or authentication headers.
  3. The proxy layer uses only provider-supplied headers; it does not forward raw client headers on its own.
  4. The slice cache handles only Range-capable upstreams. If an upstream does not support Range, the proxy bypasses caching and does not store full-body responses.

Cluster mode is for multi-replica deployments. When cluster.enabled=true, Redis and server.cluster_secret are required.

Key settings:

| Setting | Purpose |
| --- | --- |
| cluster.discovery_mode | Node discovery: redis, static, or k8s_dns |
| cluster.leader_election_mode | Background task leader election: redis or k8s_lease |
| server.advertise_host | Address other nodes use to reach this node |
| server.cluster_secret | Authentication for inter-node gRPC calls |
| cluster.catchup_window_secs | Redis Stream event catch-up window after short disconnects |
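Taken together, these settings might look like the sketch below, assuming a YAML config whose keys mirror the dotted names above; every value is a placeholder, not a recommendation.

```yaml
# Cluster-mode sketch. Assumes YAML keys mirror the dotted setting names;
# all values are placeholders.
cluster:
  enabled: true
  discovery_mode: k8s_dns          # or: redis, static
  leader_election_mode: k8s_lease  # or: redis
  catchup_window_secs: 30          # placeholder value
server:
  advertise_host: synctv-0.synctv-headless.svc.cluster.local  # placeholder
  cluster_secret: change-me        # required; keep identical on every node
redis:
  key_prefix: synctv               # must be identical across all replicas
```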

For multi-replica deployments, also confirm:

  • All replicas share the same PostgreSQL database.
  • All replicas share the same Redis and redis.key_prefix.
  • HLS cross-node reads can use publisher-node proxying; high-traffic production deployments should use shared filesystem storage or OSS to reduce publisher-node pressure.
  • Kubernetes termination grace period is longer than server.shutdown_drain_timeout_seconds.
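The last point maps directly onto the pod spec. A minimal sketch, assuming server.shutdown_drain_timeout_seconds=30; names, image, and values are placeholders.

```yaml
# Placeholder Deployment: terminationGracePeriodSeconds must exceed
# server.shutdown_drain_timeout_seconds (30 assumed here for illustration).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synctv
spec:
  replicas: 3
  selector:
    matchLabels:
      app: synctv
  template:
    metadata:
      labels:
        app: synctv
    spec:
      terminationGracePeriodSeconds: 45   # > drain timeout of 30 seconds
      containers:
        - name: synctv
          image: synctv/synctv:latest     # placeholder image name
```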

For the full runtime design, discovery modes, leader election, and clustered livestreaming boundaries, see Cluster Configuration.

The livestream path is separate from the on-demand proxy path. RTMP is the publishing entrypoint; HTTP-FLV serves low-latency playback; HLS remuxing writes playlists and segments to the selected memory, file, or oss backend. In multi-replica deployments, the owning publisher node is recorded in a shared registry; non-publisher nodes can read HLS segments through the HLS gRPC proxy, or read them directly when using a shared filesystem or OSS backend.
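For orientation only, backend selection might be configured roughly as below. Apart from livestream.rtmp_port, the key names are hypothetical placeholders, not documented settings.

```yaml
# Illustrative sketch only: rtmp_port is documented above; the hls.* key
# names here are hypothetical placeholders for backend selection.
livestream:
  rtmp_port: 1935
  hls:
    backend: file        # hypothetical: memory | file | oss
    file:
      dir: /data/hls     # on multi-replica, point at a shared filesystem
```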

For the RTMP/StreamHub/HTTP-FLV/HLS pipeline, backend selection, and clustered livestreaming failure boundaries, see Livestream Configuration.

Minimal production topology:

Minimal production topology showing clients reaching SyncTV through a TLS reverse proxy or Ingress, with SyncTV connected to PostgreSQL and Redis.
A minimal production topology needs one SyncTV service entry, PostgreSQL, and Redis. Redis is optional for simple single-node deployments, but recommended in production.
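As a concrete sketch of this topology, a docker-compose layout could look like the following; the image name, credentials, and paths are assumptions, and TLS termination is left to a reverse proxy in front.

```yaml
# Placeholder compose sketch of the minimal topology; image name,
# credentials, and paths are assumptions, not official values.
services:
  synctv:
    image: synctv/synctv:latest      # placeholder image name
    ports:
      - "8080:8080"                  # put a TLS reverse proxy in front
    volumes:
      - ./data:/app/data             # placeholder mount for data_dir
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata:
```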

Kubernetes multi-replica topology:

Kubernetes multi-replica topology showing HTTP Ingress, gRPC Ingress, separate Services, multiple SyncTV pods, PostgreSQL, Redis, HLS backend or publisher-node proxy, and external media providers.
Kubernetes multi-replica deployments should split HTTP and gRPC Services/Ingresses and connect every pod to the same PostgreSQL and Redis backends. HLS can use publisher-node proxying; high-traffic production should use shared filesystem storage or OSS.
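A sketch of the Service split, assuming both protocols share container port 8080 as in the surface table; names and labels are placeholders.

```yaml
# Placeholder Service split: both target the same pods and port 8080;
# each backs its own Ingress (the gRPC one carries the GRPC annotation).
apiVersion: v1
kind: Service
metadata:
  name: synctv-http
spec:
  selector:
    app: synctv
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: synctv-grpc
spec:
  selector:
    app: synctv
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
```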