livestream controls RTMP push/pull, HLS segments, HTTP-FLV connections, livestream memory cache, and livestream file storage.
If you only use on-demand media providers, most values can remain at their defaults.
Livestreaming includes ingest, realtime packet distribution, FLV playback, HLS remuxing, HLS storage, and cross-node publisher registration. It is separate from on-demand media provider/proxy flows: on-demand media mostly resolves external URLs and proxies Range requests, while livestreaming handles data that is being produced in real time.
RTMP ingest
Publishers connect through RTMP. During authentication, the publisher is registered on the local node or shared registry.
StreamHub
Live packets are distributed in-process to FLV sessions, the HLS remuxer, and internal relay/pull logic.
HTTP-FLV
Best for low latency. Clients hold long-lived responses, and write timeouts protect the server from slow readers.
HLS
Best for broad player compatibility and shared storage/CDN-like deployments. It has higher latency but scales well across replicas.
| Protocol | Strength | Cost | Best fit |
|---|---|---|---|
| HTTP-FLV | Low latency and short server path | Many long-lived connections; slow clients need strict timeouts | Interactive livestreaming and room-synchronized viewing |
| HLS | Broad player compatibility; playlist/segment pull model; works with shared storage | Higher latency and segment storage management | Mobile clients, generic players, multi-replica deployments, object storage |
memory is the default. Segments exist only inside the current process. It is the simplest option for single-replica, development, and low-traffic multi-replica deployments.
Limitations: process restarts lose segments; in multi-replica deployments, non-publisher nodes must read playlist/segment data from the publisher node through the HLS gRPC proxy. It works, but it is not the preferred backend for high-traffic production HLS.
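For a single replica or development, a minimal sketch of the memory backend (the values shown are the documented defaults):

```yaml
livestream:
  hls_storage_backend: "memory"   # default; segments live only in the current process
  hls_memory_max_mb: 0            # 0 = use the built-in default (currently 512 MB)
```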
file writes segments to hls_storage_path. Single replicas can use local disk. Multi-replica deployments can use local disk with publisher-node proxying, or mount the path from a filesystem every replica can read and write.
Typical shared-filesystem choices: NFS, RWX PVCs, or CSI shared volumes. Do not mark emptyDir, /tmp, or node-local disks as shared storage.
oss stores segments in S3-compatible object storage. It gives Kubernetes and cross-node deployments an explicit shared-storage boundary.
Typical choices: AWS S3, MinIO, Cloudflare R2, or compatible services. Configure endpoint, bucket, access key, and secret key.
The publisher registry maps room_id/media_id -> node_id/api_address. In cluster mode, the registry uses Redis.
HLS segments can be kept in the memory, file, or oss backend. With file + hls_shared_storage=true or oss, any node can also read segments directly from the shared backend, reducing publisher-node fan-out pressure.
livestream.rtmp_port default: 1935.
OBS push URLs usually look like:
```
rtmp://your-domain:1935/live/<stream-key>
```

Change the port if another RTMP service already uses 1935.
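If 1935 is taken, a sketch of moving ingest to another port (the port number is only an illustration):

```yaml
livestream:
  rtmp_port: 19350   # illustrative; 1935 is the default
```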
livestream.public_rtmp_host default: empty.
Set it when the address returned to streamers must be a public domain or LoadBalancer address instead of an internal Pod or node address:
```yaml
livestream:
  public_rtmp_host: "live.example.com"
```

If empty, SyncTV tries to fall back to server.advertise_host.
livestream.gop_cache_size default: 2.
GOP cache helps new viewers start playback sooner without waiting for the next keyframe.
livestream.gop_cache_max_memory_mb default: 100.
Increase it when the cache is too small for your streams, for example with higher bitrates or longer GOPs; decrease it when memory is limited.
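A sketch of the two GOP cache settings together, using the documented defaults:

```yaml
livestream:
  gop_cache_size: 2             # GOPs kept per stream so new viewers can start at a recent keyframe
  gop_cache_max_memory_mb: 100  # upper bound on GOP cache memory, in MB
```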
| Field | Default | Purpose |
|---|---|---|
| livestream.stream_timeout_seconds | 300 | Stop idle pull streams after this duration |
| livestream.pull_max_retries | 10 | Maximum pull retry attempts |
| livestream.pull_initial_backoff_ms | 1000 | Initial retry backoff |
| livestream.pull_max_backoff_ms | 30000 | Maximum retry backoff |
Backoff grows over time to avoid hammering unstable upstream sources.
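A sketch of the pull and timeout fields with their documented defaults:

```yaml
livestream:
  stream_timeout_seconds: 300    # stop idle pull streams after 5 minutes
  pull_max_retries: 10           # give up after 10 failed pull attempts
  pull_initial_backoff_ms: 1000  # first retry waits 1 s
  pull_max_backoff_ms: 30000     # backoff never exceeds 30 s
```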
livestream.max_flv_tag_size_bytes default: 10485760, which is 10 MiB.
This prevents abnormal streams from forcing excessive memory allocation. Do not increase it unless you know the upstream emits larger valid FLV tags.
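If an upstream is known to emit larger valid FLV tags, the limit can be raised; a sketch with an illustrative value:

```yaml
livestream:
  max_flv_tag_size_bytes: 20971520   # 20 MiB, illustrative; default is 10485760 (10 MiB)
```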
livestream.hls_storage_backend default: memory.
Allowed values:
| Value | Meaning | Use case |
|---|---|---|
| memory | Keep HLS segments in process memory | Single replica, temporary livestreams, development |
| file | Write HLS segments to livestream.hls_storage_path | Single-replica file storage or multi-replica shared filesystem |
| oss | Write HLS segments to S3-compatible object storage | Multi-replica Kubernetes or cross-node storage |
Canonical values are memory, file, and oss. Raw YAML/env parsing also accepts filesystem as a file alias, and s3 or object_storage as oss aliases. Helm values intentionally allow only canonical values.
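For example, these raw YAML spellings select the same backend; Helm values would accept only the canonical one:

```yaml
livestream:
  hls_storage_backend: "oss"              # canonical
  # hls_storage_backend: "s3"             # alias for oss, raw YAML/env only
  # hls_storage_backend: "object_storage" # alias for oss, raw YAML/env only
  # hls_storage_backend: "filesystem"     # alias for file, raw YAML/env only
```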
Cluster mode can use memory. Non-publisher nodes read playlist/segment data from the publisher node through the HLS gRPC proxy. This is acceptable for small deployments or validation environments; high-traffic production HLS should use either:
- file with hls_storage_path mounted from a filesystem all replicas can read and write.
- oss with livestream.hls_oss.* configured.

livestream.hls_memory_max_mb default: 0, which means the built-in default is used.
This only applies when hls_storage_backend=memory. The current built-in default is 512 MB.
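A sketch that raises the in-memory segment budget explicitly (the number is illustrative):

```yaml
livestream:
  hls_storage_backend: "memory"
  hls_memory_max_mb: 1024   # illustrative; 0 (default) falls back to the built-in 512 MB
```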
livestream.hls_shared_storage default: false.
This only applies when hls_storage_backend=file. It declares whether hls_storage_path is mounted from storage visible to every replica. oss is inherently shared and does not use this flag. Setting hls_shared_storage=true with the memory or oss backend is rejected by configuration validation.
With local file-backed HLS in multi-replica deployments, non-publisher nodes use the HLS gRPC proxy to read from the publisher node. To reduce cross-node origin pressure and get cleaner failure boundaries, production multi-replica file-backed HLS should use shared storage:
```yaml
livestream:
  hls_storage_backend: "file"
  hls_shared_storage: true
  hls_storage_path: "/var/lib/synctv/hls"
```

The path must be shared by all replicas, for example through NFS, an RWX PVC, or a CSI volume. Do not mark Pod-local emptyDir, /tmp, or node-local disks as shared storage; if it is local, keep hls_shared_storage=false.
livestream.hls_storage_path default: empty.
This is used only when hls_storage_backend=file. Relative paths are resolved under data_dir:
data_dir: "/var/lib/synctv"livestream: hls_storage_backend: "file" hls_storage_path: "livestream/hls"Effective path:
/var/lib/synctv/livestream/hlsWhen hls_storage_backend=oss, SyncTV stores HLS segments in S3-compatible object storage:
```yaml
livestream:
  hls_storage_backend: "oss"
  hls_oss:
    endpoint: "https://s3.example.com"
    bucket: "synctv-hls"
    region: "auto"
    base_path: "synctv/hls/"
```

Inject credentials through environment variables or secret files:
```bash
export SYNCTV_LIVESTREAM_HLS_OSS_ACCESS_KEY_ID="..."
export SYNCTV_LIVESTREAM_HLS_OSS_SECRET_ACCESS_KEY="..."
```

| Field | Default | Meaning |
|---|---|---|
| livestream.hls_oss.endpoint | "" | S3/OSS endpoint such as AWS S3, MinIO, Cloudflare R2, or a compatible service |
| livestream.hls_oss.bucket | "" | Bucket used for HLS segments |
| livestream.hls_oss.access_key_id | "" | Access key ID; supports access_key_id_file |
| livestream.hls_oss.secret_access_key | "" | Secret access key; supports secret_access_key_file |
| livestream.hls_oss.region | null | S3 region; leave empty or use the provider-required value for compatible services |
| livestream.hls_oss.base_path | hls/ | Object key prefix inside the bucket; normalized without a leading / and with a trailing / |
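Because access_key_id_file and secret_access_key_file are supported, a sketch that reads credentials from mounted secret files (the paths are illustrative):

```yaml
livestream:
  hls_oss:
    access_key_id_file: "/run/secrets/oss_access_key_id"         # illustrative mount path
    secret_access_key_file: "/run/secrets/oss_secret_access_key" # illustrative mount path
```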
livestream.flv_max_connection_duration_seconds default: 86400.
This limits a single HTTP-FLV connection to 24 hours. Setting it to 0 disables the limit, but that is not recommended for production.
livestream.flv_write_timeout_seconds default: 30.
If a client is too slow and writes remain blocked beyond this timeout, SyncTV disconnects the client to protect server resources.
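A sketch of both FLV connection limits with their documented defaults:

```yaml
livestream:
  flv_max_connection_duration_seconds: 86400  # cap a single HTTP-FLV connection at 24 hours; 0 disables the cap
  flv_write_timeout_seconds: 30               # disconnect clients whose writes stay blocked past 30 s
```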