
Livestream Configuration

The livestream configuration section controls RTMP push and pull, HLS segmenting, HTTP-FLV connections, the livestream memory cache, and livestream file storage.

If you only use on-demand media providers, most values can remain at their defaults.

Livestreaming includes ingest, realtime packet distribution, FLV playback, HLS remuxing, HLS storage, and cross-node publisher registration. It is separate from on-demand media provider/proxy flows: on-demand media mostly resolves external URLs and Range proxying, while livestreaming handles data that is being produced now.

Figure: SyncTV livestream pipeline showing RTMP ingest entering StreamHub, splitting into HTTP-FLV and HLS remuxer paths, and HLS playlist/segment routes reading from memory, file, or OSS storage.
Livestream data enters through RTMP and StreamHub, then splits by playback protocol: HTTP-FLV streams low-latency chunks, while HLS remuxing writes playlist and segment state to the selected HLS backend.

RTMP ingest

Publishers connect through RTMP. During authentication, the publisher is registered on the local node or shared registry.

StreamHub

Live packets are distributed in-process to FLV sessions, the HLS remuxer, and internal relay/pull logic.

HTTP-FLV

Best for low latency. Clients hold long-lived responses, and write timeouts protect the server from slow readers.

HLS

Best for broad player compatibility and shared storage/CDN-like deployments. It has higher latency but scales well across replicas.

| Protocol | Strength | Cost | Best fit |
| --- | --- | --- | --- |
| HTTP-FLV | Low latency and short server path | Many long-lived connections; slow clients need strict timeouts | Interactive livestreaming and room-synchronized viewing |
| HLS | Broad player compatibility; playlist/segment pull model; works with shared storage | Higher latency and segment storage management | Mobile clients, generic players, multi-replica deployments, object storage |

memory is the default HLS storage backend. Segments exist only inside the current process. It is the simplest option for single-replica, development, and low-traffic multi-replica deployments.

Limitations: process restarts lose segments; in multi-replica deployments, non-publisher nodes must read playlist/segment data from the publisher node through the HLS gRPC proxy. It works, but it is not the preferred backend for high-traffic production HLS.

  1. A publisher connects through RTMP to any SyncTV node.
  2. After authentication, that node registers room_id/media_id -> node_id/api_address in the publisher registry. In cluster mode, the registry uses Redis.
  3. The local StreamHub receives live packets. HTTP-FLV can subscribe directly, and the HLS remuxer keeps producing segments.
  4. HLS segments are written to the selected memory, file, or oss backend.
  5. If a viewer request lands on a non-publisher node, that node uses the shared registry to locate the publisher owner. HTTP-FLV uses cross-node relay; HLS playlist/segment reads can be proxied to the publisher node through the HLS gRPC proxy.
  6. If the HLS backend is file + hls_shared_storage=true or oss, any node can also read segments directly from the shared backend, reducing publisher-node fan-out pressure.
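The registry lookup in steps 2 and 5 can be sketched as a simple mapping from stream identity to publisher location. This is an illustrative model only: the type and method names (streamKey, streamOwner, register, lookup) are hypothetical, and in cluster mode the same mapping would live in Redis rather than a local map.

```go
package main

import "fmt"

// streamKey identifies a live stream; streamOwner records which node publishes it.
// These names are illustrative, not SyncTV's actual internal types.
type streamKey struct{ roomID, mediaID string }
type streamOwner struct{ nodeID, apiAddress string }

// registry sketches the publisher registry: a local map in single-node mode;
// in cluster mode the equivalent mapping is stored in Redis.
type registry map[streamKey]streamOwner

// register is what the ingest node does after RTMP authentication (step 2).
func (r registry) register(roomID, mediaID, nodeID, apiAddr string) {
	r[streamKey{roomID, mediaID}] = streamOwner{nodeID, apiAddr}
}

// lookup is what a non-publisher node does when a viewer request lands on it (step 5).
func (r registry) lookup(roomID, mediaID string) (streamOwner, bool) {
	o, ok := r[streamKey{roomID, mediaID}]
	return o, ok
}

func main() {
	r := registry{}
	r.register("room-1", "media-1", "node-a", "10.0.0.5:8080")
	if o, ok := r.lookup("room-1", "media-1"); ok {
		fmt.Println("publisher owner:", o.nodeID, o.apiAddress)
	}
}
```

Once a non-publisher node knows the owner, it relays FLV or proxies HLS reads to that node's api_address, unless a shared HLS backend lets it read segments directly.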

livestream.rtmp_port default: 1935.

OBS push URLs usually look like:

rtmp://your-domain:1935/live/<stream-key>

Change the port if another RTMP service already uses 1935.
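A port override could look like the following (19350 is a hypothetical alternative, not a recommended value):

```yaml
livestream:
  rtmp_port: 19350
```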

livestream.public_rtmp_host default: empty.

Set it when the address returned to streamers must be a public domain or LoadBalancer address instead of an internal Pod or node address:

livestream:
  public_rtmp_host: "live.example.com"

If empty, SyncTV tries to fall back to server.advertise_host.

livestream.gop_cache_size default: 2.

GOP cache helps new viewers start playback sooner without waiting for the next keyframe.

livestream.gop_cache_max_memory_mb default: 100.

Increase it when:

  • Many live streams run concurrently.
  • Bitrate is high.
  • New viewers join frequently.

Decrease it when memory is limited.
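An illustrative tuning for a busier deployment might look like this (the values are examples, not recommendations):

```yaml
livestream:
  gop_cache_size: 4            # keep more GOPs so joins land on a recent keyframe
  gop_cache_max_memory_mb: 256 # raise the cap for many concurrent high-bitrate streams
```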

| Field | Default | Purpose |
| --- | --- | --- |
| livestream.stream_timeout_seconds | 300 | Stop idle pull streams after this duration |
| livestream.pull_max_retries | 10 | Maximum pull retry attempts |
| livestream.pull_initial_backoff_ms | 1000 | Initial retry backoff |
| livestream.pull_max_backoff_ms | 30000 | Maximum retry backoff |

Backoff grows over time to avoid hammering unstable upstream sources.

livestream.max_flv_tag_size_bytes default: 10485760, which is 10 MiB.

This prevents abnormal streams from forcing excessive memory allocation. Do not increase it unless you know the upstream emits larger valid FLV tags.

livestream.hls_storage_backend default: memory.

Allowed values:

| Value | Meaning | Use case |
| --- | --- | --- |
| memory | Keep HLS segments in process memory | Single replica, temporary livestreams, development |
| file | Write HLS segments to livestream.hls_storage_path | Single-replica file storage or multi-replica shared filesystem |
| oss | Write HLS segments to S3-compatible object storage | Multi-replica Kubernetes or cross-node storage |

Canonical values are memory, file, and oss. Raw YAML/env parsing also accepts filesystem as a file alias, and s3 or object_storage as oss aliases. Helm values intentionally allow only canonical values.
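The alias handling described above can be modeled as a small normalization function. The helper name normalizeHLSBackend is hypothetical; SyncTV's actual parser may differ in details such as whitespace or case handling:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHLSBackend maps raw YAML/env values to canonical backend names,
// accepting the documented aliases. Unknown values are rejected.
func normalizeHLSBackend(v string) (string, bool) {
	switch strings.ToLower(strings.TrimSpace(v)) {
	case "memory":
		return "memory", true
	case "file", "filesystem":
		return "file", true
	case "oss", "s3", "object_storage":
		return "oss", true
	default:
		return "", false
	}
}

func main() {
	for _, v := range []string{"filesystem", "s3", "object_storage", "ftp"} {
		canon, ok := normalizeHLSBackend(v)
		fmt.Println(v, "->", canon, ok)
	}
}
```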

Cluster mode can use memory. Non-publisher nodes read playlist/segment data from the publisher node through the HLS gRPC proxy. This is acceptable for small deployments or validation environments; high-traffic production HLS should use either:

  • file with hls_storage_path mounted from a filesystem all replicas can read and write.
  • oss with livestream.hls_oss.* configured.

livestream.hls_memory_max_mb default: 0, which means the built-in default is used.

This only applies when hls_storage_backend=memory. The current built-in default is 512 MB.
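Raising the cap above the built-in default might look like this (1024 is an illustrative value):

```yaml
livestream:
  hls_storage_backend: "memory"
  hls_memory_max_mb: 1024  # raise the cap above the 512 MB built-in default
```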

livestream.hls_shared_storage default: false.

This only applies when hls_storage_backend=file. It declares whether hls_storage_path is mounted from storage visible to every replica. oss is inherently shared and does not use this flag. Setting hls_shared_storage=true with the memory or oss backend is rejected by configuration validation.

With local file-backed HLS in multi-replica deployments, non-publisher nodes use the HLS gRPC proxy to read from the publisher node. To reduce cross-node origin pressure and get cleaner failure boundaries, production multi-replica file-backed HLS should use shared storage:

livestream:
  hls_storage_backend: "file"
  hls_shared_storage: true
  hls_storage_path: "/var/lib/synctv/hls"

The path must be shared by all replicas, for example through NFS, an RWX PVC, or a CSI volume. Do not mark Pod-local emptyDir, /tmp, or node-local disks as shared storage; if it is local, keep hls_shared_storage=false.

livestream.hls_storage_path default: empty.

This is used only when hls_storage_backend=file. Relative paths are resolved under data_dir:

data_dir: "/var/lib/synctv"
livestream:
  hls_storage_backend: "file"
  hls_storage_path: "livestream/hls"

Effective path:

/var/lib/synctv/livestream/hls
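The resolution rule above amounts to: absolute paths are used as-is, relative paths are joined under data_dir. A sketch with a hypothetical helper name (resolveHLSPath is not SyncTV's actual function):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveHLSPath joins a relative hls_storage_path under dataDir and
// leaves absolute paths untouched. Illustrative only.
func resolveHLSPath(dataDir, hlsPath string) string {
	if filepath.IsAbs(hlsPath) {
		return hlsPath
	}
	return filepath.Join(dataDir, hlsPath)
}

func main() {
	fmt.Println(resolveHLSPath("/var/lib/synctv", "livestream/hls")) // relative: joined under data_dir
	fmt.Println(resolveHLSPath("/var/lib/synctv", "/srv/hls"))       // absolute: used as-is
}
```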

When hls_storage_backend=oss, SyncTV stores HLS segments in S3-compatible object storage:

livestream:
  hls_storage_backend: "oss"
  hls_oss:
    endpoint: "https://s3.example.com"
    bucket: "synctv-hls"
    region: "auto"
    base_path: "synctv/hls/"

Inject credentials through environment variables or secret files:

export SYNCTV_LIVESTREAM_HLS_OSS_ACCESS_KEY_ID="..."
export SYNCTV_LIVESTREAM_HLS_OSS_SECRET_ACCESS_KEY="..."

| Field | Default | Meaning |
| --- | --- | --- |
| livestream.hls_oss.endpoint | "" | S3/OSS endpoint such as AWS S3, MinIO, Cloudflare R2, or a compatible service |
| livestream.hls_oss.bucket | "" | Bucket used for HLS segments |
| livestream.hls_oss.access_key_id | "" | Access key ID; supports access_key_id_file |
| livestream.hls_oss.secret_access_key | "" | Secret access key; supports secret_access_key_file |
| livestream.hls_oss.region | null | S3 region; leave empty or use the provider-required value for compatible services |
| livestream.hls_oss.base_path | hls/ | Object key prefix inside the bucket; normalized without a leading / and with a trailing / |
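The base_path normalization rule (no leading /, exactly one trailing /) can be sketched as follows; normalizeBasePath is a hypothetical helper name, not SyncTV's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBasePath strips leading/trailing slashes and re-adds a single
// trailing slash, matching the documented prefix format. Illustrative only.
func normalizeBasePath(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return ""
	}
	return p + "/"
}

func main() {
	fmt.Println(normalizeBasePath("/synctv/hls")) // synctv/hls/
	fmt.Println(normalizeBasePath("hls/"))        // hls/
}
```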

livestream.flv_max_connection_duration_seconds default: 86400.

This limits a single HTTP-FLV connection to 24 hours. Setting it to 0 disables the limit, but that is not recommended for production.

livestream.flv_write_timeout_seconds default: 30.

If a client is too slow and writes remain blocked beyond this timeout, SyncTV disconnects the client to protect server resources.
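Together, the two FLV limits might be set explicitly like this (the values shown are the documented defaults):

```yaml
livestream:
  flv_max_connection_duration_seconds: 86400  # 24-hour cap per HTTP-FLV connection
  flv_write_timeout_seconds: 30               # disconnect clients blocked on writes
```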