# Helm Deployment
## Chart Location

The Helm chart lives at `helm/synctv`. By default, it can create:
- SyncTV Deployment.
- HTTP/API Service.
- gRPC Service.
- PostgreSQL.
- Redis.
- ConfigMap.
- Secret.
- Ingress.
- ServiceAccount, Role, and RoleBinding.
- Optional metrics, ServiceMonitor, VMServiceScrape, PrometheusRule, NetworkPolicy, HPA, and PDB.
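One way to see exactly which of these resources a given values file produces is to render the chart locally and list the distinct manifest kinds. A sketch, assuming Helm is installed and the command runs from the repository root:

```shell
# Render the chart without a cluster and list the unique resource kinds.
# The chart path ./helm/synctv matches the location described above.
helm template synctv ./helm/synctv --namespace synctv \
  | grep '^kind:' \
  | sort -u
```

The same pipeline works with `--values my-values.yaml` appended, which makes it easy to confirm that toggling optional features actually adds or removes the corresponding resources.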
## Install Released Chart

OCI registry install:

```bash
helm install synctv oci://ghcr.io/zijiren233/synctv/charts/synctv \
  --version 0.1.0 \
  --namespace synctv --create-namespace
```
The default parent OCI repository is `ghcr.io/zijiren233/synctv/charts`. Helm appends the chart name, so the install reference ends with `/synctv`. Maintainers can override the publishing target with `HELM_OCI_REPOSITORY`.
Traditional Helm repository install:

```bash
helm repo add synctv https://zijiren233.github.io/synctv
helm repo update
helm install synctv synctv/synctv \
  --version 0.1.0 \
  --namespace synctv --create-namespace
```
Published charts are generated by the release workflow. The source repository keeps only the chart source under `helm/synctv`; packaged `.tgz` files and the Helm repository `index.yaml` are generated during release. Public installs require the GHCR chart package to be public and GitHub Pages to serve the `helm-charts` branch.
## Install From Source

```bash
helm install synctv ./helm/synctv \
  --namespace synctv \
  --create-namespace
```

Production deployments should use a values file:

```bash
helm install synctv ./helm/synctv \
  --namespace synctv \
  --create-namespace \
  --values my-values.yaml
```

## HTTP and gRPC Services

The SyncTV process serves HTTP REST and gRPC on the same container port. The Helm chart exposes them through separate Services:
| Service | Purpose | Port name |
|---|---|---|
| `synctv` | HTTP/REST plus RTMP, STUN, metrics-related ports | `api` |
| `synctv-grpc` | Dedicated gRPC entry | `grpc` |
Why split them:
- Ingress controllers usually need protocol-specific gRPC backend settings.
- Kubernetes Service/Ingress semantics differ even if the container port is the same.
- Metrics selectors can target the HTTP/API Service without accidentally scraping the gRPC Service.
## Ingress

HTTP Ingress:

```yaml
ingress:
  enabled: true
  hosts:
    - host: synctv.example.com
```

gRPC Ingress is configured separately:

```yaml
ingress:
  grpc:
    enabled: true
    hosts:
      - host: grpc.synctv.example.com
        paths:
          - path: /
            pathType: Prefix
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
```

`ingress.grpc.annotations` is independent from the HTTP Ingress annotations.
## Secrets

The chart can generate and store a Secret, but production should explicitly provide strong values:

```yaml
secrets:
  jwt:
    secret: "replace-with-strong-secret"
  cluster:
    grpcSecret: "replace-with-cluster-secret"
  security:
    credentialEncryptionKey: "64-hex-character-key"
    opaqueServerSetupSecret: "stable-random-secret"
  bootstrap:
    rootPassword: "StrongRootPass12345"
```

With an external Secret:

```yaml
existingSecret: "my-external-synctv-secret"
```

Required keys include:

- `SYNCTV_DATABASE_PASSWORD`, unless PostgreSQL uses KubeBlocks mode.
- `SYNCTV_REDIS_PASSWORD`, unless Redis uses KubeBlocks mode.
- `SYNCTV_JWT_SECRET`
- `SYNCTV_SERVER_CLUSTER_SECRET`
- `SYNCTV_SECURITY_CREDENTIAL_ENCRYPTION_KEY`
- `SYNCTV_SECURITY_OPAQUE_SERVER_SETUP_SECRET`
- `SYNCTV_BOOTSTRAP_ROOT_PASSWORD` when `config.bootstrap.createRootUser=true`
- `SYNCTV_MANAGEMENT_AUTH_TOKEN` when management uses TCP
- `SYNCTV_EMAIL_SMTP_USERNAME` and `SYNCTV_EMAIL_SMTP_PASSWORD` when `config.email.smtpHost` is set and SMTP authentication is required
- `SYNCTV_METRICS_AUTH_BEARER_TOKEN` when `metrics.enabled=true` and `metrics.auth.mode=bearer_token`
- `SYNCTV_METRICS_AUTH_BASIC_USERNAME` and `SYNCTV_METRICS_AUTH_BASIC_PASSWORD` when `metrics.enabled=true` and `metrics.auth.mode=basic`
- `SYNCTV_LIVESTREAM_HLS_OSS_ACCESS_KEY_ID` and `SYNCTV_LIVESTREAM_HLS_OSS_SECRET_ACCESS_KEY` when `config.livestream.hlsStorageBackend=oss`
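As an illustrative sketch of an external Secret carrying these keys (the Secret name `my-external-synctv-secret` is the same placeholder used with `existingSecret`; all values are placeholders, and only the keys your configuration actually requires need to be present):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-external-synctv-secret
  namespace: synctv
type: Opaque
stringData:
  # Always required.
  SYNCTV_JWT_SECRET: "replace-with-strong-secret"
  SYNCTV_SERVER_CLUSTER_SECRET: "replace-with-cluster-secret"
  SYNCTV_SECURITY_CREDENTIAL_ENCRYPTION_KEY: "64-hex-character-key"
  SYNCTV_SECURITY_OPAQUE_SERVER_SETUP_SECRET: "stable-random-secret"
  # Required unless PostgreSQL / Redis use KubeBlocks mode.
  SYNCTV_DATABASE_PASSWORD: "replace-with-db-password"
  SYNCTV_REDIS_PASSWORD: "replace-with-redis-password"
```

Conditional keys (bootstrap, SMTP, metrics auth, OSS) would be added to `stringData` in the same way when the corresponding feature is enabled.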
## PostgreSQL and Redis Modes

`standard` mode creates chart-managed StatefulSet and Service resources:

```yaml
postgresql:
  mode: standard

redis:
  mode: standard
```

`kubeblocks` mode creates KubeBlocks Cluster resources if KubeBlocks is installed:

```yaml
postgresql:
  mode: kubeblocks

redis:
  mode: kubeblocks
```

In KubeBlocks mode, database credentials come from KubeBlocks-generated Secrets.
Note: the KubeBlocks Redis Sentinel component is part of the database operator topology. It does not automatically configure SyncTV as a redis.deployment_mode=sentinel client. The chart still injects a stable Redis Service endpoint into SyncTV, and SyncTV cluster mode must not be combined with SyncTV Sentinel mode.
## config.dataDir

Default:

```yaml
config:
  dataDir: "/data"
```

The Deployment mounts `/data`. The default is `emptyDir`, which is appropriate for runtime temporary files. If runtime files must persist, set `persistence.data.existingClaim`.
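A minimal values sketch for persisting the data directory; the claim name `synctv-data` is illustrative, while `persistence.data.existingClaim` is the key named above:

```yaml
persistence:
  data:
    # PVC must already exist in the release namespace.
    existingClaim: "synctv-data"
```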
## HLS With Multiple Replicas

The chart does not enable cluster mode by default and does not declare HLS shared storage by default. Multi-replica HLS has two models: local backends work through publisher-node HLS proxying; a shared filesystem or OSS lets any replica read segments directly and is better for high production traffic.

Local backend example:

```yaml
config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "memory"
```

This does not require an HLS PVC, but playlist/segment requests on non-publisher Pods proxy back to the publisher Pod over gRPC.

Shared filesystem example:

```yaml
config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "file"
    hlsSharedStorage: true
    hlsStoragePath: "/var/lib/synctv/hls"

persistence:
  hls:
    existingClaim: "synctv-hls-rwx"
```

Helm rejects these combinations during rendering:

- `hlsStorageBackend` is not `memory`, `file`, or `oss`.
- `hlsStorageBackend=file` with an empty `hlsStoragePath`.
- `hlsStorageBackend=file` in Kubernetes with a non-absolute `hlsStoragePath`.
- `hlsSharedStorage=true` without `persistence.hls.existingClaim`, so `emptyDir` cannot be mistaken for shared storage.
OSS example:

```yaml
config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "oss"
    hlsOss:
      endpoint: "https://s3.example.com"
      bucket: "synctv-hls"
      basePath: "synctv/hls/"

secrets:
  livestream:
    hlsOss:
      accessKeyId: "..."
      secretAccessKey: "..."
```

Whenever `config.cluster.enabled=true`, application startup validation also requires Redis, a stable `SYNCTV_SERVER_CLUSTER_SECRET` shared by every replica, and a usable `SYNCTV_SERVER_ADVERTISE_HOST` for node-to-node communication. Helm defaults inject Redis connection details, generate the cluster secret, and use the Pod IP as the advertise host; preserve those conditions when trimming values or using external Secrets. If livestream HLS uses a local backend, Pod-to-Pod gRPC reachability is also required because remote segment reads depend on publisher-node proxying.
When config.cluster.discoveryMode=k8s_dns, the chart automatically renders a headless Service and injects HEADLESS_SERVICE_NAME plus POD_NAMESPACE. When config.cluster.leaderElectionMode=k8s_lease, the chart injects POD_NAME and POD_NAMESPACE. Both modes require an image built with the k8s feature.
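A values sketch of the two Kubernetes-native modes described above; the keys `config.cluster.discoveryMode` and `config.cluster.leaderElectionMode` come from the text, and combining both in one deployment is an assumption:

```yaml
config:
  cluster:
    enabled: true
    # Chart renders a headless Service and injects
    # HEADLESS_SERVICE_NAME plus POD_NAMESPACE.
    discoveryMode: k8s_dns
    # Chart injects POD_NAME and POD_NAMESPACE.
    leaderElectionMode: k8s_lease
```

Remember that both modes only work with an image built with the `k8s` feature.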
## Metrics

Enable metrics:

```yaml
metrics:
  enabled: true
  auth:
    mode: bearer_token
```

Prometheus Operator:

```yaml
metrics:
  serviceMonitor:
    enabled: true
```

VictoriaMetrics:

```yaml
metrics:
  vmServiceScrape:
    enabled: true
```

The metrics selector should target the API component Service and avoid the gRPC Service.

If `metrics.auth.mode=kubernetes` is used, the SyncTV binary in the image must be compiled with the `k8s` feature. Helm renders RBAC, service account token settings, and scrape resources, but cannot change image compile-time features.
## Render Validation

Temporary checks:

```bash
helm lint ./helm/synctv
helm template synctv ./helm/synctv
helm template synctv ./helm/synctv --set ingress.grpc.enabled=true
```

Successful rendering only proves the manifests are syntactically valid. You still need runtime config validation and startup logs.