Helm Deployment

The Helm chart lives at:

helm/synctv

Depending on the values provided, it can create:

  • SyncTV Deployment.
  • HTTP/API Service.
  • gRPC Service.
  • PostgreSQL.
  • Redis.
  • ConfigMap.
  • Secret.
  • Ingress.
  • ServiceAccount, Role, and RoleBinding.
  • Optional metrics, ServiceMonitor, VMServiceScrape, PrometheusRule, NetworkPolicy, HPA, and PDB.

OCI registry install:

helm install synctv oci://ghcr.io/zijiren233/synctv/charts/synctv \
  --version 0.1.0 \
  --namespace synctv --create-namespace

The default parent OCI repository is ghcr.io/zijiren233/synctv/charts. Helm appends the chart name, so the install reference ends with /synctv. Maintainers can override the publishing target with HELM_OCI_REPOSITORY.

Traditional Helm repository install:

helm repo add synctv https://zijiren233.github.io/synctv
helm repo update
helm install synctv synctv/synctv \
  --version 0.1.0 \
  --namespace synctv --create-namespace

Published charts are generated by the release workflow. The source repository keeps only the chart source under helm/synctv; packaged .tgz files and the Helm repository index.yaml are generated during release. Public installs require the GHCR chart package to be public and GitHub Pages to serve the helm-charts branch.

Install from the local chart source:

helm install synctv ./helm/synctv \
  --namespace synctv \
  --create-namespace

Production deployments should use a values file:

helm install synctv ./helm/synctv \
  --namespace synctv \
  --create-namespace \
  --values my-values.yaml
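
A possible starting point for my-values.yaml, assembled from values shown elsewhere on this page (hostnames and secret values are placeholders to replace):

```yaml
ingress:
  enabled: true
  hosts:
    - host: synctv.example.com

secrets:
  jwt:
    secret: "replace-with-strong-secret"
  cluster:
    grpcSecret: "replace-with-cluster-secret"
  security:
    credentialEncryptionKey: "64-hex-character-key"
    opaqueServerSetupSecret: "stable-random-secret"

metrics:
  enabled: true
```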

The SyncTV process serves HTTP REST and gRPC on the same container port. The Helm chart exposes them through separate Services:

Service        Purpose                                             Port name
synctv         HTTP/REST plus RTMP, STUN, metrics-related ports    api
synctv-grpc    Dedicated gRPC entry                                grpc

Why split them:

  • Ingress controllers usually need protocol-specific gRPC backend settings.
  • Kubernetes Service/Ingress semantics differ even if the container port is the same.
  • Metrics selectors can target the HTTP/API Service without accidentally scraping the gRPC Service.

HTTP Ingress:

ingress:
  enabled: true
  hosts:
    - host: synctv.example.com
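
Many charts also accept a conventional ingress.tls list alongside hosts. Assuming this chart follows that convention (verify the key against the chart's values.yaml), TLS termination would look like:

```yaml
ingress:
  enabled: true
  hosts:
    - host: synctv.example.com
  tls:
    - secretName: synctv-tls   # assumed conventional key; check values.yaml
      hosts:
        - synctv.example.com
```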

gRPC Ingress is configured separately:

ingress:
  grpc:
    enabled: true
    hosts:
      - host: grpc.synctv.example.com
        paths:
          - path: /
            pathType: Prefix
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"

ingress.grpc.annotations is independent from HTTP Ingress annotations.

The chart can generate and store a Secret, but production should explicitly provide strong values:

secrets:
  jwt:
    secret: "replace-with-strong-secret"
  cluster:
    grpcSecret: "replace-with-cluster-secret"
  security:
    credentialEncryptionKey: "64-hex-character-key"
    opaqueServerSetupSecret: "stable-random-secret"
  bootstrap:
    rootPassword: "StrongRootPass12345"
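
The placeholder values above need real random material. One common way to generate it (the openssl invocations are a suggestion, not a chart requirement):

```shell
# 64-hex-character credential encryption key (32 random bytes, hex-encoded)
openssl rand -hex 32

# Strong random values for the JWT secret and cluster gRPC secret
openssl rand -base64 32
```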

With an external Secret:

existingSecret: "my-external-synctv-secret"

Required keys include:

  • SYNCTV_DATABASE_PASSWORD, unless PostgreSQL uses KubeBlocks mode.
  • SYNCTV_REDIS_PASSWORD, unless Redis uses KubeBlocks mode.
  • SYNCTV_JWT_SECRET
  • SYNCTV_SERVER_CLUSTER_SECRET
  • SYNCTV_SECURITY_CREDENTIAL_ENCRYPTION_KEY
  • SYNCTV_SECURITY_OPAQUE_SERVER_SETUP_SECRET
  • SYNCTV_BOOTSTRAP_ROOT_PASSWORD when config.bootstrap.createRootUser=true
  • SYNCTV_MANAGEMENT_AUTH_TOKEN when management uses TCP
  • SYNCTV_EMAIL_SMTP_USERNAME and SYNCTV_EMAIL_SMTP_PASSWORD when config.email.smtpHost is set and SMTP authentication is required
  • SYNCTV_METRICS_AUTH_BEARER_TOKEN when metrics.enabled=true and metrics.auth.mode=bearer_token
  • SYNCTV_METRICS_AUTH_BASIC_USERNAME and SYNCTV_METRICS_AUTH_BASIC_PASSWORD when metrics.enabled=true and metrics.auth.mode=basic
  • SYNCTV_LIVESTREAM_HLS_OSS_ACCESS_KEY_ID and SYNCTV_LIVESTREAM_HLS_OSS_SECRET_ACCESS_KEY when config.livestream.hlsStorageBackend=oss
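
A sketch of such an external Secret, assuming standard PostgreSQL/Redis modes and none of the optional features above (the metadata.name must match existingSecret; all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-external-synctv-secret
type: Opaque
stringData:
  SYNCTV_DATABASE_PASSWORD: "db-password"
  SYNCTV_REDIS_PASSWORD: "redis-password"
  SYNCTV_JWT_SECRET: "replace-with-strong-secret"
  SYNCTV_SERVER_CLUSTER_SECRET: "replace-with-cluster-secret"
  SYNCTV_SECURITY_CREDENTIAL_ENCRYPTION_KEY: "64-hex-character-key"
  SYNCTV_SECURITY_OPAQUE_SERVER_SETUP_SECRET: "stable-random-secret"
```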

The standard mode creates chart-managed StatefulSet and Service resources:

postgresql:
  mode: standard
redis:
  mode: standard

The kubeblocks mode creates KubeBlocks Cluster resources, provided KubeBlocks is installed:

postgresql:
  mode: kubeblocks
redis:
  mode: kubeblocks

In KubeBlocks mode, database credentials come from KubeBlocks-generated Secrets.

Note: the KubeBlocks Redis Sentinel component is part of the database operator topology. It does not automatically configure SyncTV as a redis.deployment_mode=sentinel client. The chart still injects a stable Redis Service endpoint into SyncTV, and SyncTV cluster mode must not be combined with SyncTV Sentinel mode.

Default data directory:

config:
  dataDir: "/data"

The Deployment mounts /data. The default is emptyDir, which is appropriate for runtime temporary files. If runtime files must persist, set persistence.data.existingClaim.
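
If persistence is needed, a values fragment along these lines points the data mount at an existing claim (the claim name is illustrative):

```yaml
persistence:
  data:
    existingClaim: "synctv-data"
```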

The chart does not enable cluster mode by default and does not declare HLS shared storage by default. Multi-replica HLS supports two models:

  • Local backends: playlist and segment requests on non-publisher replicas are proxied to the publisher node.
  • Shared filesystem or OSS: any replica reads segments directly, which scales better under high production traffic.

Local backend example:

config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "memory"

This does not require an HLS PVC, but playlist/segment requests on non-publisher Pods proxy back to the publisher Pod over gRPC.

Shared filesystem example:

config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "file"
    hlsSharedStorage: true
    hlsStoragePath: "/var/lib/synctv/hls"
persistence:
  hls:
    existingClaim: "synctv-hls-rwx"
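
The referenced claim must be mountable by every replica, which in practice means a ReadWriteMany volume. A sketch of such a claim (storage class and size are illustrative and cluster-specific):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: synctv-hls-rwx
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # illustrative; use an RWX-capable class in your cluster
  resources:
    requests:
      storage: 20Gi
```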

Helm rejects these combinations during rendering:

  • hlsStorageBackend is not memory, file, or oss.
  • hlsStorageBackend=file with an empty hlsStoragePath.
  • hlsStorageBackend=file in Kubernetes with a non-absolute hlsStoragePath.
  • hlsSharedStorage=true without persistence.hls.existingClaim, so emptyDir cannot be mistaken for shared storage.

OSS example:

config:
  cluster:
    enabled: true
  livestream:
    hlsStorageBackend: "oss"
    hlsOss:
      endpoint: "https://s3.example.com"
      bucket: "synctv-hls"
      basePath: "synctv/hls/"
secrets:
  livestream:
    hlsOss:
      accessKeyId: "..."
      secretAccessKey: "..."

Whenever config.cluster.enabled=true, application startup validation also requires Redis, a stable SYNCTV_SERVER_CLUSTER_SECRET shared by every replica, and a usable SYNCTV_SERVER_ADVERTISE_HOST for node-to-node communication. Helm defaults inject Redis connection details, generate the cluster secret, and use the Pod IP as the advertise host; preserve those conditions when trimming values or using external Secrets. If livestream HLS uses a local backend, Pod-to-Pod gRPC reachability is also required because remote segment reads depend on publisher-node proxying.

When config.cluster.discoveryMode=k8s_dns, the chart automatically renders a headless Service and injects HEADLESS_SERVICE_NAME plus POD_NAMESPACE. When config.cluster.leaderElectionMode=k8s_lease, the chart injects POD_NAME and POD_NAMESPACE. Both modes require an image built with the k8s feature.
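
Putting the two Kubernetes-native modes together as a values fragment (both keys are taken from the paragraph above; both require an image built with the k8s feature):

```yaml
config:
  cluster:
    enabled: true
    discoveryMode: k8s_dns
    leaderElectionMode: k8s_lease
```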

Enable metrics:

metrics:
  enabled: true
  auth:
    mode: bearer_token

Prometheus Operator:

metrics:
  serviceMonitor:
    enabled: true

VictoriaMetrics:

metrics:
  vmServiceScrape:
    enabled: true

The metrics selector should target the API component Service and avoid the gRPC Service.

If metrics.auth.mode=kubernetes is used, the SyncTV binary in the image must be compiled with the k8s feature. Helm renders RBAC, service account token settings, and scrape resources, but cannot change image compile-time features.

Quick validation checks:

helm lint ./helm/synctv
helm template synctv ./helm/synctv
helm template synctv ./helm/synctv --set ingress.grpc.enabled=true

Successful rendering only proves the manifests are syntactically valid; you still need to validate the runtime configuration and check the startup logs.