# Helm chart reference

The FCC Helm chart lives at `charts/fcc/` in the source repository.

- **Chart:** fcc
- **Chart version:** 0.1.0
- **App version:** 1.1.1
- **Kubernetes:** >=1.25.0

This doc is the reference for every configurable knob. For a narrative walkthrough, see `kubernetes.md`.

## Install sources

### 1. Local source tree (dev / contributors)

```shell
git clone https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext.git
cd l2_fcc_agent_team_ext
helm install fcc ./charts/fcc
```

### 2. Sister fcc-helm repo (end users, once configured)

```shell
helm repo add fcc https://rollingthunderfourtytwo-afk.github.io/fcc-helm
helm repo update
helm install fcc fcc/fcc
```

The sister repo is populated automatically by `.github/workflows/helm-mirror.yml` on every tag push. See `helm-repo-mirror.md` for the one-time setup.

### 3. OCI registry (advanced)

```shell
helm install fcc oci://ghcr.io/rollingthunderfourtytwo-afk/helm/fcc --version 1.1.1
```

## Values reference

The full annotated defaults are in `charts/fcc/values.yaml`; the production overlay is `values-prod.yaml`.

### Global

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `global.imageRegistry` | string | `ghcr.io/rollingthunderfourtytwo-afk` | Registry for all 4 images |
| `global.imageTag` | string | `"1.1.1"` | Tag applied to all 4 images unless overridden per-service |
| `global.imagePullPolicy` | string | `IfNotPresent` | Pull policy for all services |
| `global.imagePullSecrets` | list | `[]` | Pull secret names (for private registries) |
| `global.ai.defaultProvider` | string | `mock` | `mock` / `anthropic` / `openai` / `azure_openai` / `ollama` / `litellm` / `vllm` |
| `global.ai.anthropicApiKey` | string | `""` | Use `--set-string` or a pre-created Secret |
| `global.ai.openaiApiKey` | string | `""` | Same |
| `global.ai.azureOpenaiApiKey` | string | `""` | Same |
| `global.ai.azureOpenaiEndpoint` | string | `""` | Azure endpoint URL |
| `global.ai.ollamaBaseUrl` | string | `""` | Triggers Ollama auto-detection when set |
| `global.ai.ollamaDefaultModel` | string | `llama3.2:latest` | Default model for Ollama |
| `global.ai.litellmDefaultModel` | string | `""` | Triggers LiteLLM auto-detection when set |
| `global.ai.vllmBaseUrl` | string | `""` | Triggers vLLM auto-detection when set |
| `global.ai.vllmDefaultModel` | string | `""` | Default model for vLLM |
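Rather than stringing together `--set` flags, the global AI settings can live in an override file. A minimal sketch for the Ollama provider (the in-cluster hostname is an assumption — substitute your own Ollama Service):

```yaml
# my-values.yaml — override file mirroring the keys in the table above
global:
  imageTag: "1.1.1"
  ai:
    defaultProvider: ollama
    ollamaBaseUrl: http://ollama.ollama.svc.cluster.local:11434  # assumed Service DNS
    ollamaDefaultModel: llama3.2:latest
```

Apply it with `helm install fcc ./charts/fcc -f my-values.yaml`.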

### Service account + RBAC

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `serviceAccount.create` | bool | `true` | Create a dedicated ServiceAccount |
| `serviceAccount.name` | string | `""` | SA name (falls back to `<release>-fcc`) |
| `serviceAccount.annotations` | object | `{}` | Annotations (useful for AWS IRSA, GCP Workload Identity) |
| `rbac.create` | bool | `true` | Create Role + RoleBinding (Role-scoped, not ClusterRole) |
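As a sketch of the IRSA case mentioned above, the annotation maps the ServiceAccount to an IAM role (the account ID and role name here are placeholders, not real values):

```yaml
serviceAccount:
  create: true
  annotations:
    # hypothetical role ARN — substitute your own account and role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/fcc-irsa
rbac:
  create: true
```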

### Backend (WebSocket bridge)

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `backend.enabled` | bool | `true` | Deploy the backend |
| `backend.image.repository` | string | `fcc-backend` | |
| `backend.image.tag` | string | `""` | Falls back to `global.imageTag` |
| `backend.replicas` | int | `1` | Horizontal scale (prod: 2) |
| `backend.service.port` | int | `8765` | WS + `/health` port |
| `backend.logLevel` | string | `INFO` | `DEBUG` / `INFO` / `WARNING` / `ERROR` |
| `backend.resources.requests.cpu` | string | `100m` | |
| `backend.resources.requests.memory` | string | `256Mi` | |
| `backend.resources.limits.cpu` | string | `1000m` | |
| `backend.resources.limits.memory` | string | `1Gi` | |
| `backend.podSecurityContext.runAsNonRoot` | bool | `true` | |
| `backend.podSecurityContext.runAsUser` | int | `1001` | |
| `backend.livenessProbe` | object | TCP `:8765` | Override full probe spec |
| `backend.readinessProbe` | object | TCP `:8765` | Override full probe spec |
| `backend.nodeSelector` | object | `{}` | |
| `backend.tolerations` | list | `[]` | |
| `backend.affinity` | object | `{}` | |
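A typical override bumps the backend's scale and resource envelope together; the numbers below are illustrative, not recommendations:

```yaml
backend:
  replicas: 2          # prod default per the table above
  logLevel: DEBUG
  resources:
    requests:
      cpu: 250m        # example values — size to your workload
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
```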

### Frontend (React + nginx)

Same shape as `backend`, plus:

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `frontend.ingress.enabled` | bool | `false` | Enable Ingress resource |
| `frontend.ingress.className` | string | `""` | e.g. `nginx` / `traefik` |
| `frontend.ingress.annotations` | object | `{}` | |
| `frontend.ingress.hosts` | list | `[{host: fcc.example.com, ...}]` | |
| `frontend.ingress.tls` | list | `[]` | |
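A values-file sketch of a TLS-terminated Ingress. The `paths`/`pathType` shape inside each host entry is an assumption based on the common Helm chart convention — check `charts/fcc/values.yaml` for the exact structure; `fcc-tls` is a placeholder Secret name:

```yaml
frontend:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: fcc.example.com
        paths:                 # assumed entry shape — verify against values.yaml
          - path: /
            pathType: Prefix
    tls:
      - secretName: fcc-tls    # hypothetical pre-created TLS Secret
        hosts:
          - fcc.example.com
```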

### Streamlit

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `streamlit.enabled` | bool | `true` | |
| `streamlit.app` | string | `persona_explorer.py` | Which app under `apps/streamlit/` to run |
| `streamlit.service.port` | int | `8501` | |

### Jupyter

| Key | Type | Default | Purpose |
| --- | --- | --- | --- |
| `jupyter.enabled` | bool | `true` | |
| `jupyter.token` | string | `""` | Required in production |
| `jupyter.service.port` | int | `8888` | |
| `jupyter.persistence.enabled` | bool | `true` | Use a PVC for notebook content |
| `jupyter.persistence.storageClassName` | string | `""` | Cluster default if empty |
| `jupyter.persistence.accessMode` | string | `ReadWriteOnce` | |
| `jupyter.persistence.size` | string | `10Gi` | |
| `jupyter.persistence.mountPath` | string | `/app/notebooks` | |
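A production-leaning Jupyter override would set the token and pin a storage class; the class name below is an example, and whether your cluster offers it depends on your provisioner:

```yaml
jupyter:
  token: "change-me"        # required in production — prefer a pre-created Secret if supported
  persistence:
    enabled: true
    storageClassName: gp3   # assumption: your cluster provides this StorageClass
    size: 20Gi
```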

## Common install patterns

### Local dev (kind / minikube / k3d)

```shell
helm install fcc ./charts/fcc \
  --set global.imagePullPolicy=Never \
  --set jupyter.persistence.enabled=false
```

### Hosted Anthropic backend + external frontend

```shell
helm install fcc ./charts/fcc \
  --set global.ai.defaultProvider=anthropic \
  --set-string global.ai.anthropicApiKey="$ANTHROPIC_API_KEY" \
  --set frontend.ingress.enabled=true \
  --set frontend.ingress.className=nginx \
  --set frontend.ingress.hosts[0].host=fcc.example.com
```

### Self-hosted vLLM (cluster-internal)

```shell
helm install fcc ./charts/fcc \
  --set global.ai.defaultProvider=vllm \
  --set global.ai.vllmBaseUrl=http://vllm.llm-serving.svc.cluster.local:8000/v1
```

### High-availability backend only (disable other services)

```shell
helm install fcc ./charts/fcc \
  --set backend.replicas=3 \
  --set frontend.enabled=false \
  --set streamlit.enabled=false \
  --set jupyter.enabled=false
```
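The same pattern can be kept in version control as an override file instead of one-off flags (a sketch, equivalent to the `--set` form above):

```yaml
# ha-backend.yaml — hypothetical filename
backend:
  replicas: 3
frontend:
  enabled: false
streamlit:
  enabled: false
jupyter:
  enabled: false
```

Then: `helm install fcc ./charts/fcc -f ha-backend.yaml`.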

## Release lifecycle

```shell
# Install
helm install fcc ./charts/fcc

# Upgrade
helm upgrade fcc ./charts/fcc --reuse-values --set backend.replicas=3

# Rollback
helm history fcc
helm rollback fcc 1

# Uninstall (PVCs preserved)
helm uninstall fcc

# Uninstall (full wipe, including PVCs)
helm uninstall fcc
kubectl delete pvc -l app.kubernetes.io/instance=fcc
```

## See also