
Deploying FCC with Compose in production-like setups

Audience: DevOps engineers, platform teams, and professionals running FCC for internal teams or small production workloads.

K8s + Helm note: v1.1.0 ships Docker + Compose only. Kubernetes manifests, a Helm chart, and image-registry publishing land in v1.1.1. If you need K8s today, you can use these Dockerfiles as a starting point — the container interface is stable.

Architecture

flowchart LR
    inet[Internet] -->|HTTPS| caddy[Reverse proxy<br/>Caddy / Traefik / nginx]
    caddy -->|HTTP :8080| fe[fcc-frontend<br/>React + nginx]
    fe -->|/ws<br/>internal Compose net| be[fcc-backend<br/>WS bridge :8765]
    fe -->|/health<br/>internal| be
    caddy -->|HTTP :8888| jp[fcc-jupyter<br/>JupyterLab]
    caddy -->|HTTP :8501| st[fcc-streamlit<br/>Streamlit app]

    style be fill:#e1f5fe
    style fe fill:#fff3e0
    style st fill:#f3e5f5
    style jp fill:#e8f5e9
    style caddy fill:#fff

Production overlay

The repo ships a docker-compose.prod.yml overlay that adds:

  • restart: always on every service
  • CPU and memory limits
  • JSON-file log rotation (10 MB × 5 files)
  • A required JUPYTER_TOKEN (Compose fails to start if it is unset)

docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
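For orientation, the relevant portions of such an overlay typically look like the following. This is an illustrative sketch, not the shipped file; service names and limit values are assumptions:

```yaml
# docker-compose.prod.yml (sketch)
services:
  backend:
    restart: always
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    logging:
      driver: json-file
      options:
        max-size: 10m      # rotate at 10 MB
        max-file: "5"      # keep 5 files
  jupyter:
    restart: always
    environment:
      # ${VAR:?message} makes Compose abort with an error if VAR is unset,
      # which is how a "required" variable is enforced.
      JUPYTER_TOKEN: ${JUPYTER_TOKEN:?JUPYTER_TOKEN must be set}
```

The `${VAR:?message}` interpolation is standard Compose syntax and is the usual way to make startup fail fast on a missing secret.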

Step 1: Production env file

Create /etc/fcc/.env:

# AI provider (one of: mock, anthropic, openai, azure_openai, ollama, litellm)
FCC_DEFAULT_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-prod-...

# OR use LiteLLM for multi-backend routing
# FCC_DEFAULT_PROVIDER=litellm
# LITELLM_DEFAULT_MODEL=anthropic/claude-3-5-sonnet-20241022

# Logging
FCC_LOG_LEVEL=INFO

# Required by docker-compose.prod.yml.
# Note: .env files are read literally, so $(openssl rand -hex 32) would be
# stored as that exact string, not executed. Run the command yourself and
# paste the result:
JUPYTER_TOKEN=<paste the output of: openssl rand -hex 32>

chmod 0600 /etc/fcc/.env and ensure only the service user can read it.
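The file-permission step can be done atomically with umask so the secret never exists in a world-readable state, even briefly. A sketch, assuming a dedicated service user named fcc (adjust to your setup):

```shell
# Stage the env file with owner-only permissions from the moment it exists.
# ENV_DIR would be /etc/fcc in production.
ENV_DIR="${ENV_DIR:-/etc/fcc}"
umask 077                        # files created from here on default to 0600
mkdir -p "$ENV_DIR"
touch "$ENV_DIR/.env"            # created 0600 thanks to the umask
chmod 0600 "$ENV_DIR/.env"       # explicit, in case the file already existed
# chown fcc:fcc "$ENV_DIR/.env"  # uncomment once the service user exists
```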

Step 2: Reverse proxy with TLS

The bundled images do not include TLS termination. Front the stack with Caddy, Traefik, nginx, or your cloud provider's load balancer.

Example Caddyfile:

fcc.example.com {
    reverse_proxy localhost:8080
    encode gzip
    log {
        output file /var/log/caddy/fcc-access.log
    }
}

streamlit.fcc.example.com {
    reverse_proxy localhost:8501
}

jupyter.fcc.example.com {
    reverse_proxy localhost:8888
}

Caddy will provision Let's Encrypt certificates automatically.
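If you prefer nginx, an equivalent server block might look like this. An untested sketch: server name and certificate paths are placeholders, and the WebSocket upgrade headers matter because the frontend proxies /ws to the backend bridge:

```nginx
server {
    listen 443 ssl;
    server_name fcc.example.com;
    ssl_certificate     /etc/letsencrypt/live/fcc.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/fcc.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        # WebSocket upgrade headers so /ws traffic passes through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Unlike Caddy, nginx does not provision certificates itself; pair it with certbot or your existing cert tooling.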

Step 3: Systemd unit

/etc/systemd/system/fcc.service:

[Unit]
Description=FCC Agent Team Extension
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/fcc
EnvironmentFile=/etc/fcc/.env
ExecStart=/usr/bin/docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
ExecStop=/usr/bin/docker compose -f docker-compose.yml -f docker-compose.prod.yml down
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target

Enable and start it:

systemctl daemon-reload
systemctl enable --now fcc

Step 4: Health monitoring

The backend exposes /health over HTTP on the same port as the WebSocket bridge:

curl http://localhost:8765/health
# {"status":"ok","service":"fcc-ws-bridge"}

Wire this into your monitoring stack:

  • Prometheus blackbox exporter: probe /health every 30 s
  • Cron + alerting: a simple curl --fail check
  • Cloud LB health check: most providers support HTTP 200 = healthy
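The cron + curl option can be sketched as a small script. The endpoint URL and JSON shape are taken from the example above; the alerting command in the comment is a placeholder for whatever notifier you use:

```shell
# fcc-healthcheck.sh -- cron-friendly health check (illustrative sketch).
HEALTH_URL="${HEALTH_URL:-http://localhost:8765/health}"

# Returns 0 if the JSON body reports status ok.
body_is_ok() {
    printf '%s' "$1" | grep -q '"status":"ok"'
}

# Fetch the endpoint and evaluate the body; non-zero exit means unhealthy.
check_health() {
    body="$(curl --fail --silent --max-time 5 "$HEALTH_URL")" || return 1
    body_is_ok "$body"
}

# Example crontab entry (placeholder alert command):
# */1 * * * * /usr/local/bin/fcc-healthcheck.sh || your-alert-command
```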

Step 5: Backups

The only stateful service in v1.1.0 is JupyterLab, and its state is notebook content. Mount a volume for it:

# docker-compose.override.yml
services:
  jupyter:
    volumes:
      - /var/lib/fcc/notebooks:/app/notebooks

Back up /var/lib/fcc/notebooks with your usual tool (restic, rsync, borgbackup, etc.).

The backend is stateless in v1.1.0 — no persistent storage to back up.

Step 6: Rolling updates

Until v1.1.1 ships image registry support, updates require a rebuild:

cd /opt/fcc
git pull
make docker-build
systemctl restart fcc

To reduce downtime, restart services one at a time with docker compose up --no-deps -d <service>; add --wait if you want the command to block until the service's healthcheck reports healthy. A single-replica service is still briefly unavailable while its container restarts.
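The one-service-at-a-time restart can be scripted. A sketch: the service names are assumed from the architecture diagram, and DRY_RUN=1 prints the commands instead of running them so you can inspect the plan first:

```shell
# Restart services sequentially so at most one is down at a time.
COMPOSE="docker compose -f docker-compose.yml -f docker-compose.prod.yml"
SERVICES="${SERVICES:-backend frontend streamlit jupyter}"  # assumed names

rolling_update() {
    for svc in $SERVICES; do
        if [ -n "$DRY_RUN" ]; then
            echo "$COMPOSE up --no-deps -d $svc"
        else
            $COMPOSE up --no-deps -d "$svc" || return 1
        fi
    done
}
```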

Resource sizing

Service        Min CPU      Min RAM   Storage
backend        0.25 cores   256 MB    <100 MB
frontend       0.1 cores    64 MB     <50 MB
streamlit      0.5 cores    512 MB    <200 MB
jupyter        0.5 cores    512 MB    varies (notebook content)
Total minimum  ~1.4 cores   ~1.3 GB   ~500 MB

For production workloads add headroom: 2-4 cores and 4 GB RAM is comfortable for a small team.

Hardening checklist

  • TLS terminated at reverse proxy
  • JUPYTER_TOKEN set to a strong random value
  • .env file chmod 0600 and owned by service user
  • Reverse proxy denies /api/* (it's reserved-but-unimplemented in v1.1.0)
  • Docker socket NOT exposed to containers
  • Container runtime updated (docker --version)
  • Backup automation for notebook volume
  • Health endpoint wired to monitoring
  • Image rebuild + service restart documented as a runbook

For the full hardening guide (RBAC, secrets, container scanning, network policies), wait for v1.1.1, which adds K8s + Helm and a dedicated docs/deployment/security.md.

See also