# Local stack walkthrough

Bring up the full FCC stack — backend, frontend, Streamlit, JupyterLab — in one command using `docker compose`.
## Start

```bash
make docker-build   # Build all 4 images (~5 minutes first time, cached after)
make docker-up      # Start the stack in the background
```
Compose creates these containers:
| Container | Image | Host port |
|---|---|---|
| fcc-backend | fcc-backend:dev | 8765 (WebSocket + /health) |
| fcc-frontend | fcc-frontend:dev | 8080 (React SPA + nginx) |
| fcc-streamlit | fcc-streamlit:dev | 8501 (one Streamlit app at a time) |
| fcc-jupyter | fcc-jupyter:dev | 8888 (JupyterLab) |
## Verify
After ~30 seconds (the time it takes for the backend health check to pass and the frontend to start):
```bash
# Backend health
curl http://localhost:8765/health
# {"status":"ok","service":"fcc-ws-bridge"}

# Frontend SPA
curl -s http://localhost:8080 | head -3

# Streamlit app
curl -s http://localhost:8501/_stcore/health
# ok

# JupyterLab API
curl -s http://localhost:8888/api/status
```
Or open them in a browser:
- React frontend: http://localhost:8080
- Streamlit app: http://localhost:8501
- JupyterLab: http://localhost:8888
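The same checks can be scripted, which is handy in CI or a wait-for-ready wrapper. A minimal sketch using only the Python standard library, assuming the endpoint URL and JSON shape shown above:

```python
import json
import time
import urllib.error
import urllib.request


def is_healthy(payload: str) -> bool:
    """True if the payload matches the health shape shown above."""
    try:
        return json.loads(payload).get("status") == "ok"
    except (ValueError, AttributeError):
        return False


def wait_for_backend(url: str = "http://localhost:8765/health",
                     timeout: float = 60.0) -> bool:
    """Poll the backend health endpoint until it answers or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if is_healthy(resp.read().decode()):
                    return True
        except (urllib.error.URLError, OSError):
            pass  # backend not up yet; keep polling
        time.sleep(1)
    return False
```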
## Tail logs

Follow the combined logs for the whole stack, or for a specific service:
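The Makefile targets above wrap `docker compose`, so the compose CLI works directly against the running stack. Assuming the default compose file and the `backend` service name used elsewhere in this page:

```bash
docker compose logs -f            # all four services, interleaved
docker compose logs -f backend    # just the ws-bridge
```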
## Stop

Stop and remove the containers with `make docker-down`.
## Configure each service
### Backend AI provider

By default the backend uses the mock provider, so the stack works without any API keys. Set environment variables in a `.env` file at the repo root to switch providers:

```bash
# .env
FCC_DEFAULT_PROVIDER=ollama
OLLAMA_BASE_URL=http://host.docker.internal:11434/v1
OLLAMA_DEFAULT_MODEL=llama3.2:latest
```
The compose file picks them up automatically.
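Conceptually, provider resolution boils down to environment lookups with a mock fallback. This helper is an illustrative sketch using the variable names from the `.env` example above, not the backend's actual loader:

```python
import os


def provider_config() -> dict:
    """Resolve the AI provider from the environment, defaulting to mock.

    Sketch only: variable names come from the .env example; the defaults
    for the Ollama settings are assumptions, not values from the repo.
    """
    provider = os.getenv("FCC_DEFAULT_PROVIDER", "mock")
    cfg = {"provider": provider}
    if provider == "ollama":
        cfg["base_url"] = os.getenv("OLLAMA_BASE_URL",
                                    "http://localhost:11434/v1")
        cfg["model"] = os.getenv("OLLAMA_DEFAULT_MODEL", "llama3.2:latest")
    return cfg
```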
### Streamlit app selection
The default is persona_explorer.py. Switch to any other app from
apps/streamlit/:
Or run multiple Streamlit apps simultaneously by adding a second service
to your docker-compose.override.yml.
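A hypothetical shape for that second service; the service name, app filename, and host port below are placeholders, not taken from the repo:

```yaml
# docker-compose.override.yml (sketch; names are placeholders)
services:
  streamlit-extra:
    image: fcc-streamlit:dev
    command: streamlit run apps/streamlit/some_other_app.py --server.port 8501
    ports:
      - "8502:8501"   # second app on host port 8502
```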
### JupyterLab token

Local dev defaults to no token. That is unsafe anywhere beyond your own machine; never expose port 8888 to the internet without setting one.
The production overlay (docker-compose.prod.yml) requires a token
or fails fast at startup.
## Production overlay

For a more locked-down deployment on a single host, layer `docker-compose.prod.yml` over the base compose file.
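Assuming the base file is the default `docker-compose.yml`, the standard compose overlay invocation applies:

```bash
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```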
The overlay adds:

- `restart: always` on every service
- CPU and memory resource limits
- JSON-file log rotation (10 MB × 5 files)
- A required `JUPYTER_TOKEN` (compose fails to start if unset)
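Expressed as a compose overlay, those additions might look like the excerpt below. The specific limit values are illustrative, and the real overlay applies the settings to every service, not just the two shown:

```yaml
# docker-compose.prod.yml (illustrative excerpt, not the actual file)
services:
  backend:
    restart: always
    deploy:
      resources:
        limits:
          cpus: "1.0"     # example value
          memory: 512M    # example value
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "5"
  jupyter:
    restart: always
    environment:
      # ${VAR:?message} makes compose fail fast when VAR is unset
      JUPYTER_TOKEN: ${JUPYTER_TOKEN:?JUPYTER_TOKEN must be set}
```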
Production note: the production overlay does NOT add HTTPS, TLS certificates, or external authentication. For internet-facing deployments, terminate TLS at a reverse proxy in front of the stack (Caddy, Traefik, nginx, AWS ALB, etc.). Real production hardening documentation lands in v1.1.1 alongside the Helm chart.
## Frontend ↔ backend data flow

```mermaid
sequenceDiagram
    participant Browser
    participant Nginx as fcc-frontend (nginx)
    participant Bridge as fcc-backend (ws-bridge)
    participant Bus as EventBus
    Browser->>Nginx: GET / (load React SPA)
    Nginx-->>Browser: index.html + JS bundle
    Browser->>Nginx: WebSocket Upgrade /ws
    Nginx->>Bridge: proxy_pass /ws (Connection: upgrade)
    Bridge-->>Nginx: 101 Switching Protocols
    Nginx-->>Browser: 101 Switching Protocols (tunneled)
    Note over Bridge,Bus: Bridge subscribes to EventBus<br/>at startup via run_bridge()
    Bus->>Bridge: Event(simulation.step)
    Bridge->>Bridge: enqueue_event() → JSON serialize
    Bridge-->>Browser: WebSocket frame (JSON event)
    Browser->>Nginx: GET /health
    Nginx->>Bridge: proxy_pass /health (HTTP, not upgrade)
    Bridge-->>Nginx: 200 {"status":"ok"} via process_request callback
    Nginx-->>Browser: 200 OK
    Browser->>Nginx: GET /api/anything (reserved)
    Nginx-->>Browser: 404 {"error":"reserved"} (no proxy)
```
The /health endpoint is special: nginx forwards it to the backend, but
the backend's websockets.serve(process_request=...) callback intercepts
the request before the WebSocket upgrade and replies with HTTP 200.
This works without any HTTP framework dependency — see
src/fcc/protocols/ws_bridge.py
for the implementation.
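A minimal sketch of that interception, assuming the legacy `websockets` `process_request(path, request_headers)` signature (newer releases of the library changed it to `(connection, request)`); the echo handler is a placeholder, not the real bridge logic:

```python
import asyncio
import http
import json

HEALTH_BODY = json.dumps({"status": "ok", "service": "fcc-ws-bridge"})


def health_response(path: str):
    """Short-circuit /health with a plain HTTP reply.

    Returning None lets the WebSocket handshake proceed; returning a
    (status, headers, body) tuple answers before any upgrade happens.
    """
    if path == "/health":
        headers = [("Content-Type", "application/json")]
        return (http.HTTPStatus.OK, headers, HEALTH_BODY.encode())
    return None


async def serve_bridge(host: str = "0.0.0.0", port: int = 8765) -> None:
    """Wire the interceptor into websockets.serve (sketch only)."""
    import websockets  # assumed dependency; not needed for health_response

    async def handler(ws):
        async for message in ws:  # echo placeholder, not the real bridge
            await ws.send(message)

    async def process_request(path, request_headers):
        return health_response(path)

    async with websockets.serve(handler, host, port,
                                process_request=process_request):
        await asyncio.Future()  # run forever
```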
## Frontend → backend wiring
The nginx config inside fcc-frontend proxies WebSocket traffic from
/ws to backend:8765 via Compose's internal DNS:
```nginx
upstream fcc_backend {
    server backend:8765;
}

location /ws {
    proxy_pass http://fcc_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    ...
}
```
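The `/health` hop shown in the sequence diagram would need its own plain-HTTP location alongside `/ws`. A sketch against the same upstream, not taken from the actual config:

```nginx
location /health {
    # plain HTTP proxy: no Upgrade headers, so the backend's
    # process_request callback answers before any WebSocket handshake
    proxy_pass http://fcc_backend;
}
```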
The /api/* route is reserved-but-not-implemented in v1.1.0
(returns a documented 404 — see known limitations).
## Networking
All four services share the default Compose network. The frontend
references the backend by service name (backend), not by host. To
expose the backend to processes outside the Compose stack, add a
networks: block or expose more host ports.
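For example, a hypothetical override attaching the backend to a pre-existing external network (the `shared` network name is illustrative):

```yaml
# docker-compose.override.yml (sketch; network name is a placeholder)
services:
  backend:
    networks:
      - default   # keep the stack-internal network
      - shared    # plus an externally managed one
networks:
  shared:
    external: true
```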
## Tear down completely

```bash
make docker-down
docker image rm fcc-backend:dev fcc-frontend:dev fcc-streamlit:dev fcc-jupyter:dev
docker volume prune   # if you mounted volumes for notebook persistence
```
## See also

- Docker quickstart — build and run a single image
- Known limitations — `/api`, MCP/A2A status, registry plans
- AI providers — pointing the backend at a real LLM