# Docker quickstart
Run the FCC backend in a single container without installing Python or any of FCC's optional dependencies.
## Prerequisites

- Docker 24+ (for `docker buildx`)
- ~2 GB of free disk space for the image layers
## Build all four images
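A combined Make target is assumed here (`docker-build`, mirroring the per-image `docker-build-*` targets listed below); check the repo's Makefile for the exact name:

```shell
# Build all four images in one pass (assumed combined target name)
make docker-build
```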
Each image is built with `docker build` from its own Dockerfile in `docker/`:
| Image | Built from | Approximate size |
|---|---|---|
| `fcc-backend:dev` | `docker/Dockerfile.backend` | ~600 MB |
| `fcc-frontend:dev` | `docker/Dockerfile.frontend` | ~50 MB (nginx alpine) |
| `fcc-streamlit:dev` | `docker/Dockerfile.streamlit` | ~700 MB |
| `fcc-jupyter:dev` | `docker/Dockerfile.jupyter` | ~700 MB |
To build only one image:

```shell
make docker-build-backend
make docker-build-frontend
make docker-build-streamlit
make docker-build-jupyter
```
Or directly with docker buildx:
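A sketch of the direct invocation (the Dockerfile path and tag come from the table above, and `PYTHON_VERSION` is the build arg documented under Image security; adjust to taste):

```shell
# Build the backend image directly, pinning the Python version
docker buildx build \
  -f docker/Dockerfile.backend \
  -t fcc-backend:dev \
  --build-arg PYTHON_VERSION=3.12 \
  .
```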
## Run the backend alone
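A minimal foreground run, publishing port 8765 (the port the image's health check probes); the default `mock` provider needs no API keys:

```shell
# Run the backend in the foreground; Ctrl-C stops and removes the container
docker run --rm -p 8765:8765 fcc-backend:dev
```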
You should see startup log lines indicating the server is listening on port 8765.
In another terminal:
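A quick probe of the health endpoint (the same path the image's `HEALTHCHECK` uses):

```shell
# Should return a success response while the backend is up
curl http://127.0.0.1:8765/health
```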
## Environment variables
| Variable | Default | Purpose |
|---|---|---|
| `FCC_DEFAULT_PROVIDER` | `mock` | Auto-selected AI provider (`mock`, `anthropic`, `openai`, `azure_openai`, `ollama`, `litellm`) |
| `FCC_LOG_LEVEL` | `INFO` | `DEBUG`, `INFO`, `WARNING`, `ERROR` |
| `OPENAI_API_KEY` | (unset) | Triggers `openai` auto-detection |
| `ANTHROPIC_API_KEY` | (unset) | Triggers `anthropic` auto-detection |
| `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_ENDPOINT` | (unset) | Triggers `azure_openai` auto-detection |
| `OLLAMA_BASE_URL` | (unset) | Triggers `ollama` auto-detection (e.g. `http://host.docker.internal:11434/v1`) |
| `LITELLM_DEFAULT_MODEL` | (unset) | Triggers `litellm` auto-detection (e.g. `ollama/llama3.2`) |
Pass them with -e VAR=value or via an --env-file:
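For example (the `.env` file holds one `VAR=value` per line; values below are placeholders):

```shell
# Set individual variables inline:
docker run --rm -p 8765:8765 \
  -e FCC_DEFAULT_PROVIDER=mock \
  -e FCC_LOG_LEVEL=DEBUG \
  fcc-backend:dev

# Or load everything from a file:
docker run --rm -p 8765:8765 --env-file .env fcc-backend:dev
```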
(See .env.example in the repo root for a starting template.)
## Health check
The image declares a `HEALTHCHECK` directive that probes
`http://127.0.0.1:8765/health` every 30 seconds. Compose and other
Docker-aware runtimes can use this to detect a wedged container. (Note
that Kubernetes ignores Docker's `HEALTHCHECK` and relies on its own
liveness/readiness probes instead.)
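The directive has roughly this shape (a sketch only; the exact interval, timeout, and probe command live in `docker/Dockerfile.backend`):

```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://127.0.0.1:8765/health || exit 1
```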
## Image security

All v1.1.0 images:

- Run as non-root (UID 1001, `fccuser`)
- Pin a specific Python version via `--build-arg PYTHON_VERSION` (default 3.12)
- Use multi-stage builds where possible to keep the runtime layer slim
- Do not include build tools, git, or shells beyond what's required
The CI workflow .github/workflows/docker.yml runs docker buildx build
on every PR. Future v1.1.1+ work adds Trivy/grype scanning to the same
job.
## Pushing images yourself

v1.1.0 does not push to any registry. To push to your own registry:
```shell
docker tag fcc-backend:dev registry.example.com/fcc-backend:v1.1.0
docker push registry.example.com/fcc-backend:v1.1.0
```
For offline distribution:
```shell
docker save -o fcc-backend-v1.1.0.tar fcc-backend:dev
# Transfer the tarball, then on the target host:
docker load -i fcc-backend-v1.1.0.tar
```
## See also
- Local stack walkthrough — bring up all four services together with compose
- Known limitations — what to expect in v1.1.0