4+1 Physical View¶
The Physical View captures where FCC runs. The framework ships three deployment targets: a four-container Docker Compose stack for local development, a Helm chart for Kubernetes, and an ecosystem-level port map that coordinates 19 projects across the Ideate constellation. This page complements the Development View — Development shows the source tree, Physical shows the runtime instances.
Three diagrams follow: the Compose stack, the Helm / Kubernetes topology, and the ecosystem port map.
Docker Compose stack¶
docker-compose.yml at the repo root brings up four services built
from docker/Dockerfile.{backend,frontend,streamlit,jupyter}. The
backend runs fcc protocol ws-bridge exposing WebSockets on port
8765; the other three services are developer-facing UIs.
Figure 1 shows the Compose topology with port mappings and traffic flow.
```mermaid
flowchart TD
    User([Developer / Browser])
    subgraph Compose["docker-compose.yml"]
        Backend["backend<br/>fcc protocol ws-bridge<br/>:8765"]
        Frontend["frontend<br/>React + Vite<br/>:5173"]
        Streamlit["streamlit<br/>27 apps<br/>:8501"]
        Jupyter["jupyter<br/>23 notebooks<br/>:8888"]
    end
    User -->|HTTP| Frontend
    User -->|HTTP| Streamlit
    User -->|HTTP| Jupyter
    Frontend -->|WebSocket| Backend
    Streamlit -->|in-process<br/>python import| Backend
    Jupyter -->|in-process<br/>python import| Backend
    Backend -->|HTTP /health| Backend
```
The make docker-{build,up,down,logs,test} targets drive the stack for everyday use; docker-compose.prod.yml overlays production hardening (resource limits, read-only root filesystems).
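The stack above can be sketched in Compose form. The service names, ports, and Dockerfile paths come from this page; the build contexts, the ws-bridge command flags, and the healthcheck details are illustrative assumptions, not the repo's actual file:

```yaml
# Illustrative sketch only — not the repo's docker-compose.yml.
# Services/ports/Dockerfiles follow the diagram; flags and healthcheck are assumed.
services:
  backend:
    build: { context: ., dockerfile: docker/Dockerfile.backend }
    command: fcc protocol ws-bridge            # exposes WebSockets on :8765
    ports: ["8765:8765"]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8765/health"]
      interval: 30s
  frontend:
    build: { context: ., dockerfile: docker/Dockerfile.frontend }
    ports: ["5173:5173"]                       # React + Vite dev server
    depends_on: [backend]
  streamlit:
    build: { context: ., dockerfile: docker/Dockerfile.streamlit }
    ports: ["8501:8501"]                       # 27 Streamlit apps
  jupyter:
    build: { context: ., dockerfile: docker/Dockerfile.jupyter }
    ports: ["8888:8888"]                       # 23 notebooks
```

The Streamlit and Jupyter containers import the FCC package in-process rather than dialing the backend, which is why only the frontend declares a dependency on it here.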
Kubernetes topology¶
For production, the Helm chart at charts/fcc/ installs the same four
services as Kubernetes workloads (Deployments for backend, frontend,
and streamlit, plus a StatefulSet for jupyter), along with Services
and a single Ingress. The chart ships at v0.2.5 as of FCC v1.3.5.3
and matches appVersion: 1.3.5.3.
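Given those version numbers, the chart metadata would look roughly like this; only the version and appVersion fields are taken from the text, the rest is an illustrative sketch:

```yaml
# charts/fcc/Chart.yaml — sketch; version/appVersion from this page,
# name/description fields illustrative.
apiVersion: v2
name: fcc
description: FCC runtime services (backend, frontend, streamlit, jupyter)
type: application
version: 0.2.5        # chart version
appVersion: "1.3.5.3" # FCC release the chart deploys
```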
Figure 2 is a PlantUML deployment diagram of the chart-rendered topology.
```plantuml
@startuml
node "Kubernetes cluster" {
  node "fcc namespace" {
    artifact "Ingress\n(frontend-ingress.yaml)" as ing
    node "Deployment: backend" as db {
      component "fcc-backend Pod\n:8765 ws-bridge\n/health HTTP" as bp
    }
    node "Deployment: frontend" as df {
      component "fcc-frontend Pod\n:5173 React+Vite" as fp
    }
    node "Deployment: streamlit" as ds {
      component "fcc-streamlit Pod\n:8501" as sp
    }
    node "StatefulSet: jupyter" as dj {
      component "fcc-jupyter Pod\n:8888\nPVC /home/jovyan" as jp
    }
    artifact "Service: backend\nClusterIP :8765" as sb
    artifact "Service: frontend\nClusterIP :80 -> 5173" as sf
    artifact "Service: streamlit\nClusterIP :80 -> 8501" as ss
    artifact "Service: jupyter\nClusterIP :80 -> 8888" as sj
    artifact "ConfigMap: fcc-config" as cm
    artifact "Secret: fcc-api-keys" as sec
    artifact "ServiceAccount + RBAC" as sa
  }
}
ing --> sf
ing --> ss
ing --> sj
sf --> fp
sb --> bp
ss --> sp
sj --> jp
fp ..> sb : WebSocket
cm ..> bp : env
sec ..> bp : env
sa ..> bp : identity
@enduml
```
A values.yaml override lets operators pin image tags, turn
individual services on or off, and inject API keys via an external
Secret reference.
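A minimal override file along those lines might look like this; the key names (enabled, image.tag, existingSecret) are assumptions about the chart's values schema, so check charts/fcc/values.yaml for the actual layout:

```yaml
# my-values.yaml — illustrative override; key names are assumed,
# not confirmed against charts/fcc/values.yaml.
backend:
  enabled: true
  image:
    tag: "1.3.5.3"               # pin an exact image tag
jupyter:
  enabled: false                 # turn an individual service off
apiKeys:
  existingSecret: fcc-api-keys   # inject keys via an external Secret
```

Applied with something like `helm upgrade --install fcc charts/fcc/ -f my-values.yaml`.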
Ecosystem port map¶
FCC is one node in a 19-project ecosystem coordinated through the
Research Center's authoritative port_allocation.yaml (RC v2.0.0,
ADR-007). FCC v1.3.5.4 vendors that file under
src/fcc/data/ecosystem/port_allocation.yaml with a SHA256 drift
check in CI.
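The drift check reduces to comparing digests of the two copies. A minimal sketch, assuming nothing about FCC's actual CI script beyond "SHA256 of the vendored file must equal SHA256 of the upstream file" (the function names and paths here are hypothetical):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hex SHA256 digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_drift(vendored: Path, upstream: Path) -> bool:
    """True when the vendored copy is byte-identical to the upstream registry file."""
    return sha256_of(vendored) == sha256_of(upstream)
```

CI would call this with the vendored src/fcc/data/ecosystem/port_allocation.yaml and a fresh checkout of the NEXUS source, failing the job on a mismatch.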
Figure 3 is the ecosystem-wide port map grouped by tier.
```mermaid
flowchart TB
    subgraph T1["Tier 1 — Ecosystem partners (8)"]
        NEXUS["NEXUS<br/>3200 MCP / 3300 A2A / 3301 UX"]
        CRUCIBLE["CRUCIBLE / AOME<br/>3100 MCP+A2A"]
        PRISM["PRISM / CONSTEL<br/>3400 MCP+A2A"]
        PHOENIX["PHOENIX<br/>8200 A2A"]
        CTO["CTO<br/>8000 FastAPI"]
        SENTINEL["SENTINEL / PAOM<br/>9001 MCP + 9010-9013"]
        FORNAX["FORNAX / Distiller<br/>8002 + 5175"]
        AURORA["AURORA / Sky Parlour<br/>3001 + 5173 + 8300-8399"]
    end
    subgraph T2["Tier 2 — JV L2 libraries (2)"]
        ATHENIUM["ATHENIUM<br/>8500 library_only"]
        MNEMOSYNE["MNEMOSYNE<br/>8510 library_only"]
    end
    subgraph T3["Tier 3 — Constellation verticals (10)"]
        OPHIUCHUS["OPHIUCHUS 8600<br/>medical ontology"]
        SERPENS["SERPENS 8601<br/>medical metadata"]
        LIBRA["LIBRA 8610<br/>financial ontology"]
        CRATER["CRATER 8611<br/>financial metadata"]
        SCUTUM["SCUTUM 8620<br/>insurance ontology"]
        NORMA["NORMA 8621<br/>insurance metadata"]
        PYXIS["PYXIS 8630<br/>energy ontology"]
        VELA["VELA 8631<br/>energy metadata"]
        COLUMBA["COLUMBA 8640<br/>gov ontology"]
        CAELUM["CAELUM 8641<br/>gov metadata"]
    end
    NEXUS -.authoritative registry.-> T1
    NEXUS -.authoritative registry.-> T2
    NEXUS -.authoritative registry.-> T3
    T2 -.VocabularyProviderPlugin.-> T1
    T3 -.VocabularyProviderPlugin.-> T1
```
How the ports were rationalised¶
The constellation range moved from 8300-8341 to 8600-8641 on
2026-04-21 (RC ADR-007) to preserve AURORA's reserved
8300-8399 range. The rule is simple: every project owns a
100-port block claimed in reserved_ranges, every service inside a
project picks a unique port within its block, and every change
requires two partner-lead reviews. make audit-ports verifies
there is no conflict before a change merges.
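The audit rule described above (non-overlapping reserved blocks, every service port unique and inside its project's block) can be sketched as a small checker. The data shape is a guess at what port_allocation.yaml might decode to, not its real schema, and the function name is hypothetical:

```python
def audit_ports(projects: dict) -> list[str]:
    """projects maps name -> {"block": (lo, hi), "ports": [int, ...]}.

    Returns a list of conflict messages; an empty list means the map is clean.
    """
    errors = []
    # 1. Reserved blocks must not overlap between projects.
    items = sorted(projects.items(), key=lambda kv: kv[1]["block"])
    for (n1, p1), (n2, p2) in zip(items, items[1:]):
        if p1["block"][1] >= p2["block"][0]:
            errors.append(f"{n1} block overlaps {n2}")
    # 2. Every service port must sit inside its project's block ...
    seen: dict[int, str] = {}
    for name, p in projects.items():
        lo, hi = p["block"]
        for port in p["ports"]:
            if not lo <= port <= hi:
                errors.append(f"{name}: port {port} outside block {lo}-{hi}")
            # 3. ... and be globally unique across the ecosystem.
            if port in seen:
                errors.append(f"port {port} claimed by both {seen[port]} and {name}")
            seen[port] = name
    return errors
```

Running this over the parsed registry before merge gives the same pass/fail signal the text attributes to make audit-ports.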
The vendored port_allocation.yaml and ecosystem_projects.yaml
inside FCC are byte-identical to the NEXUS source, so any drift
fails CI. This keeps FCC's documentation (the tables in
docs/ecosystem/alignment-status.md and the port map above)
automatically aligned with the registry of record.
Production deployment of the full ecosystem is out of scope for the FCC repo itself; the Helm chart above covers FCC's own four containers, and each sibling project ships its own deployment story. The shared invariant is the port map.
See also¶
- docker-compose.yml, docker-compose.prod.yml
- charts/fcc/values.yaml and charts/fcc/templates/*.yaml
- src/fcc/data/ecosystem/port_allocation.yaml
- src/fcc/data/ecosystem/ecosystem_projects.yaml
- docs/ecosystem/alignment-status.md
- docs/deployment/docker.md, docs/deployment/kubernetes.md
- Development View
- Context Diagram