# Web Frontend — professional tutorial
Released in FCC v1.2.1. You are integrating Web Frontend into a production FCC deployment with real compliance obligations. This tutorial covers the Helm chart wiring, the K8s probe expectations (the v1.2.1 probes were tuned specifically for this subsystem's cold-start characteristics), the observability hooks, and the failure modes you need to handle.
## What this subsystem does
The Web Frontend subsystem provides the FCC WebSocket bridge that connects the EventBus to browser clients, enabling the React frontend (or any custom dashboard) to receive real-time events.
The implementation lives at `src/fcc/protocols/ws_bridge.py`. It ships with FCC core (no separate install needed) and is exercised by:

- **Notebooks 28–31** — see `notebooks/28_web_frontend_walkthrough.ipynb` for the executable walkthrough
- **Streamlit demo** — see `apps/streamlit/demo_web_frontend.py` for the interactive UI
- **Notebook 32** — the full-stack ecosystem demo wires this subsystem with the other three
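To make the bridge's role concrete, here is a minimal sketch of the fan-out pattern described above. The class and method names (`WsBridge`, `publish`, `broadcast_once`, `send_json`) are illustrative assumptions, not FCC's actual API; the real implementation lives in `src/fcc/protocols/ws_bridge.py`.

```python
import asyncio


class WsBridge:
    """Illustrative sketch of an EventBus -> WebSocket fan-out bridge.

    Names are assumptions for illustration only; see
    src/fcc/protocols/ws_bridge.py for the real implementation.
    """

    def __init__(self, maxsize: int = 100):
        # Bounded queue between the EventBus side and the broadcast loop.
        self.queue: asyncio.Queue = asyncio.Queue(maxsize=maxsize)
        self.clients: set = set()

    def publish(self, event: dict) -> None:
        # Called from the EventBus side; drop the event (with a warning)
        # rather than block when the queue is full.
        try:
            self.queue.put_nowait(event)
        except asyncio.QueueFull:
            print("WebSocket event queue full")  # real code would log a warning

    async def broadcast_once(self) -> None:
        # Pop one event and fan it out; clients that fail the send are
        # removed from the connected set (the auto-handling noted under
        # "Failure modes" below).
        event = await self.queue.get()
        dead = set()
        for client in self.clients:
            try:
                await client.send_json(event)
            except Exception:
                dead.add(client)
        self.clients -= dead
```

The bounded queue is what produces the `WebSocket event queue full` warning discussed in the failure-modes section: slow consumers cause drops rather than unbounded memory growth.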
## Focus persona: RER — Real-time Event Renderer
We anchor this professional-track tutorial on RER because it is the persona most relevant to professional use of the Web Frontend subsystem.
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
persona = registry.get("RER")
print(persona.name)
print(persona.role_title)
print(persona.riscear.role)
```
## Production checklist
When deploying web-frontend in production:
- **Resource limits** — see `charts/fcc/values.yaml` for the v1.2.1-tuned probe `initialDelaySeconds` (web-frontend = 30s; distiller, open-science, and skyparlour all inherit the backend value) and `timeoutSeconds` (HTTP probes default to 3s). Cold-start time is dominated by `sentence-transformers` and `plotly` for the Streamlit container, and by `backend.ai_cache` initialization for the WS bridge.
- **Helm chart** — `helm install fcc oci://ghcr.io/.../fcc --version 0.2.0` ships v1.2.1 of the application
- **K8s smoke test** — `.github/workflows/k8s-smoke.yml` runs on every PR; the v1.2.1 probe tuning was driven specifically by the cold-start characteristics of this subsystem's container
- **Observability** — wire the subsystem's events into your tracing/metrics stack (see `src/fcc/observability/`)
- **Failure modes** — read the relevant integration test for the subsystem to understand what failure looks like end-to-end
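The probe numbers above can be pictured as a `values.yaml` fragment. The key layout below is an assumption for illustration — check `charts/fcc/values.yaml` in your checkout for the real structure; only the 30s `initialDelaySeconds` and the 3s HTTP probe timeout come from the checklist.

```yaml
# Illustrative fragment only — key names and nesting are assumptions.
# Real values live in charts/fcc/values.yaml.
webFrontend:
  readinessProbe:
    httpGet:
      path: /healthz        # path is hypothetical
      port: http
    initialDelaySeconds: 30  # v1.2.1 cold-start tuning for web-frontend
    timeoutSeconds: 3        # HTTP probe default
```

Setting `initialDelaySeconds` below the container's real cold-start time causes restart loops, which is why the v1.2.1 tuning was driven by measured cold-start characteristics rather than defaults.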
## Failure modes
- **WebSocket bridge** — disconnections are auto-handled (clients are removed from the connected set on broadcast failure). Watch for `WebSocket event queue full` warnings; bump the queue maxsize if they persist.
- **Distiller bridge** — mock mode never fails. Real mode (with `distiller_ext` installed) can raise on schema mismatches.
- **Open Science** — gates that fail their threshold raise no exception; the failure surfaces in the `failed_gates` list of the `evaluate_fair_compliance()` report.
- **Sky-Parlour** — transformer registration is the only fail point; the bridge doesn't validate transformer return types at registration time.
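Because the Sky-Parlour bridge does not validate transformer return types at registration time, a bad transformer only fails later, at event time. One defensive pattern is to probe the transformer with a sample event before registering it. Everything here is hypothetical: `register_validated` is a helper you would write yourself, `register` stands in for the real registration call, and the expected `dict` return type is an assumption.

```python
from typing import Any, Callable


def register_validated(register: Callable[[str, Callable], None],
                       name: str,
                       transformer: Callable[[dict], Any],
                       sample_event: dict) -> None:
    """Probe a transformer with a sample event before registration.

    Hypothetical helper: `register` stands in for the real Sky-Parlour
    registration call, and the dict return type is an assumed contract.
    """
    result = transformer(sample_event)
    if not isinstance(result, dict):
        raise TypeError(
            f"transformer {name!r} returned {type(result).__name__}, expected dict"
        )
    register(name, transformer)
```

This moves the only fail point forward to registration, where an incident is cheap, instead of discovering the mismatch mid-stream in production.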
## Audit trail
The RER persona's `riscear.role` field documents the responsibility this persona holds when operating the subsystem in production. Use it as the source of truth in your incident postmortems.
## What you learned
- The Helm chart, K8s probes, and observability stack are all wired up
- Failure modes are well-defined per subsystem
- The persona R.I.S.C.E.A.R. spec is the audit trail for production responsibilities
## Verification
Run the focused test suite for this subsystem:
All tests should pass on a clean v1.2.1 install. If they don't, check that you have the optional deps from the `[full]` extras group:
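The exact commands aren't pinned in this page; a plausible invocation looks like the following, where the test path is a hypothetical guess — substitute the actual test module from your checkout.

```shell
# Hypothetical paths — adjust to your checkout's test layout.
pytest tests/protocols/test_ws_bridge.py -q

# If failures point at missing optional deps, install the [full] extras:
pip install "fcc[full]"
```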
## Next steps
- Notebook walkthrough — same flow in an executable notebook
- Streamlit demo — interactive UI version
- Full-stack ecosystem demo — all four subsystems wired together
- `src/fcc/protocols/ws_bridge.py` — the source module
- **Coverage ratchet** — what test coverage this subsystem currently has and where the v1.2.x ratchet plan is heading