
Skyparlour — professional tutorial

Released in FCC v1.2.1. You are integrating Skyparlour into a production FCC deployment with real compliance obligations. This tutorial covers the Helm chart wiring, the K8s probe expectations (the v1.2.1 probes were tuned specifically for this subsystem's cold-start characteristics), the observability hooks, and the failure modes you need to handle.

What this subsystem does

The Skyparlour subsystem provides the Sky-Parlour Visualization Bridge that transforms EventBus events into D3-friendly payloads (force graphs, Sankey diagrams, chord diagrams, heatmaps) for the React frontend.

The implementation lives at src/fcc/protocols/visualization_bridge.py and ships with FCC core (no separate install needed).
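To make the payload shapes concrete, here is a minimal, self-contained sketch of the kind of transformation the bridge performs — collapsing a stream of EventBus-style events into a D3 force-graph payload (nodes plus weighted links). The event field names (`source`, `target`) and the function name are illustrative assumptions, not the bridge's actual API.

```python
# Illustrative sketch only: the real bridge lives in
# src/fcc/protocols/visualization_bridge.py; field names here are assumed.

def events_to_force_graph(events):
    """Collapse (source, target) event pairs into a D3 force-graph payload."""
    nodes, links = {}, {}
    for ev in events:
        src, dst = ev["source"], ev["target"]
        for name in (src, dst):
            nodes.setdefault(name, {"id": name})
        key = (src, dst)
        if key in links:
            links[key]["value"] += 1          # weight repeated edges
        else:
            links[key] = {"source": src, "target": dst, "value": 1}
    return {"nodes": list(nodes.values()), "links": list(links.values())}

payload = events_to_force_graph([
    {"source": "distiller", "target": "skyparlour"},
    {"source": "distiller", "target": "skyparlour"},
    {"source": "skyparlour", "target": "frontend"},
])
print(payload["links"][0]["value"])  # 2 — the repeated edge was collapsed
```

The `{"nodes": [...], "links": [...]}` shape is what D3's force layout consumes directly on the React side.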

Focus persona: DVA — D3 Visualization Architect

We anchor this professional-track tutorial on DVA because it is the persona most relevant to professional use of the Skyparlour subsystem.

from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry

# Load every persona definition shipped with FCC core
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())

# Fetch the D3 Visualization Architect persona by its key
persona = registry.get("DVA")
print(persona.name)
print(persona.role_title)
print(persona.riscear.role)  # the R.I.S.C.E.A.R. responsibility field

Production checklist

When deploying Skyparlour in production:

  • Resource limits — see charts/fcc/values.yaml for the v1.2.1-tuned probe initialDelaySeconds (web-frontend = 30s; distiller, open-science, and skyparlour inherit the backend value) and timeoutSeconds (HTTP probes default to 3s). Cold-start time is dominated by sentence-transformers and plotly imports on the Streamlit side, and by backend .ai_cache/ initialization for the WebSocket bridge.
  • Helm chart — helm install fcc oci://ghcr.io/.../fcc --version 0.2.0 ships v1.2.1 of the application
  • K8s smoke test — .github/workflows/k8s-smoke.yml runs on every PR; the v1.2.1 probe tuning was specifically driven by the cold-start characteristics of this subsystem's container
  • Observability — wire the subsystem's events into your tracing/metrics stack (see src/fcc/observability/)
  • Failure modes — read the relevant integration test for the subsystem to understand what failure looks like end-to-end
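As a minimal illustration of the observability bullet, the sketch below counts payloads emitted per transformer with a plain counter and decorator. The real hooks live in src/fcc/observability/; the decorator name and counter here are assumptions for the sketch, not FCC's actual metrics API.

```python
from collections import Counter
from functools import wraps

# Hypothetical metrics hook: counts payloads emitted per transformer.
# The real hooks live in src/fcc/observability/; these names are assumed.
EMITTED = Counter()

def count_emissions(name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            EMITTED[name] += 1            # bump the per-transformer counter
            return result
        return wrapper
    return decorator

@count_emissions("force_graph")
def force_graph_transformer(event):
    return {"nodes": [], "links": []}     # stub payload for the sketch

force_graph_transformer({})
force_graph_transformer({})
print(EMITTED["force_graph"])  # 2
```

In production you would export such a counter to your metrics backend rather than keep it in-process, but the wrapping point — around each registered transformer — is the same.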

Failure modes

  • WebSocket bridge — disconnections are auto-handled (clients are removed from the connected set on broadcast failure). Watch for WebSocket event queue full warnings; bump the queue maxsize if persistent.
  • Distiller bridge — mock mode never fails. Real mode (with distiller_ext installed) can raise on schema mismatches.
  • Open Science — gates that fail their threshold raise no exception; the failure surfaces in the evaluate_fair_compliance() report's failed_gates list.
  • Sky-Parlour — transformer registration is the only failure point; the bridge does not validate transformer return types at registration time.
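The WebSocket bullet above — dropping clients from the connected set when a broadcast fails — can be sketched as a standalone asyncio model. This is not the bridge's actual code; the class and function names are invented for illustration.

```python
import asyncio

class FakeClient:
    """Stand-in for a WebSocket connection; alive=False simulates a drop."""
    def __init__(self, alive=True):
        self.alive = alive
        self.received = []

    async def send(self, payload):
        if not self.alive:
            raise ConnectionError("client disconnected")
        self.received.append(payload)

async def broadcast(clients, payload):
    # Mirror the documented behaviour: clients whose send() fails are
    # removed from the connected set instead of crashing the broadcast.
    dead = set()
    for client in clients:
        try:
            await client.send(payload)
        except ConnectionError:
            dead.add(client)
    clients -= dead
    return clients

good, bad = FakeClient(), FakeClient(alive=False)
clients = {good, bad}
asyncio.run(broadcast(clients, {"type": "heatmap"}))
print(len(clients))  # 1 — the dead client was removed
```

The same pattern explains the "queue full" warning in the first bullet: if broadcasts stall, the bounded event queue backs up, and raising its maxsize only buys headroom rather than fixing the slow consumer.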

Audit trail

The DVA persona's riscear.role field documents the responsibility this persona holds when operating the subsystem in production. Use it as the source of truth in your incident postmortems.

What you learned

  • The Helm chart, K8s probes, and observability stack are all wired up
  • Failure modes are well-defined per subsystem
  • The persona R.I.S.C.E.A.R. spec is the audit trail for production responsibilities

Verification

Run the focused test suite for this subsystem:

pytest tests/test_visualization_bridge.py -v

All tests should pass on a clean v1.2.1 install. If they don't, check that you have the optional deps from the [full] extras group:

pip install -e ".[full]"

Next steps