Insurance vertical — scientific tutorial¶
Released in FCC v1.2.0. You are running controlled experiments on LLM behavior in the Insurance domain. This tutorial shows how to instrument a scenario with CLEAR+ benchmarks, swap providers via the ai_config scenario override, and measure risk-classification stability across runs.
The Insurance pack in one paragraph¶
The insurance vertical pack (at src/fcc/data/verticals/insurance.yaml) contains 6 personas spanning underwriting, actuarial reserving, IFRS 17 / Solvency II reporting, claims fraud detection, reinsurance structuring, and parametric / climate risk. Headline compliance frameworks: ACORD Reference Architecture, IFRS 17, Solvency II, NAIC Model Laws.
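Before zooming in on one persona, you can enumerate the full roster programmatically. This minimal sketch uses only the VerticalRegistry calls that appear later in this tutorial:
from fcc.verticals.registry import VerticalRegistry

# Print every persona id and name in the insurance pack.
reg = VerticalRegistry.from_builtin()
pack = reg.get("insurance")
for p in pack.personas:
    print(f"{p.id}: {p.name}")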
Focus persona: PCR — Parametric & Climate Risk Designer¶
We'll anchor this tutorial on PCR because, of the six personas, it is the most relevant to a scientific audience in the Insurance domain.
from fcc.verticals.registry import VerticalRegistry

# Load the built-in packs and pull the insurance vertical.
reg = VerticalRegistry.from_builtin()
pack = reg.get("insurance")

# Select the Parametric & Climate Risk Designer persona by id.
persona = next(p for p in pack.personas if p.id == "PCR")
print(persona.name)
print(persona.risk_category or "minimal")  # falls back to "minimal" when unset

# Archetype and role metadata, when the persona carries a riscear block.
riscear = persona.riscear or {}
print("Archetype:", riscear.get("archetype"))
print("Role:", riscear.get("role"))
Experiment design¶
You want to answer questions like "does swapping Anthropic for Ollama change how PCR classifies risk?" or "does LiteLLM routing add latency variance I should report?"
The v1.1.0+ ai_config scenario override lets you pin provider/model per scenario without touching the YAML:
# scenarios/insurance_rct.yaml
scenario_id: INS-RCT
ai_config:
  provider: litellm
  model: ollama/llama3.2
  temperature: 0.0
  max_tokens: 2000
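Before running anything, a quick parse with PyYAML catches indentation mistakes in the override file. A minimal sketch; the keys simply mirror the example above:
import yaml

# Load the scenario override and confirm the ai_config keys are present.
with open("scenarios/insurance_rct.yaml") as f:
    scenario = yaml.safe_load(f)

ai_config = scenario.get("ai_config", {})
for key in ("provider", "model", "temperature", "max_tokens"):
    assert key in ai_config, f"missing ai_config key: {key}"
print(scenario["scenario_id"], ai_config["provider"], ai_config["model"])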
Run the CLEAR+ benchmark runner in --mock mode first to get a deterministic baseline:
fcc benchmark run --scenario INS-RCT --mock --output _output/benchmarks/baseline.json
Then swap to a real provider and compare:
fcc benchmark run --scenario INS-RCT --output _output/benchmarks/live.json
fcc benchmark compare baseline live
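To put a number on the latency-variance question, you can also compare the two result files directly. This is a sketch only: it assumes each benchmark JSON holds per-run records with a latency_ms field, which is a hypothetical schema; inspect your actual output first.
import json
from statistics import mean, pstdev

def latencies(path):
    # HYPOTHETICAL schema: {"runs": [{"latency_ms": ...}, ...]}; adapt to the real file.
    with open(path) as f:
        return [run["latency_ms"] for run in json.load(f)["runs"]]

base = latencies("_output/benchmarks/baseline.json")
live = latencies("_output/benchmarks/live.json")
print(f"baseline: mean={mean(base):.1f} ms, sd={pstdev(base):.1f} ms")
print(f"live:     mean={mean(live):.1f} ms, sd={pstdev(live):.1f} ms")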
Stable risk classification under model swaps¶
The AIActClassifier in FCC is deterministic — it doesn't call the LLM. But persona outputs change across providers, so downstream classifiers that inspect outputs may drift.
from fcc.compliance.classifier import AIActClassifier
classifier = AIActClassifier()
# Stable across runs because the override is data-driven, not model-driven:
risk = classifier.classify_persona(persona, vertical_domain="insurance")
assert risk.value in {"minimal", "limited", "high", "unacceptable"}
This gives you a ground-truth label you can use as a reference in your experiments.
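One way to use it: treat the classifier's label as the reference and report how often the label your downstream pipeline derives from each provider run agrees with it. The sketch below assumes you have already extracted one label per run from your _output/ traces; the run_labels list is placeholder data, not real results.
# Agreement rate of per-run labels against the deterministic reference.
reference = risk.value
run_labels = ["minimal", "minimal", "limited", "minimal"]  # placeholder only
agreement = sum(label == reference for label in run_labels) / len(run_labels)
print(f"agreement with reference: {agreement:.0%}")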
Verify what you did¶
Run the vertical test suite to make sure your changes didn't break anything:
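For example, assuming the suite follows a standard pytest layout (the path and keyword filter are guesses; adapt them to the repo):
pytest tests/ -k insurance -v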
All scientific-path steps in this tutorial leave your working tree unchanged — the pack YAML is read-only from your perspective. The only state that accumulates is in _output/ (scenario run traces) and docs/model-cards/ (if you regenerated cards).
Next steps¶
- Notebook 26 — Vertical packs tour — same flow in an executable notebook.
- Notebook 27 — Vertical packs deep dive — longer walkthrough of healthcare as an exemplar.
- Guidebook ch25 — Industry verticals — full authoring guide for your own pack.
- Book 3 ch11 — Vertical packs at enterprise scale — architectural view.
- Streamlit vertical_explorer — interactive browser for all 6 packs.
- Research note for insurance — cited standards sources behind the persona selection.