Chapter 11: Case Studies¶
This chapter presents four case studies that illustrate how FCC components work together in realistic scenarios. Each case study follows the same structure: Context, Challenge, Solution, Results, and Lessons Learned.
The quadrant chart below places the four case studies covered in this chapter on a complexity-versus-impact grid, giving a quick sense of where each investment sits.
```mermaid
quadrantChart
    title Use Case Complexity vs Impact
    x-axis Low Complexity --> High Complexity
    y-axis Low Impact --> High Impact
    quadrant-1 Strategic Investments
    quadrant-2 Quick Wins
    quadrant-3 Low Priority
    quadrant-4 Complex Foundations
    "Custom Persona (DE Team)": [0.3, 0.6]
    "CTO Ontology Integration": [0.7, 0.75]
    "Multi-Team Governance": [0.65, 0.85]
    "Design Sprint Collab": [0.45, 0.7]
```
Readers can use the chart as a rough sequencing guide: start in the lower-complexity quadrants, build muscle, then take on the strategic investments.
Case Study 1: Building a Custom Persona for a Data Engineering Team¶
Context¶
A platform engineering organization needed to standardize how its six-person data engineering team handled pipeline design reviews. The team used ad hoc checklists and tribal knowledge, leading to inconsistent review quality and missed edge cases.
Challenge¶
The team needed a persona that captured their collective expertise in a structured, repeatable format. The persona had to:
- Encode pipeline review criteria as constitution rules
- Integrate with the existing FCC workflow at the Create-to-Critique transition
- Support both batch and streaming pipeline patterns
- Produce artifacts compatible with the team's existing documentation tools
Solution¶
The team used the FCC CLI to scaffold a new persona, then defined its R.I.S.C.E.A.R. specification in YAML:
```yaml
- id: PRS
  name: Pipeline Review Specialist
  phase: Critique
  riscear:
    role: "Reviews data pipeline designs for correctness, efficiency, and resilience"
    input: "Pipeline design documents, DAG definitions, schema specifications"
    style: "Methodical, evidence-based, checklist-driven"
    constraints: "Must reference established data engineering patterns"
    expected_output: "Structured review report with severity-ranked findings"
    archetype: "Quality Guardian"
    responsibilities:
      - "Validate pipeline DAG structure"
      - "Check schema evolution compatibility"
      - "Assess error handling and retry logic"
  role_skills: ["data-modeling", "dag-design", "schema-evolution"]
  role_collaborators: ["DE", "BC", "DGS"]
  adoption_checklist:
    - "Review team's existing pipeline standards"
    - "Calibrate severity thresholds with team leads"
```
Constitution rules were added to enforce the team's hard requirements:
```yaml
doc_context:
  constitution:
    hard_stop:
      - "Never approve a pipeline without idempotency verification"
    mandatory:
      - "All reviews must check for schema backward compatibility"
    preferred:
      - "Recommend partitioning strategy for large datasets"
```
The persona was validated with `fcc validate` and integrated into the team's extended workflow graph.
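While `fcc validate` handles schema checking end to end, the gist of the check is simple. The sketch below approximates it with plain PyYAML; the required-key list is inferred from the R.I.S.C.E.A.R. fields above and the file path is hypothetical, so this is not FCC's actual validation logic:

```python
# Rough approximation of the kind of schema check `fcc validate` performs;
# the required-key set is inferred from the R.I.S.C.E.A.R. spec above.
import yaml

REQUIRED_RISCEAR_KEYS = {
    "role", "input", "style", "constraints",
    "expected_output", "archetype", "responsibilities",
}

def check_persona(path: str) -> list[str]:
    """Return a list of schema problems found in a persona YAML file."""
    with open(path) as f:
        personas = yaml.safe_load(f)
    problems = []
    for persona in personas:
        missing = REQUIRED_RISCEAR_KEYS - set(persona.get("riscear", {}))
        if missing:
            problems.append(
                f"{persona.get('id', '?')}: missing riscear keys {sorted(missing)}"
            )
    return problems

print(check_persona("personas/prs.yaml") or "OK")  # hypothetical path
```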
Results¶
- Review consistency improved measurably: the team tracked a 40% reduction in post-deployment pipeline issues over three months.
- Onboarding time for new reviewers dropped because the persona's constitution and checklist codified what had been tribal knowledge.
- The persona's `role_collaborators` field connected it to the Data Governance Specialist (DGS) and Documentation Evangelist (DE), creating automatic cross-reference links.
Lessons Learned¶
- Start with hard-stop rules that encode non-negotiable requirements; add preferred patterns iteratively as the team discovers best practices.
- Use the `adoption_checklist` field as an onboarding tool; it doubles as a setup guide for anyone adopting the persona.
- Validate early and often with `fcc validate` to catch YAML schema issues before they propagate.
Case Study 2: Integrating FCC with an Enterprise Ontology (CTO-Inspired)¶
Context¶
A technology strategy team maintained a Common Technology Ontology (CTO) with 500+ vocabulary terms organized into a 4-level hierarchy. They wanted to use FCC personas to drive ontology maturity assessments and cross-vocabulary mapping.
Challenge¶
- The ontology was stored in a proprietary format incompatible with FCC's YAML data model.
- Maturity assessments were done manually in spreadsheets with no audit trail.
- Cross-vocabulary mappings between the CTO and external standards (TOGAF, ITIL) existed but were not systematically maintained.
Solution¶
The team used the CTO bridge's three-layer architecture (see Chapter 10):
- Abstract protocols: they implemented `OntologyProvider` and `MappingProvider` for their proprietary format (a sketch of these protocols follows this list).
- Concrete bridge: a plugin translated between the proprietary format and FCC's data model, exposing vocabulary terms as FCC-compatible YAML.
- Educational data: they created sample assessments and mappings for training purposes.
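A minimal sketch of what the abstract protocol layer might look like, using `typing.Protocol`; the method names are assumptions drawn from how Chapter 10 describes the bridge, not FCC's exact signatures:

```python
# Minimal sketch of the abstract protocol layer (method names are assumed,
# not FCC's exact signatures).
from typing import Protocol

class OntologyProvider(Protocol):
    def get_term(self, term_id: str) -> dict: ...
    def list_terms(self) -> list[dict]: ...

class MappingProvider(Protocol):
    def create_mapping(self, source_term: str, target_vocabulary: str,
                       target_term: str, confidence: float) -> dict: ...
    def list_mappings(self, source_term: str) -> list[dict]: ...

# Any adapter implementing these methods satisfies the protocol, so the
# proprietary-format backend can later be swapped for a standard one
# without touching the FCC integration code.
```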
The maturity assessment workflow used a dedicated persona (Object Model Assessor) that evaluated each vocabulary term against five maturity levels. Assessment results were stored as `QualityScore` objects and tracked through the collaboration engine.
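As a hedged illustration of that loop (the level names, the criteria, and the `QualityScore` fields below are assumptions; only the class name appears in the text):

```python
# Illustrative assessment loop; the rubric and QualityScore fields are assumed.
from dataclasses import dataclass

MATURITY_LEVELS = ["Initial", "Defined", "Managed", "Measured", "Optimized"]  # assumed rubric

@dataclass
class QualityScore:
    term: str
    level: int       # index into MATURITY_LEVELS (0-4)
    rationale: str

def assess(term: str, evidence: dict) -> QualityScore:
    """Toy scoring: one maturity level per satisfied criterion."""
    criteria = ["has_definition", "has_owner", "has_examples", "has_mappings", "is_reviewed"]
    satisfied = sum(1 for c in criteria if evidence.get(c))
    level = max(satisfied - 1, 0)
    return QualityScore(term, level, f"{satisfied}/5 criteria satisfied")

score = assess("API Gateway", {"has_definition": True, "has_owner": True, "has_examples": True})
print(MATURITY_LEVELS[score.level])  # Managed
```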
Cross-vocabulary mappings were implemented using the mapping provider, which produced scored mappings with confidence levels:
```python
mapping = bridge.create_mapping(
    source_term="API Gateway",
    target_vocabulary="TOGAF",
    target_term="Technology Component",
    confidence=0.85,
)
```
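Because each mapping carries a confidence score, "coverage" can be defined as the share of terms with at least one sufficiently confident mapping rather than a raw count. A small illustrative helper (the threshold and sample data are made up):

```python
# Illustrative coverage calculation over scored mappings.
def mapping_coverage(terms: list[str], best_confidence: dict[str, float],
                     min_confidence: float = 0.7) -> float:
    """Fraction of terms whose best mapping clears the confidence floor."""
    covered = sum(1 for t in terms if best_confidence.get(t, 0.0) >= min_confidence)
    return covered / len(terms)

coverage = mapping_coverage(
    ["API Gateway", "Service Mesh", "Message Queue"],
    {"API Gateway": 0.85, "Service Mesh": 0.6},
)
print(f"{coverage:.0%}")  # 33%
```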
Results¶
- Maturity assessments became reproducible: the same persona and rubric produced consistent scores across assessors.
- Cross-vocabulary mapping coverage increased from 30% to 78% of CTO terms.
- The audit trail from the collaboration engine satisfied the organization's compliance requirements for ontology governance.
Lessons Learned¶
- The abstract protocol layer is worth the upfront investment — it allowed the team to swap their proprietary format for a standard one later without changing FCC integration code.
- Maturity assessment is most valuable when tied to quality gates that block promotion of immature vocabulary terms.
- Start with the highest-traffic vocabulary terms to demonstrate value quickly.
Case Study 3: Setting Up Governance for a Multi-Team AI Project¶
Context¶
An AI platform serving four product teams needed consistent governance across persona-driven workflows. Each team had different compliance requirements: one handled healthcare data (HIPAA), another financial data (SOX), and two operated in less regulated domains.
Challenge¶
- Governance rules varied by team, but some rules (e.g., PII handling) were universal.
- Teams wanted autonomy to define their own preferred patterns without overriding shared hard-stop rules.
- Quality gate thresholds needed to be stricter for regulated teams.
Solution¶
The team implemented a layered governance model using FCC's plugin system:
Layer 1: Shared hard-stop rules were defined in the base persona constitutions:
```yaml
hard_stop:
  - "Never include PII in training data without explicit consent"
  - "Never deploy a model without bias evaluation"
```
Layer 2: Team-specific mandatory patterns were added via governance plugins:
```python
class HIPAAGovernancePlugin:
    plugin_type = "governance"

    def register_rules(self):
        return [
            ConstitutionRule(
                id="HIPAA-001",
                name="PHI De-identification",
                description="All PHI must be de-identified before processing",
                tier=1,  # hard-stop for this team
                category="healthcare",
            ),
        ]
```
Layer 3: Team-specific quality gates adjusted thresholds:
```yaml
quality_gates:
  - id: QG-HIPAA-001
    name: PHI De-identification Verification
    persona_id: ALL
    threshold: 1.0  # 100% required for healthcare team
    checks: [phi_scan, consent_verification, audit_log]
```
The teams shared a single `PersonaRegistry` but used different `ConstitutionRegistry` configurations assembled from their governance plugins.
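A hedged sketch of that assembly step, reusing the `HIPAAGovernancePlugin` from Layer 2; the `add_rule` interface is stubbed here and may differ from FCC's actual `ConstitutionRegistry` API:

```python
# Sketch of per-team registry assembly. ConstitutionRegistry is stubbed;
# FCC's actual API may differ.
class ConstitutionRegistry:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

shared_hard_stops = [
    "Never include PII in training data without explicit consent",
    "Never deploy a model without bias evaluation",
]

healthcare_registry = ConstitutionRegistry()
for rule in shared_hard_stops:                          # Layer 1: universal rules
    healthcare_registry.add_rule(rule)
for rule in HIPAAGovernancePlugin().register_rules():   # Layer 2: team overlay
    healthcare_registry.add_rule(rule)
```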
Results¶
- All four teams used the same FCC framework with team-specific governance overlays.
- The healthcare team's HIPAA plugin caught two potential PHI exposures during the first month.
- Teams could promote best practices from "preferred" to "mandatory" without coordinating with other teams.
Lessons Learned¶
- Start with shared hard-stop rules and let teams add mandatory/preferred patterns independently.
- Use governance plugins rather than editing shared YAML files — this prevents merge conflicts and preserves team autonomy.
- Quality gate thresholds should be configurable per-team, not hard-coded.
Case Study 4: Deploying the Collaboration Engine for a Design Sprint¶
Context¶
A product team ran a week-long design sprint to define a new recommendation engine. The sprint involved three human designers and five FCC agent personas (Research Crafter, Blueprint Crafter, UI Mockup Crafter, Blueprint Validator, and Documentation Evangelist).
Challenge¶
- The team needed structured turn-taking between humans and agents.
- Deliverables had to pass quality gates before advancing to the next sprint phase.
- The sprint needed to produce a complete, auditable record for stakeholder review.
Solution¶
The team configured a collaboration session with the following setup:
```python
from fcc.collaboration.engine import CollaborationEngine
from fcc.collaboration.models import ApprovalGate, HandoffProtocol

engine = CollaborationEngine()
session = engine.create_session(
    workflow_id="extended_20",
    participants=("human-alice", "human-bob", "human-carol", "RC", "BC", "UMC", "BV", "DE"),
    gates=(
        ApprovalGate(gate_id="g-research", workflow_node_id="find-review", required_score=3.5),
        ApprovalGate(gate_id="g-blueprint", workflow_node_id="create-review", required_score=4.0),
        ApprovalGate(gate_id="g-validation", workflow_node_id="critique-review", required_score=3.5),
    ),
    handoff_protocol=HandoffProtocol(
        max_consecutive_agent_turns=2,
        auto_approve_threshold=4.5,
        escalation_threshold=2.0,
    ),
)
```
The `HandoffProtocol` ensured that agents could not take more than two consecutive turns without human review. The `auto_approve_threshold` of 4.5 meant that only exceptional deliverables (score >= 4.5) could bypass human approval.
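To make the threshold interplay concrete, here is a small sketch of the routing decision those settings imply; `route_deliverable` is illustrative, not the engine's internal logic:

```python
# Illustrative routing decision implied by the HandoffProtocol thresholds.
def route_deliverable(score: float, auto_approve: float = 4.5,
                      escalate: float = 2.0) -> str:
    if score >= auto_approve:
        return "auto-approved"   # exceptional work skips human review
    if score < escalate:
        return "escalated"       # flagged for immediate human attention
    return "human review"        # the normal path through the gate

for s in (4.7, 3.9, 1.5):
    print(s, "->", route_deliverable(s))
# 4.7 -> auto-approved
# 3.9 -> human review
# 1.5 -> escalated
```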
During the sprint, the `SharedContext` tracked evolving design decisions:

```python
engine.set_context(session.session_id, "target_users", "power users", actor="human-alice")
engine.set_context(session.session_id, "rec_algorithm", "collaborative filtering", actor="agent-RC")
```
At the end of each day, the `SessionRecorder` saved the session state. On the final day, the complete session was replayed through the event bus to generate a sprint summary report.
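A rough sketch of that daily loop; the `SessionRecorder` import path and method names below are assumptions about the API, not confirmed FCC signatures:

```python
# Sketch of end-of-day recording and final-day replay. The import path and
# the save/load method names are assumed, not confirmed FCC signatures.
from fcc.collaboration.recorder import SessionRecorder  # assumed module path

recorder = SessionRecorder(output_dir="sprint-sessions/")
recorder.save(session)  # end-of-day checkpoint

# Final day: replay recorded events into a summary report.
lines = []
for event in recorder.load(session.session_id):
    lines.append(f"{event.timestamp} {event.actor}: {event.action}")
print("\n".join(lines))
```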
Results¶
- The sprint produced a complete recommendation engine blueprint with 23 reviewed artifacts.
- All three quality gates were passed, with average scores of 4.1, 3.8, and 4.3 respectively.
- The auditable session record satisfied the stakeholder review requirement without additional documentation effort.
- The `SharedContext` history showed how design decisions evolved over the week, which proved valuable in post-sprint retrospectives.
Lessons Learned¶
- Set `max_consecutive_agent_turns` based on the complexity of deliverables: lower values for high-stakes outputs, higher for routine ones.
- Use `SharedContext` as the single source of truth for design decisions; this eliminates the need for separate decision logs.
- Replay sessions through the event bus to generate summary reports rather than writing them manually.
- The `HandoffProtocol.escalation_threshold` should be calibrated during the first sprint day and adjusted based on actual score distributions.
Key Takeaways¶
- FCC components combine to address real-world challenges across data engineering, enterprise ontology, multi-team governance, and collaborative design.
- Custom personas should start with hard-stop rules and grow their constitutions iteratively.
- Governance plugins provide team autonomy without sacrificing shared safety rails.
- The collaboration engine's session recording and replay capabilities reduce documentation overhead.
- The CTO bridge's three-layer architecture supports integration with proprietary systems while maintaining portability.