
Chapter 12: Hands-On Labs

This chapter contains 10 progressive labs that build on the concepts from previous chapters. Each lab includes an objective, prerequisites, step-by-step instructions, expected output, and a grading rubric.

Labs are designed to be completed in order, though Labs 1-3 can be done independently. Labs 4-9 build on earlier work, and Lab 10 is a capstone that integrates everything.

The Mermaid Gantt chart below sketches the ten labs across three difficulty bands — Beginner, Intermediate, Advanced — with approximate session counts on the horizontal axis.

gantt
    title Lab Exercise Progression
    dateFormat X
    axisFormat %s

    section Beginner
    Lab 1 - Your First Persona       :done, l1, 0, 1
    Lab 2 - Workflow Walkthrough      :done, l2, 1, 2
    Lab 3 - Custom Dimensions         :done, l3, 2, 3

    section Intermediate
    Lab 4 - Event-Driven Simulation   :active, l4, 3, 5
    Lab 5 - Plugin Development        :l5, 5, 7
    Lab 6 - Object Model Assessment   :l6, 7, 9
    Lab 7 - Cross-Vocab Mapping       :l7, 9, 11

    section Advanced
    Lab 8 - Governance Setup          :l8, 11, 14
    Lab 9 - Collaboration Session     :l9, 14, 17
    Lab 10 - Capstone Full Pipeline   :crit, l10, 17, 22

Intermediate and Advanced labs build on earlier artifacts, so the left-to-right ordering matters from Lab 4 onwards.


Lab 1: Your First Persona

Objective: Create a custom persona, validate it against the schema, and load it into the persona registry.

Prerequisites: FCC installed (pip install -e .), familiarity with Chapter 2.

Difficulty: Beginner

Steps

  1. Create a file my_personas.yaml with the following structure:
- id: TST
  name: Test Specialist
  phase: Critique
  riscear:
    role: "Validates test coverage and quality for all deliverables"
    input: "Source code, test suites, coverage reports"
    style: "Precise, metric-driven, systematic"
    constraints: "Must cite specific test coverage thresholds"
    expected_output: "Test quality report with coverage gaps and recommendations"
    archetype: "Quality Guardian"
    responsibilities:
      - "Analyze test coverage"
      - "Identify untested edge cases"
      - "Recommend test strategies"
    role_skills: ["testing", "coverage-analysis", "quality-metrics"]
    role_collaborators: ["DE", "BV"]
    adoption_checklist:
      - "Review project test standards"
      - "Configure coverage thresholds"
  category: core
  2. Validate the YAML using the FCC CLI:
fcc validate --dir .
  3. Load the persona into a registry programmatically:
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_yaml_files(["my_personas.yaml"])
persona = registry.get("TST")
print(f"Loaded: {persona.name} ({persona.id}), phase={persona.phase}")
print(f"Archetype: {persona.riscear.archetype}")
  4. Verify the persona has all 10 R.I.S.C.E.A.R. fields.
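To check the last step without depending on a particular registry API, you can verify the parsed YAML directly. The dictionary below mirrors the my_personas.yaml entry above (in practice it would come from yaml.safe_load); the field list is the 10 R.I.S.C.E.A.R. components used throughout this chapter.

```python
# The 10 R.I.S.C.E.A.R. fields every persona must populate.
RISCEAR_FIELDS = [
    "role", "input", "style", "constraints", "expected_output",
    "archetype", "responsibilities", "role_skills",
    "role_collaborators", "adoption_checklist",
]

# Stand-in for the parsed my_personas.yaml entry (abridged).
persona = {
    "id": "TST",
    "riscear": {
        "role": "Validates test coverage and quality for all deliverables",
        "input": "Source code, test suites, coverage reports",
        "style": "Precise, metric-driven, systematic",
        "constraints": "Must cite specific test coverage thresholds",
        "expected_output": "Test quality report with coverage gaps and recommendations",
        "archetype": "Quality Guardian",
        "responsibilities": ["Analyze test coverage"],
        "role_skills": ["testing"],
        "role_collaborators": ["DE", "BV"],
        "adoption_checklist": ["Review project test standards"],
    },
}

# A field counts as populated only if it is present and non-empty.
missing = [f for f in RISCEAR_FIELDS if not persona["riscear"].get(f)]
print("Missing fields:", missing or "none")
```

An empty `missing` list means the persona satisfies the expected-output criterion below.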

Expected Output: The persona loads without errors. All 10 R.I.S.C.E.A.R. fields are populated. The CLI validation reports no schema violations.

Rubric:

  • Persona YAML is valid (2 pts)
  • All 10 R.I.S.C.E.A.R. fields populated (3 pts)
  • Persona loads into registry successfully (3 pts)
  • Meaningful content in role and responsibilities (2 pts)


Lab 2: Workflow Walkthrough

Objective: Load the base 5-node workflow graph, traverse it, and identify the FCC phases.

Prerequisites: Lab 1 complete, familiarity with Chapter 4.

Difficulty: Beginner

Steps

  1. Load the base workflow:
from fcc._resources import get_workflows_dir
from fcc.workflow.graph import WorkflowGraph
import json

wf_path = get_workflows_dir() / "base_sequence.json"
with open(wf_path) as f:
    data = json.load(f)
graph = WorkflowGraph.from_dict(data)
  2. List all nodes and their phases:
for node in graph.nodes:
    print(f"Node: {node.id}, Phase: {node.phase}, Label: {node.label}")
  3. Traverse the graph from the first node to the last, following edges:
current = graph.nodes[0]
path = [current.id]
while graph.successors(current.id):
    current = graph.get_node(graph.successors(current.id)[0])
    path.append(current.id)
print("Traversal path:", " -> ".join(path))
  4. Identify which nodes correspond to Find, Create, and Critique phases.
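For the last step, a plain grouping pass makes the phase assignment explicit. The (node_id, phase) pairs below are illustrative stand-ins; in the lab they come from graph.nodes.

```python
from collections import defaultdict

# Hypothetical node ids; the real ones come from base_sequence.json.
nodes = [
    ("find-1", "Find"), ("find-2", "Find"),
    ("create-1", "Create"), ("create-2", "Create"),
    ("critique-1", "Critique"),
]

# Group node ids under their FCC phase.
by_phase = defaultdict(list)
for node_id, phase in nodes:
    by_phase[phase].append(node_id)

for phase in ("Find", "Create", "Critique"):
    print(f"{phase}: {', '.join(by_phase[phase])}")
```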

Expected Output: A 5-node graph with nodes mapped to FCC phases. The traversal produces a linear path through all five nodes.

Rubric:

  • Graph loads without errors (2 pts)
  • All 5 nodes listed with correct phases (3 pts)
  • Traversal produces correct linear path (3 pts)
  • Phases correctly identified (2 pts)


Lab 3: Custom Dimensions

Objective: Create a dimension profile for a persona and query its attributes.

Prerequisites: Lab 1 complete, familiarity with Chapter 3.

Difficulty: Beginner

Steps

  1. Load the dimension registry:
from fcc._resources import get_data_dir
from fcc.personas.dimensions import DimensionRegistry

dim_registry = DimensionRegistry.from_yaml(get_data_dir() / "personas" / "dimension_definitions.yaml")
print(f"Categories: {len(dim_registry.categories)}")
print(f"Total dimensions: {dim_registry.total_dimensions}")
  2. Create a dimension profile for your Test Specialist persona:
from fcc.personas.dimensions import PersonaDimensionProfile, DimensionAttribute

profile = PersonaDimensionProfile(
    persona_id="TST",
    attributes=(
        DimensionAttribute(dimension_id="analytical_depth", value="high"),
        DimensionAttribute(dimension_id="communication_style", value="technical"),
        DimensionAttribute(dimension_id="risk_tolerance", value="low"),
    ),
)
  3. Query the profile:
for attr in profile.attributes:
    print(f"{attr.dimension_id}: {attr.value}")
  4. Verify that dimension IDs exist in the registry.
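The final verification can be approximated without the registry API: model the registry as the set of known dimension IDs (a hypothetical stand-in for the contents of dimension_definitions.yaml) and check each profile attribute against it.

```python
# Hypothetical stand-in for the IDs defined in dimension_definitions.yaml.
known_dimension_ids = {"analytical_depth", "communication_style", "risk_tolerance"}

# The (dimension_id, value) pairs from the profile created above.
profile_attrs = [
    ("analytical_depth", "high"),
    ("communication_style", "technical"),
    ("risk_tolerance", "low"),
]

# Collect any dimension IDs that do not resolve to a definition.
unknown = [d for d, _ in profile_attrs if d not in known_dimension_ids]
if unknown:
    raise ValueError(f"Unknown dimension IDs: {unknown}")
print("All dimension IDs resolve.")
```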

Expected Output: A dimension profile with 3+ attributes. All dimension IDs resolve to valid definitions in the registry.

Rubric:

  • Dimension registry loads correctly (2 pts)
  • Profile created with valid dimension IDs (3 pts)
  • At least 3 meaningful attributes defined (3 pts)
  • Dimension IDs validated against registry (2 pts)


Lab 4: Event-Driven Simulation

Objective: Set up an event bus, configure a mock simulation, and capture events.

Prerequisites: Labs 1-2 complete, familiarity with Chapter 6 and Chapter 7.

Difficulty: Intermediate

Steps

  1. Create an event bus and subscribe to simulation events:
from fcc.messaging.bus import EventBus
from fcc.messaging.events import EventType

bus = EventBus()
captured = []
bus.subscribe(EventType.SIMULATION_STARTED, lambda e: captured.append(e))
bus.subscribe(EventType.TURN_COMPLETED, lambda e: captured.append(e))
bus.subscribe(EventType.SIMULATION_COMPLETED, lambda e: captured.append(e))
  2. Configure a mock simulation engine:
from fcc.simulation.engine import SimulationEngine

engine = SimulationEngine(mode="mock", event_bus=bus)
  3. Run a simulation with the base workflow and your persona registry:
from fcc.personas.registry import PersonaRegistry
from fcc._resources import get_data_dir

registry = PersonaRegistry.from_yaml_directory(get_data_dir() / "personas")
trace = engine.run(workflow_id="base_5", registry=registry)
  4. Inspect captured events:
print(f"Total events captured: {len(captured)}")
for event in captured:
    print(f"  {event.event_type.value}: {event.data.get('message', '')[:60]}")
  5. Examine the simulation trace:
print(f"Trace ID: {trace.trace_id}")
print(f"Steps: {len(trace.steps)}")

Expected Output: Multiple events captured (at minimum: SIMULATION_STARTED, one or more TURN_COMPLETED, SIMULATION_COMPLETED). A trace with steps matching the workflow nodes.

Rubric:

  • Event bus configured with subscriptions (2 pts)
  • Mock simulation runs without errors (3 pts)
  • Events captured correctly (3 pts)
  • Trace contains expected steps (2 pts)


Lab 5: Plugin Development

Objective: Create a persona plugin that registers a custom persona and integrates with the plugin system.

Prerequisites: Labs 1-4 complete, familiarity with Chapter 5.

Difficulty: Intermediate

Steps

  1. Define a persona plugin class:
from fcc.plugins.core import PersonaPlugin

class SecurityPersonaPlugin(PersonaPlugin):
    plugin_id = "security-personas"
    plugin_version = "0.1.0"

    def get_personas(self):
        return [
            {
                "id": "SAR",
                "name": "Security Auditor",
                "phase": "Critique",
                "riscear": {
                    "role": "Audits deliverables for security vulnerabilities",
                    "input": "Architecture documents, code reviews, dependency manifests",
                    "style": "Thorough, adversarial-thinking, risk-focused",
                    "constraints": "Must reference OWASP Top 10 and CWE database",
                    "expected_output": "Security audit report with severity ratings",
                    "archetype": "Guardian",
                    "responsibilities": ["Vulnerability assessment", "Threat modeling"],
                    "role_skills": ["security-audit", "threat-modeling"],
                    "role_collaborators": ["BV", "BC"],
                    "adoption_checklist": ["Review security standards"],
                },
                "category": "governance",
            }
        ]
  2. Register the plugin:
from fcc.plugins.registries import PluginRegistry

plugin_registry = PluginRegistry()
plugin_registry.register(SecurityPersonaPlugin())
  3. Verify the plugin is registered and its personas are accessible:
plugins = plugin_registry.get_by_type("personas")
print(f"Persona plugins: {len(plugins)}")
for p in plugins:
    personas = p.get_personas()
    for persona in personas:
        print(f"  {persona['id']}: {persona['name']}")
  4. Merge plugin personas into the main registry.
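The merge itself can be sketched at the dict level. The exact registry merge API isn't shown in this lab, so treat this as one possible approach, with a guard against colliding persona IDs.

```python
# Persona records keyed by id; abridged stand-ins for the main registry
# contents and the plugin's get_personas() output.
core_personas = {"TST": {"name": "Test Specialist"}}
plugin_personas = {"SAR": {"name": "Security Auditor"}}

merged = dict(core_personas)
for pid, record in plugin_personas.items():
    if pid in merged:
        # Refuse silent overwrites: persona ids must stay unique.
        raise ValueError(f"Duplicate persona id: {pid}")
    merged[pid] = record

print(sorted(merged))  # ['SAR', 'TST']
```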

Expected Output: The plugin registers successfully. The custom persona appears in the merged registry.

Rubric:

  • Plugin class correctly defined (2 pts)
  • Plugin registered without errors (2 pts)
  • Persona accessible through plugin registry (3 pts)
  • Persona merged into main registry (3 pts)


Lab 6: Object Model Assessment

Objective: Assess a toy vocabulary model's maturity using the FCC object model tooling.

Prerequisites: Labs 1-3 complete, familiarity with Chapter 10.

Difficulty: Intermediate

Steps

  1. Load the object model assessment data:
from fcc._resources import get_objectmodel_data_dir
import yaml

om_dir = get_objectmodel_data_dir()
  2. Create a toy vocabulary with 5 terms:
vocabulary = {
    "terms": [
        {"id": "T001", "name": "API Gateway", "definition": "Entry point for API requests", "maturity": 3},
        {"id": "T002", "name": "Service Mesh", "definition": "Infrastructure layer for service-to-service communication", "maturity": 2},
        {"id": "T003", "name": "Data Lake", "definition": "Centralized repository for structured and unstructured data", "maturity": 4},
        {"id": "T004", "name": "Event Bus", "definition": "Asynchronous message delivery system", "maturity": 3},
        {"id": "T005", "name": "Feature Store", "definition": "Centralized repository for ML features", "maturity": 1},
    ]
}
  3. Assess each term's maturity against the 5-level scale (Initial, Developing, Defined, Managed, Optimized).

  4. Produce an assessment report:

levels = {1: "Initial", 2: "Developing", 3: "Defined", 4: "Managed", 5: "Optimized"}
for term in vocabulary["terms"]:
    level = levels[term["maturity"]]
    print(f"{term['name']}: {level} (Level {term['maturity']})")
  5. Calculate overall vocabulary maturity as the average score.
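The overall score in the last step can be computed with the standard library; the maturity values below copy the five-term vocabulary defined above.

```python
from statistics import mean

# Maturity scores of T001-T005 from the toy vocabulary above.
maturities = [3, 2, 4, 3, 1]
overall = mean(maturities)
print(f"Overall vocabulary maturity: {overall:.1f} / 5")  # 2.6 / 5
```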

Expected Output: An assessment report listing each term with its maturity level. Overall vocabulary maturity score.

Rubric:

  • Vocabulary created with valid structure (2 pts)
  • All terms assessed against 5-level scale (3 pts)
  • Assessment report formatted correctly (3 pts)
  • Overall maturity calculated (2 pts)


Lab 7: Cross-Vocabulary Mapping

Objective: Create mappings between two vocabularies and assess mapping coverage.

Prerequisites: Lab 6 complete.

Difficulty: Intermediate

Steps

  1. Define a second vocabulary (e.g., TOGAF terms):
togaf_vocab = {
    "terms": [
        {"id": "TG001", "name": "Technology Component", "definition": "A technology element"},
        {"id": "TG002", "name": "Platform Service", "definition": "A shared technology service"},
        {"id": "TG003", "name": "Information System", "definition": "A system that manages information"},
    ]
}
  2. Create mappings between the two vocabularies:
mappings = [
    {"source": "T001", "target": "TG001", "confidence": 0.85, "relationship": "equivalent"},
    {"source": "T002", "target": "TG002", "confidence": 0.70, "relationship": "related"},
    {"source": "T003", "target": "TG003", "confidence": 0.60, "relationship": "broader"},
]
  3. Calculate mapping coverage:
source_mapped = len(set(m["source"] for m in mappings))
total_source = len(vocabulary["terms"])
coverage = source_mapped / total_source * 100
print(f"Mapping coverage: {coverage:.0f}% ({source_mapped}/{total_source})")
  4. Identify unmapped terms and assess mapping quality by average confidence.
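Both parts of the final step follow from simple set arithmetic over the data defined above (Lab 6's term IDs and this lab's mappings):

```python
# Source term IDs from the Lab 6 vocabulary.
source_ids = {"T001", "T002", "T003", "T004", "T005"}

# The mappings created above (only the fields needed here).
mappings = [
    {"source": "T001", "confidence": 0.85},
    {"source": "T002", "confidence": 0.70},
    {"source": "T003", "confidence": 0.60},
]

# Terms with no outgoing mapping, and the mean confidence of those that have one.
unmapped = sorted(source_ids - {m["source"] for m in mappings})
avg_conf = sum(m["confidence"] for m in mappings) / len(mappings)
print("Unmapped terms:", unmapped)            # ['T004', 'T005']
print(f"Average confidence: {avg_conf:.2f}")  # 0.72
```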

Expected Output: Mapping coverage percentage. List of unmapped terms. Average confidence score across all mappings.

Rubric:

  • Second vocabulary defined correctly (2 pts)
  • Mappings created with valid structure (3 pts)
  • Coverage calculated correctly (2 pts)
  • Unmapped terms identified (1 pt)
  • Average confidence computed (2 pts)


Lab 8: Governance Setup

Objective: Configure constitutions, quality gates, and compliance checks for a persona.

Prerequisites: Labs 1 and 5 complete, familiarity with Chapter 9.

Difficulty: Advanced

Steps

  1. Add a constitution to your Test Specialist persona:
doc_context:
  constitution:
    hard_stop:
      - "Never approve code with zero test coverage"
      - "Never skip security-related test cases"
    mandatory:
      - "All test reports must include coverage metrics"
      - "Regression tests required for all bug fixes"
    preferred:
      - "Use property-based testing for data transformations"
      - "Maintain test execution time under 5 minutes"
  2. Load the constitution through the registry:
from fcc.governance.constitution_registry import ConstitutionRegistry, PersonaConstitution
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_yaml_files(["my_personas.yaml"])
const_registry = ConstitutionRegistry.from_persona_registry(registry)
const = const_registry.get("TST")
print(f"Hard-stop rules: {len(const.hard_stop_rules)}")
print(f"Mandatory patterns: {len(const.mandatory_patterns)}")
print(f"Total rules: {const.total_rules}")
  3. Convert to a ConstitutionTierModel and inspect the generated rule IDs:
tier_model = const.to_tier_model()
for rule in tier_model.rules:
    print(f"  {rule.id}: {rule.name} (tier {rule.tier})")
  4. Define a quality gate for the persona:
gate = {
    "id": "QG-TST-001",
    "name": "Test Coverage Completeness",
    "persona_id": "TST",
    "threshold": 0.90,
    "checks": ["coverage_report", "regression_suite", "security_tests"],
}
  5. Validate the gate against the quality gate schema.
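The quality-gate schema isn't reproduced in this lab, so as a placeholder this sketch checks the gate from the previous step against an assumed set of required keys; the real schema may enforce more (types, ID patterns, allowed check names).

```python
# Assumed required keys; the actual quality-gate schema may differ.
REQUIRED_KEYS = {"id", "name", "persona_id", "threshold", "checks"}

gate = {
    "id": "QG-TST-001",
    "name": "Test Coverage Completeness",
    "persona_id": "TST",
    "threshold": 0.90,
    "checks": ["coverage_report", "regression_suite", "security_tests"],
}

# Structural check: all required keys present, threshold is a sane fraction.
missing = REQUIRED_KEYS - gate.keys()
assert not missing, f"Missing keys: {missing}"
assert 0.0 <= gate["threshold"] <= 1.0, "threshold must be a fraction"
print("Gate passes the structural check.")
```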

Expected Output: Constitution loaded with correct rule counts. Tier model contains correctly ID-prefixed rules. Quality gate defined and validated.

Rubric:

  • Constitution YAML correct with all 3 tiers (3 pts)
  • ConstitutionRegistry loads successfully (2 pts)
  • Tier model generates correct rule IDs (2 pts)
  • Quality gate defined with valid structure (2 pts)
  • Gate validated against schema (1 pt)


Lab 9: Collaboration Session

Objective: Run a full human-in-the-loop collaboration session with turns, gates, and scoring.

Prerequisites: Labs 1-4 and 8 complete, familiarity with Chapter 8.

Difficulty: Advanced

Steps

  1. Create a collaboration engine with an event bus:
from fcc.collaboration.engine import CollaborationEngine
from fcc.messaging.bus import EventBus

bus = EventBus()
engine = CollaborationEngine(event_bus=bus)
  2. Create a session with two approval gates:
from fcc.collaboration.models import ApprovalGate, HandoffProtocol

session = engine.create_session(
    workflow_id="base_5",
    participants=("human-reviewer", "RC", "BC"),
    gates=(
        ApprovalGate(gate_id="g-find", workflow_node_id="find-1", required_score=3.0),
        ApprovalGate(gate_id="g-create", workflow_node_id="create-1", required_score=3.5),
    ),
    handoff_protocol=HandoffProtocol(max_consecutive_agent_turns=2),
)
  3. Start the session and add turns:
engine.start_session(session.session_id)
engine.add_turn(session.session_id, "agent", "RC", "Research findings compiled")
engine.add_turn(session.session_id, "human", "human-reviewer", "Approved with minor notes")
  4. Evaluate a gate:
decision, score = engine.evaluate_gate(session.session_id, "g-find", 4.0, "human-reviewer")
print(f"Gate decision: {decision.value}, Score: {score.score}")
  5. Complete the session and save the recording:
final = engine.complete_session(session.session_id)
print(f"Final status: {final.status.value}")
print(f"Total turns: {len(final.turns)}")

from fcc.collaboration.recording import SessionRecorder
recorder = SessionRecorder()
recorder.save(final, "session_recording.json")

Expected Output: Session completes with COMPLETED status. Two turns recorded. Gate evaluated with APPROVED decision. Session saved to JSON.

Rubric:

  • Session created with correct configuration (2 pts)
  • Turns added in correct sequence (2 pts)
  • Gate evaluation produces correct decision (2 pts)
  • Session completes successfully (2 pts)
  • Recording saved and loadable (2 pts)


Lab 10: Capstone — Full Pipeline

Objective: Combine personas, workflows, simulation, governance, and collaboration into a complete end-to-end pipeline.

Prerequisites: All previous labs complete.

Difficulty: Advanced

Steps

  1. Set up the persona registry with at least 3 personas (use core personas plus your custom Test Specialist from Lab 1).

  2. Configure governance with constitutions and quality gates for all personas (from Lab 8).

  3. Create a workflow using the extended 20-node graph.

  4. Set up the event bus with subscribers for simulation, collaboration, and observability events.

  5. Run a mock simulation through the workflow, capturing traces and events.

  6. Create a collaboration session with approval gates at key transitions.

  7. Evaluate deliverables at each gate using the scoring engine.

  8. Track progress using the ProgressTracker.

  9. Save the complete session using the SessionRecorder.

  10. Generate a summary report by replaying the session through the event bus.

Integration Checklist

  • Persona registry loaded with 3+ personas
  • Constitutions defined for all personas
  • Quality gates configured for key transitions
  • Event bus capturing events from all subsystems
  • Simulation trace generated with correct node count
  • Collaboration session created with gates
  • All gates evaluated with scoring
  • Progress tracked from 0% to 100%
  • Session saved to JSON
  • Summary generated from replay

Expected Output: A complete pipeline that demonstrates the interaction between all FCC subsystems. A JSON session recording. An event log showing the full lifecycle.

Rubric:

  • Persona setup correct (1 pt)
  • Governance configured (1 pt)
  • Workflow loaded (1 pt)
  • Event bus wired correctly (1 pt)
  • Simulation runs end-to-end (2 pts)
  • Collaboration session with gates (2 pts)
  • Progress tracking works (1 pt)
  • Session saved and loadable (1 pt)

Key Takeaways

  • Labs progress from basic persona creation to full pipeline integration.
  • Each lab builds on skills from previous labs and guidebook chapters.
  • The capstone lab demonstrates how all FCC subsystems work together.
  • Use fcc validate frequently during labs to catch issues early.
  • Lab data files are available in src/fcc/data/docs/lab_exercises.yaml for programmatic access.