Challenge Prompts

Fifteen self-assessment challenges organized by difficulty. Each challenge includes a description, hints for getting started, and criteria for verifying your solution.


Difficulty Levels

  • Beginner (1--5): Focused on using existing FCC components with minimal code
  • Intermediate (6--10): Require combining multiple modules and understanding interactions
  • Advanced (11--15): Demand custom implementations, cross-module integration, and original design

Beginner Challenges

Challenge 1: Persona Scavenger Hunt

Task: Using only the PersonaRegistry API, find and list every persona whose archetype name contains the word "The". Group them by FCC phase (Find, Create, Build, Critique, Ops, Orchestration).

Hints:

  • Load the registry with PersonaRegistry.from_yaml_directory(get_personas_dir())
  • Access each persona's archetype through persona.riscear.archetype
  • Use registry.all() to iterate over all personas

Verify: Your output should list 102 personas grouped into 6 phase buckets. Every persona should have an archetype that starts with "The".
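The filter-and-group logic can be sketched in plain Python. Everything below uses hypothetical stand-in records; in the real exercise the personas would come from PersonaRegistry.from_yaml_directory(get_personas_dir()) and the archetype from persona.riscear.archetype:

```python
from collections import defaultdict

# Hypothetical stand-in persona records (id, phase, and archetype invented
# for illustration; real data comes from the PersonaRegistry).
personas = [
    {"id": "RC", "phase": "Find", "archetype": "The Researcher"},
    {"id": "BC", "phase": "Create", "archetype": "The Builder"},
    {"id": "DE", "phase": "Critique", "archetype": "The Devil's Advocate"},
    {"id": "XX", "phase": "Ops", "archetype": "Operator"},  # no "The": filtered out
]

# Group personas whose archetype contains "The" into phase buckets.
by_phase = defaultdict(list)
for p in personas:
    if "The" in p["archetype"]:
        by_phase[p["phase"]].append(p["id"])

for phase, ids in sorted(by_phase.items()):
    print(f"{phase}: {', '.join(ids)}")
```

With the real registry, the same loop runs over registry.all() and the buckets cover all six FCC phases.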


Challenge 2: Event Bus Listener

Task: Subscribe to the EventBus, run a mock simulation, and count how many events of each type are emitted during a single scenario execution.

Hints:

  • Create an EventBus and subscribe with a wildcard filter
  • Use SimulationEngine in mock mode with EventBus integration
  • Maintain a dictionary mapping EventType to count

Verify: You should see events from at least 10 different event types. The total event count should exceed 20 for even a simple scenario.
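The counting side can be sketched independently of the bus itself. The event types and the emission loop below are stand-ins; in the real exercise a wildcard subscription on the EventBus would invoke the handler during a mock SimulationEngine run:

```python
from collections import Counter

captured = []

def on_event(event_type, payload):
    # Wildcard handler: record every event type seen, regardless of payload.
    captured.append(event_type)

# Stand-in for events emitted during a mock simulation run
# (type names are invented for illustration).
for et in ["step.start", "step.end", "step.start", "persona.invoked", "step.end"]:
    on_event(et, {})

counts = Counter(captured)
for event_type, n in counts.most_common():
    print(f"{event_type}: {n}")
```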


Challenge 3: Quality Gate Explorer

Task: Load the quality gates from quality_gates.yaml and create a report showing which gates apply to each persona category. Display as a table with categories as rows and gate names as columns.

Hints:

  • Use fcc._resources to locate quality_gates.yaml
  • Parse the YAML and iterate over gate definitions
  • Match gates to categories using the gate's applies_to field

Verify: Your table should have 20 rows (one per category) and at least 30 columns (one per gate). Most cells will be empty -- gates are category-specific.
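A minimal sketch of the matching step, with the YAML parsing replaced by inline stand-in gate definitions (gate names, categories, and the applies_to shape below are assumptions for illustration):

```python
# Stand-in gate definitions mirroring the applies_to field from the hints;
# real entries would be parsed out of quality_gates.yaml.
gates = [
    {"name": "peer_review", "applies_to": ["research", "governance"]},
    {"name": "lint_clean", "applies_to": ["devops"]},
    {"name": "traceability", "applies_to": ["governance"]},
]
categories = ["research", "governance", "devops"]

# Build a category-by-gate membership table: "x" where the gate applies.
gate_names = [g["name"] for g in gates]
print("category".ljust(12) + " ".join(n.ljust(12) for n in gate_names))
table = {}
for cat in categories:
    row = ["x" if cat in g["applies_to"] else "" for g in gates]
    table[cat] = row
    print(cat.ljust(12) + " ".join(cell.ljust(12) for cell in row))
```

The real report follows the same shape, just with 20 category rows and 30+ gate columns.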


Challenge 4: Cross-Reference Mapper

Task: Using the CrossReferenceMatrix, find the persona with the most upstream dependencies and the persona with the most downstream consumers.

Hints:

  • Load cross-references with CrossReferenceMatrix.from_yaml()
  • Use matrix.upstream(persona_id) and matrix.downstream(persona_id)
  • Compare counts across all 102 personas

Verify: Champions typically have the most orchestration relationships. Core personas like BC and DE should have many upstream and downstream connections.
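The comparison reduces to a max over dependency counts. Sketched here with a hypothetical adjacency dict standing in for the CrossReferenceMatrix (the edges are invented; matrix.upstream()/matrix.downstream() would supply the real lists):

```python
# Stand-in upstream edges: consumer persona -> personas it depends on.
upstream = {
    "BC": ["RC", "DE"],
    "DE": ["RC"],
    "RC": [],
}

# Downstream is the reverse view: who consumes each persona's output.
downstream = {pid: [] for pid in upstream}
for consumer, producers in upstream.items():
    for producer in producers:
        downstream[producer].append(consumer)

most_upstream = max(upstream, key=lambda p: len(upstream[p]))
most_downstream = max(downstream, key=lambda p: len(downstream[p]))
print(f"most upstream deps: {most_upstream}, most downstream consumers: {most_downstream}")
```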


Challenge 5: Simulation Trace Reader

Task: Run a mock simulation for scenario GEN-001 and write the trace to a JSON file. Then read the JSON file back and print a summary showing each step's persona, action, and duration.

Hints:

  • Use SimulationEngine.run_workflow() with mode="mock"
  • Access trace.steps to iterate over simulation steps
  • Use json.dumps() with default=str for serialization

Verify: The trace should contain at least 5 steps corresponding to the 5-node base workflow. Each step should have a persona ID, action type, and timing data.
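The write-then-read round trip can be sketched with stand-in step records; the field names and values below are hypothetical, and a real trace would come from SimulationEngine.run_workflow(mode="mock"). The default=str hook from the hints is what lets json serialize datetime and timedelta values:

```python
import json
import os
import tempfile
from datetime import datetime, timedelta

# Hypothetical trace steps (persona, action, and timing data invented).
steps = [
    {"persona": "RC", "action": "research", "started": datetime(2024, 1, 1, 9, 0),
     "duration": timedelta(seconds=3)},
    {"persona": "BC", "action": "design", "started": datetime(2024, 1, 1, 9, 1),
     "duration": timedelta(seconds=5)},
]

# Write the trace to a JSON file; default=str stringifies the non-JSON types.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"steps": steps}, f, default=str, indent=2)
    path = f.name

# Read it back and summarize each step.
with open(path) as f:
    trace = json.load(f)
os.remove(path)

for step in trace["steps"]:
    print(f"{step['persona']:<4} {step['action']:<10} {step['duration']}")
```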


Intermediate Challenges

Challenge 6: Custom Scorer Plugin

Task: Implement a custom ScorerPlugin that evaluates document completeness by checking for required sections (Introduction, Methods, Results, Discussion). Register it and use it in a collaboration session.

Hints:

  • Extend ScorerPlugin from fcc.plugins.base
  • Implement score(deliverable) to check for section headers
  • Use CollaborationEngine to run a session with your scorer

Verify: Your scorer should return a score between 0.0 and 1.0. A document with all four sections should score 1.0. Missing sections should proportionally reduce the score.
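The scoring logic itself can be sketched without the plugin base class. This is only the score() body in plain Python, assuming Markdown-style `#` headers; wiring it into ScorerPlugin and the CollaborationEngine is left to the challenge:

```python
REQUIRED_SECTIONS = ["Introduction", "Methods", "Results", "Discussion"]

def completeness_score(text: str) -> float:
    """Return the fraction of required section headers present in the text.

    Stand-in for a ScorerPlugin.score(deliverable) implementation.
    """
    # Strip leading '#' marks so "# Methods" matches "Methods".
    headers = {line.strip().lstrip("#").strip() for line in text.splitlines()}
    present = sum(1 for section in REQUIRED_SECTIONS if section in headers)
    return present / len(REQUIRED_SECTIONS)

doc = "# Introduction\n...\n# Methods\n...\n# Results\n..."
print(completeness_score(doc))  # 0.75: Discussion is missing
```

Each missing section reduces the score by 0.25, matching the proportional criterion above.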


Challenge 7: Persona Dimension Radar Chart

Task: Load the dimension profiles for three personas (RC, BC, DE) and generate a text-based comparison showing each persona's strongest and weakest dimensions across all 9 categories.

Hints:

  • Use DimensionRegistry to load dimension definitions
  • Access persona dimension profiles through the registry
  • Compare dimension values across the 56 dimensions

Verify: Each persona should have a distinct profile. RC should score high on research-oriented dimensions. BC should score high on design-oriented dimensions. DE should score high on quality-oriented dimensions.
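Finding each persona's strongest and weakest dimensions is a max/min over the profile. The dimension names and scores below are invented stand-ins; real profiles would come from the DimensionRegistry and span all 56 dimensions:

```python
# Hypothetical dimension profiles (names and values for illustration only).
profiles = {
    "RC": {"research_depth": 0.9, "design_rigor": 0.4, "quality_focus": 0.6},
    "BC": {"research_depth": 0.5, "design_rigor": 0.9, "quality_focus": 0.6},
    "DE": {"research_depth": 0.6, "design_rigor": 0.5, "quality_focus": 0.95},
}

extremes = {}
for persona, dims in profiles.items():
    strongest = max(dims, key=dims.get)
    weakest = min(dims, key=dims.get)
    extremes[persona] = (strongest, weakest)
    print(f"{persona}: strongest={strongest}, weakest={weakest}")
```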


Challenge 8: Event Bus Replay

Task: Capture all events from a simulation run, serialize them to a file, then replay them on a new EventBus instance. Verify that the replayed events match the originals.

Hints:

  • Use EventSerializer to serialize events to JSON
  • Use EventReplay to replay from the serialized file
  • Compare event counts and types between original and replay

Verify: The replayed event stream should contain exactly the same number of events, in the same order, with the same types and payloads as the original.
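The round-trip check can be sketched with plain json in place of EventSerializer/EventReplay. The event shapes are hypothetical; a real replay would re-publish each deserialized event to a fresh EventBus:

```python
import json

# Captured events from an "original" run (shapes invented for illustration).
original = [
    {"type": "step.start", "payload": {"node": "n1"}},
    {"type": "step.end", "payload": {"node": "n1"}},
]

# Serialize, then "replay" by re-emitting each deserialized event in order.
serialized = json.dumps(original)
replayed = []
for event in json.loads(serialized):
    replayed.append(event)  # stand-in for publishing to the new bus

# Verify counts, order, types, and payloads all survive the round trip.
assert len(replayed) == len(original)
assert all(a == b for a, b in zip(replayed, original))
print("replay matches original")
```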


Challenge 9: Multi-Persona RAG Pipeline

Task: Build a RAG pipeline that accepts a question, determines which persona is best suited to answer it (using the persona search index), and then retrieves relevant documents using that persona's context.

Hints:

  • Combine PersonaSearchIndex and RAGPipeline
  • First search for the best persona, then use it for persona-aware retrieval
  • Compare results with and without persona context

Verify: The persona-aware query should return more relevant results than a generic query. The selected persona should match the domain of the question.


Challenge 10: Constitution Tier Enforcer

Task: Build a workflow that runs a simulation and halts immediately if any Tier 1 (hard-stop) constitution rule is violated. Log all Tier 2 violations with remediation deadlines. Record Tier 3 deviations as advisory notes.

Hints:

  • Use ConstitutionRegistry to load constitution rules
  • Subscribe to simulation events and check rules at each step
  • Use the EventBus to emit violation events

Verify: A simulation that intentionally violates a Tier 1 rule should halt before the next step. Tier 2 violations should be logged but not halt the simulation. Tier 3 deviations should appear only in the advisory log.
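The tier-dispatch logic can be sketched in isolation. The rule IDs, checks, and the remediation wording below are invented stand-ins; real rules come from the ConstitutionRegistry, and the step dict stands in for a simulation event:

```python
class Tier1Violation(Exception):
    """Hard stop: raised to halt the simulation before the next step."""

# Hypothetical rules keyed by tier (checks invented for illustration).
rules = [
    {"id": "C-001", "tier": 1, "check": lambda step: step["persona"] is not None},
    {"id": "C-014", "tier": 2, "check": lambda step: step["duration"] < 10},
    {"id": "C-030", "tier": 3, "check": lambda step: step["notes"] != ""},
]

tier2_log, advisory_log = [], []

def enforce(step):
    for rule in rules:
        if rule["check"](step):
            continue
        if rule["tier"] == 1:
            raise Tier1Violation(rule["id"])          # halt immediately
        elif rule["tier"] == 2:
            tier2_log.append((rule["id"], "remediation deadline"))
        else:
            advisory_log.append(rule["id"])           # advisory note only

# This step breaks the tier 2 and tier 3 rules but not the tier 1 rule.
enforce({"persona": "BC", "duration": 12, "notes": ""})
print(tier2_log, advisory_log)
```

In the challenge proper, enforce() would run as an event-bus subscriber on each simulation step.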


Advanced Challenges

Challenge 11: Federated Knowledge Graph Builder

Task: Create knowledge graphs for two separate domains (e.g., ML and Governance), register them in the federation registry, and implement cross-namespace entity resolution to find corresponding personas across domains.

Hints:

  • Build two KnowledgeGraph instances with different namespaces
  • Use FederatedKnowledgeGraph to combine them
  • Implement entity resolution using EntityResolver

Verify: Cross-namespace queries should return results from both domains. Entity resolution should correctly identify that DGS in the governance domain corresponds to governance-related nodes in the ML domain's quality processes.


Challenge 12: Custom Workflow Graph

Task: Design a 15-node workflow graph for a software security audit that uses personas from governance, DevOps, and integration categories. Define the graph in JSON, load it, and run a simulation.

Hints:

  • Study the existing workflow graph JSON format in src/fcc/data/workflows/
  • Include at least three parallel branches that converge at a review gate
  • Use WorkflowGraph.from_json() to load your graph

Verify: The workflow should execute all 15 nodes, respect the branch topology, and converge correctly. The simulation trace should show parallel execution paths.


Challenge 13: Full Documentation Generator

Task: Using the DocGenerator, generate complete documentation for a custom persona team of 5 personas. The output should include R.I.S.C.E.A.R. specification pages, cross-reference diagrams, evolution guides, and ecosystem prompts.

Hints:

  • Use DocGenerator from fcc.scaffold.doc_generator
  • The generator produces 56 files per persona
  • Customize the Jinja2 templates in src/fcc/templates/docs/

Verify: The output directory should contain 280+ files (56 per persona times 5 personas). Each file should render without Jinja2 errors and contain valid Markdown.


Challenge 14: Observability Dashboard

Task: Instrument a multi-persona simulation with the FCC observability layer. Capture span data and metrics, export them to JSON, and build a text-based dashboard showing latency percentiles, error rates, and throughput per persona.

Hints:

  • Use instrument_simulation_engine() from fcc.observability.integration
  • Collect SpanData and MetricPoint objects during the simulation
  • Calculate P50, P95, and P99 latencies from span durations

Verify: Your dashboard should display metrics for each persona involved in the simulation. Latency percentiles should be numerically correct. Error rate should be 0% for mock simulations.
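The percentile math can be sketched with the nearest-rank method. The durations below are invented; real values would come from the SpanData objects collected during the instrumented run:

```python
import math

# Hypothetical span durations in milliseconds, keyed by persona.
durations = {"RC": [12, 15, 11, 40, 13, 14, 12, 90, 13, 12]}

def percentile(values, p):
    """Nearest-rank percentile: value at ceil(p% * n) in the sorted list."""
    ordered = sorted(values)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

for persona, spans in durations.items():
    p50, p95, p99 = (percentile(spans, p) for p in (50, 95, 99))
    print(f"{persona}: p50={p50}ms p95={p95}ms p99={p99}ms")
```

Note that nearest-rank is only one percentile convention; whichever method you use, apply it consistently so the dashboard numbers are checkable.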


Challenge 15: End-to-End Integration Test

Task: Build a complete integration that chains all major FCC subsystems: load personas, build a knowledge graph, index it for search, create a RAG pipeline, run a simulation with event bus and observability, enforce constitutions, and generate documentation -- all in a single script.

Hints:

  • This is the capstone challenge -- combine everything from challenges 1--14
  • Structure the script as a pipeline with clear phase boundaries
  • Use the event bus to coordinate between subsystems

Verify: The script should complete without errors, produce a knowledge graph with 100+ nodes, a searchable index, a RAG pipeline with indexed chunks, a simulation trace, exported observability data, a constitution compliance report, and generated documentation files. Total execution time should be under 60 seconds in mock mode.