Workflow Recipes

Twelve multi-step prompt recipes for common FCC workflows. Each recipe specifies the steps, the personas involved, and a code snippet to get started.


Table of Contents

  1. Full Find-Create-Critique Cycle
  2. ML Model Evaluation Pipeline
  3. Governance Audit Workflow
  4. Knowledge Graph Construction
  5. Multi-Persona Literature Review
  6. API Specification Workflow
  7. Privacy Impact Assessment
  8. Cross-Project Federation Setup
  9. Champion-Led Research Package
  10. Deployment Readiness Review
  11. Open Science Publication Pipeline
  12. Custom Persona Design Sprint

1. Full Find-Create-Critique Cycle

The canonical FCC workflow using the three core phases.

Personas: RC, BC, DE

Steps:

  1. RC gathers research into a capability matrix and traceability matrix
  2. BC transforms research into blueprints, API specs, and data models
  3. DE reviews all deliverables against quality gates and style guides
  4. DE provides feedback to RC and BC for iteration

from fcc.simulation.engine import SimulationEngine
from fcc.personas.registry import PersonaRegistry
from fcc._resources import get_personas_dir

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
engine = SimulationEngine(registry=registry, mode="mock")

trace = engine.run_workflow(
    workflow_id="base_fcc",
    scenario_id="GEN-001",
    personas=["RC", "BC", "DE"]
)
print(f"Workflow completed: {len(trace.steps)} steps")
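The two Find-phase artifacts from step 1 can be pictured as plain data structures. This is an illustrative sketch only; the field names and IDs are assumptions, not the actual deliverable schema RC produces:

```python
# Minimal sketch of the two Find-phase artifacts as plain Python structures.
# A capability matrix maps capabilities to the sources that evidence them;
# a traceability matrix maps requirements to the capabilities that satisfy them.
capability_matrix = {
    "persona-registry": ["paper-012", "repo-scan-03"],
    "workflow-engine": ["paper-007"],
}
traceability_matrix = {
    "REQ-001": ["persona-registry"],
    "REQ-002": ["persona-registry", "workflow-engine"],
}

def untraced_requirements(trace, caps):
    """Requirements whose capabilities lack supporting evidence."""
    return [req for req, needed in trace.items()
            if any(not caps.get(c) for c in needed)]

print(untraced_requirements(traceability_matrix, capability_matrix))  # → []
```

A check like this is what DE's step-3 review effectively performs: every requirement must trace to at least one evidenced capability.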

2. ML Model Evaluation Pipeline

End-to-end model development from data sourcing through operations.

Personas: DSS, FAR, MAR, ESC, IOR, IRE, IAN, MOS

Steps:

  1. DSS discovers and evaluates data sources with provenance checks
  2. FAR designs feature store and transformation pipelines
  3. MAR selects architecture and defines training strategy
  4. ESC manages experiment tracking and hyperparameter sweeps
  5. IOR optimizes the trained model for inference (quantization, distillation)
  6. IRE generates explanations (SHAP, LIME, attention maps)
  7. IAN measures real-world impact and monitors drift
  8. MOS deploys to model registry with A/B test configuration

engine = SimulationEngine(registry=registry, mode="mock")
trace = engine.run_workflow(
    workflow_id="extended_84",
    scenario_id="ML-001",
    personas=["DSS", "FAR", "MAR", "ESC", "IOR", "IRE", "IAN", "MOS"]
)
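The quantization mentioned in step 5 can be illustrated with a minimal symmetric int8 sketch. This shows generic post-training quantization arithmetic, not the IOR persona's actual implementation:

```python
# Symmetric per-tensor int8 quantization: scale by the max absolute value,
# round to the nearest integer, and clamp to the int8 range.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, round(max_err, 4))
```

The round-trip error stays within half a quantization step (scale / 2), which is the trade-off IOR weighs against the 4x size reduction.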

3. Governance Audit Workflow

Comprehensive governance review using auditor and compliance personas.

Personas: GCA, DGS, PTE, AMS, KVC

Steps:

  1. GCA defines audit scope and selects constitution tiers to evaluate
  2. DGS reviews API contracts and data flow compliance
  3. PTE classifies data elements against privacy taxonomies
  4. AMS validates content for factual accuracy (anti-hallucination)
  5. KVC performs key-value compliance checks
  6. GCA compiles audit report with findings and remediation plan

from fcc.governance.constitution_registry import ConstitutionRegistry

const_registry = ConstitutionRegistry.load_default()
constitution = const_registry.get_constitution("GCA")
print(f"GCA constitution has {len(constitution.rules)} rules across {len(constitution.tiers)} tiers")
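Step 1's tier selection amounts to filtering rules by tier. A toy sketch with hypothetical rule IDs and tier names (the real constitution objects have their own schema):

```python
# Each rule carries a tier; an audit scope is just the set of tiers in play.
rules = [
    {"id": "GOV-01", "tier": "mandatory", "text": "All data flows documented"},
    {"id": "GOV-02", "tier": "recommended", "text": "APIs carry version headers"},
    {"id": "GOV-03", "tier": "mandatory", "text": "PII fields classified"},
]

def scope_rules(rules, tiers):
    """Select the rules GCA will evaluate in this audit."""
    return [r for r in rules if r["tier"] in tiers]

audit_scope = scope_rules(rules, {"mandatory"})
print([r["id"] for r in audit_scope])  # → ['GOV-01', 'GOV-03']
```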

4. Knowledge Graph Construction

Build a domain knowledge graph from personas, actions, and artifacts.

Personas: OA, KB, SDE, STE

Steps:

  1. OA designs the ontology schema (classes, properties, constraints)
  2. STE builds taxonomy hierarchies and concept relationships
  3. KB populates the graph with persona nodes, action edges, and artifact nodes
  4. SDE implements serializers and SPARQL query interfaces

from fcc.knowledge.builders import build_full_fcc_graph
from fcc.knowledge.serializers import OWLSerializer

graph = build_full_fcc_graph()
serializer = OWLSerializer()
owl_output = serializer.serialize(graph)
print(f"Knowledge graph: {len(graph.nodes)} nodes, {len(graph.edges)} edges")
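The node/edge structure from step 3 can be pictured as plain triples. This is a toy stand-in for illustration, not the fcc.knowledge graph model:

```python
# Persona nodes connected to artifact nodes by action edges, stored as
# (subject, predicate, object) triples.
triples = [
    ("RC", "produces", "capability_matrix"),
    ("BC", "produces", "api_spec"),
    ("DE", "reviews", "api_spec"),
]

def objects_of(triples, subject, predicate):
    """All objects reachable from a subject via a given predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of(triples, "DE", "reviews"))  # → ['api_spec']
```

SPARQL queries over the serialized OWL graph answer the same kind of question at scale.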

5. Multi-Persona Literature Review

Collaborative research synthesis using Find-phase personas.

Personas: RC, RIC, CIA, STE, RCHM

Steps:

  1. RCHM orchestrates the review by defining scope and assigning sub-tasks
  2. RC gathers primary sources and annotates with capability tags
  3. RIC builds structured research inventories with automated evaluation
  4. CIA creates searchable catalog indexes for all gathered materials
  5. STE organizes findings into a semantic taxonomy
  6. RCHM compiles the research package and hands off to BCHM

engine = SimulationEngine(registry=registry, mode="mock")
trace = engine.run_workflow(
    workflow_id="extended_20",
    scenario_id="RES-001",
    personas=["RCHM", "RC", "RIC", "CIA", "STE"]
)

6. API Specification Workflow

Design and validate API specifications with traceability.

Personas: BC, BV, TS, UMC, BCHM

Steps:

  1. BCHM orchestrates the design sprint
  2. BC creates API specifications with schemas, endpoints, and error handling
  3. UMC produces UI mockups for any developer portal pages
  4. BV validates specifications against quality gates
  5. TS builds a traceability matrix from requirements to API endpoints
  6. BCHM assembles the blueprint package

from fcc.workflow.action_engine import ActionEngine

action_engine = ActionEngine(registry=registry)
result = action_engine.run(
    persona_id="BC",
    action_type="scaffold",
    context={"task": "Design REST API for persona management"}
)
print(result.output)
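The traceability matrix TS builds in step 5 is conceptually a requirement-to-endpoint mapping with a coverage check. A sketch with hypothetical requirement IDs and endpoints:

```python
# Map each requirement to the endpoints that implement it, then flag gaps.
requirements = ["REQ-001", "REQ-002", "REQ-003"]
endpoint_trace = {
    "REQ-001": ["GET /personas", "POST /personas"],
    "REQ-002": ["GET /personas/{id}"],
}

# Requirements with no implementing endpoint fail the quality gate.
uncovered = [r for r in requirements if not endpoint_trace.get(r)]
print(uncovered)  # → ['REQ-003']
```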

7. Privacy Impact Assessment

Conduct a privacy review using privacy-focused personas.

Personas: PIA, CRM, DEO, PTE, DGS

Steps:

  1. PIA performs a Privacy Impact Assessment, identifying data flows and risks
  2. PTE classifies all data elements using privacy taxonomies
  3. CRM maps consent requirements to data processing activities
  4. DEO designs de-identification strategies for sensitive fields
  5. DGS validates the complete privacy posture against regulations

engine = SimulationEngine(registry=registry, mode="mock")
trace = engine.run_workflow(
    workflow_id="complete",
    scenario_id="PRI-001",
    personas=["PIA", "CRM", "DEO", "PTE", "DGS"]
)
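Step 4's de-identification strategies typically combine masking and keyed pseudonymization. A minimal generic sketch of both techniques, not DEO's actual toolkit:

```python
import hashlib

def mask_email(email):
    """Keep the domain, mask the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def pseudonymize(value, salt):
    """Stable keyed pseudonym: same input + salt -> same token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

print(mask_email("alice@example.org"))  # → a***@example.org
token = pseudonymize("alice@example.org", salt="s3cret")
assert token == pseudonymize("alice@example.org", salt="s3cret")
```

Pseudonymization preserves joinability across datasets (the same input always yields the same token), while masking destroys it; choosing between them per field is the substance of DEO's step.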

8. Cross-Project Federation Setup

Establish federation between two projects for shared knowledge.

Personas: OA, SDE, KB

Steps:

  1. Register both project namespaces in the federation registry
  2. OA maps vocabulary terms between projects
  3. SDE builds cross-namespace edges in the federated knowledge graph
  4. KB validates entity resolution confidence scores
  5. Configure the change tracker for ongoing synchronization

from fcc.federation.namespace import NamespaceRegistry
from fcc.federation.resolver import EntityResolver

ns_registry = NamespaceRegistry()
ns_registry.register(namespace="project_a", display_name="Project A", base_uri="https://a.example.org/")
resolver = EntityResolver(namespace_registry=ns_registry)
matches = resolver.resolve("RC", source_namespace="fcc", target_namespace="project_a")
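The confidence scores KB validates in step 4 can be illustrated with simple string similarity. This uses stdlib difflib as a stand-in; real entity resolvers usually weigh multiple attributes, not just labels:

```python
from difflib import SequenceMatcher

def match_confidence(source_label, candidates):
    """Score each candidate entity label against the source label."""
    return sorted(
        ((c, round(SequenceMatcher(None, source_label, c).ratio(), 2))
         for c in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )

scores = match_confidence("Research Collector",
                          ["Research Curator", "Release Manager"])
print(scores)
```

A threshold on these scores decides which cross-namespace matches are accepted automatically and which are queued for review.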

9. Champion-Led Research Package

Use the champion pattern to orchestrate a complete Find-phase delivery.

Personas: RCHM, RC, CIA, STE, RIC

Steps:

  1. RCHM receives the project brief and decomposes it into sub-tasks
  2. RC performs primary research with capability tagging
  3. CIA indexes all gathered artifacts for searchability
  4. STE builds semantic taxonomies from the research
  5. RIC creates structured inventories with evaluation rubrics
  6. RCHM assembles the unified research package and hands off to BCHM

champion = registry.get("RCHM")
print(f"Champion of: {champion.champion_of}")
print(f"Orchestrates: {champion.orchestrates}")

10. Deployment Readiness Review

Pre-deployment validation using DevOps and governance personas.

Personas: PBD, DVE, JUS, GCA, TS

Steps:

  1. PBD reviews pipeline definitions and deployment configurations
  2. DVE validates infrastructure-as-code and environment parity
  3. GCA runs a compliance audit against deployment constitutions
  4. TS verifies traceability from requirements through to deployed artifacts
  5. JUS plans the rollout strategy (canary, blue-green, rolling)

from fcc.workflow.action_engine import ActionEngine

action_engine = ActionEngine(registry=registry)
result = action_engine.run(
    persona_id="DVE",
    action_type="test",
    context={"task": "Validate deployment configuration for production readiness"}
)
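The canary strategy from step 5 can be expressed as a traffic-shift schedule. The field names here are assumptions for illustration, not FCC's deployment schema:

```python
# A canary rollout gradually shifts traffic to the new version, gating each
# step on health checks before proceeding.
canary_plan = {
    "strategy": "canary",
    "steps": [
        {"traffic_percent": 5, "hold_minutes": 30},
        {"traffic_percent": 25, "hold_minutes": 60},
        {"traffic_percent": 100, "hold_minutes": 0},
    ],
    "rollback_on": ["error_rate > 1%", "p99_latency > 500ms"],
}

def next_step(plan, current_percent):
    """First step beyond the currently served traffic share."""
    for step in plan["steps"]:
        if step["traffic_percent"] > current_percent:
            return step
    return None

print(next_step(canary_plan, 5))  # → {'traffic_percent': 25, 'hold_minutes': 60}
```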

11. Open Science Publication Pipeline

Prepare research outputs for open access publication with FAIR compliance.

Personas: FDS, RSN, CSL, OAA, RC

Steps:

  1. RC gathers all research artifacts and experimental data
  2. FDS validates datasets against FAIR principles (Findable, Accessible, Interoperable, Reusable)
  3. RSN verifies computational reproducibility of all results
  4. CSL formats citations and resolves DOIs for all references
  5. OAA guides repository selection, license choice, and embargo policy

engine = SimulationEngine(registry=registry, mode="mock")
trace = engine.run_workflow(
    workflow_id="complete",
    scenario_id="SCI-001",
    personas=["RC", "FDS", "RSN", "CSL", "OAA"]
)

12. Custom Persona Design Sprint

Create a new persona from scratch using FCC's design tools.

Personas: (you are the designer)

Steps:

  1. Define the R.I.S.C.E.A.R. specification (all 10 components)
  2. Create a 56-dimension profile using DimensionRegistry
  3. Add cross-reference matrix entries for upstream and downstream interactions
  4. Assign a constitution from the constitution registry (or create a custom one)
  5. Register the persona as a plugin
  6. Generate documentation using the DocGenerator
  7. Write tests and validate with fcc validate

# Scaffold a new persona from the command line:
fcc add-persona --id "MYP" --name "My Persona" --category "custom" --phase "Create"
# Then edit the generated YAML file to fill in all R.I.S.C.E.A.R. components.
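Before running fcc validate in step 7, the 56-dimension profile from step 2 can be sanity-checked with a small validator. The dimension names and the [0, 1] value range below are assumptions for illustration, not the DimensionRegistry schema:

```python
# A persona profile as a flat mapping of dimension name -> score.
def validate_profile(profile, expected_dims=56):
    """Collect structural errors instead of failing on the first one."""
    errors = []
    if len(profile) != expected_dims:
        errors.append(f"expected {expected_dims} dimensions, got {len(profile)}")
    for name, value in profile.items():
        if not 0.0 <= value <= 1.0:
            errors.append(f"{name}: {value} out of range [0, 1]")
    return errors

# Hypothetical profile: 56 placeholder dimensions, one deliberately invalid.
profile = {f"dim_{i:02d}": 0.5 for i in range(56)}
profile["dim_00"] = 1.7
print(validate_profile(profile))  # → ['dim_00: 1.7 out of range [0, 1]']
```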