Multi-Persona Pipelines

This tutorial covers the extended 20-node workflow graph, showing how integration specialists enhance the Find phase, governance personas protect the Critique phase, and stakeholder personas ensure content reaches the right audiences.

Loading the Extended Workflow

from fcc.workflow.graph import WorkflowGraph

graph = WorkflowGraph.from_json("data/workflows/extended_sequence.json")

print(f"Title: {graph.meta.title}")
print(f"Nodes: {len(graph)}")
print(f"Edges: {len(graph.edges)}")
print(f"Handoffs: {len(graph.handoffs())}")
print(f"Feedbacks: {len(graph.feedbacks())}")

Output:

Title: FCC Extended Personas Collaboration - 20 Personas
Nodes: 20
Edges: 40
Handoffs: 35
Feedbacks: 5

Compared to the base workflow (5 nodes, 11 edges), the extended workflow has 4x the nodes and nearly 4x the edges.

Integration Specialists in the Find Phase

The extended workflow adds four Find-phase personas that feed enriched data to the core RC and BC personas:

graph TD
    STE[STE: Semantic Taxonomy Engineer] -->|terminology_standards| RC[RC: Research Crafter]
    CIA[CIA: Catalog Indexer Architect] -->|indexed_assets| RC
    RIC[RIC: Research Inventory Crafter] -->|capability_data| CIA
    RIC -->|structured_data| BC[BC: Blueprint Crafter]
    CIA -->|catalog_data| BC
    STE -->|classification_schemas| DGS[DGS: Data Governance Specialist]
    RC -->|research_inventory| BC
    RC -->|requirements| TS[TS: Traceability Specialist]

What Each Integration Specialist Adds

| Persona | Input to Core | Value Added |
| --- | --- | --- |
| CIA (Catalog Indexer Architect) | Indexed assets to RC, catalog data to BC | Makes all documentation searchable and organized |
| STE (Semantic Taxonomy Engineer) | Terminology standards to RC | Ensures consistent language across all artifacts |
| RIC (Research Inventory Crafter) | Structured data to BC, capability data to CIA | Automates capability matrix creation |
| TS (Traceability Specialist) | Requirements from RC, trace updates to BC | Links every requirement to its implementation |

Explore these connections in code:

# What feeds into the Research Crafter in the extended graph?
rc_incoming = graph.incoming_edges("RC")
for edge in rc_incoming:
    print(f"  {edge.from_id} -> RC: {edge.label} ({edge.type})")

Output:

  CIA -> RC: indexed_assets (handoff)
  STE -> RC: terminology_standards (handoff)
  RB -> RC: operational_findings (feedback)
  UG -> RC: user_feedback (feedback)
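
The diagram's handoffs can also be modelled as plain tuples, independent of the fcc API, to compute any persona's enrichment inputs -- for example the Blueprint Crafter's. This is an illustration only; the real edge objects come from `WorkflowGraph`.

```python
from collections import defaultdict

# Find-phase handoffs transcribed from the diagram above as
# (source, label, target) tuples -- no fcc API needed here.
find_edges = [
    ("STE", "terminology_standards", "RC"),
    ("CIA", "indexed_assets", "RC"),
    ("RIC", "capability_data", "CIA"),
    ("RIC", "structured_data", "BC"),
    ("CIA", "catalog_data", "BC"),
    ("STE", "classification_schemas", "DGS"),
    ("RC", "research_inventory", "BC"),
    ("RC", "requirements", "TS"),
]

# Group each persona's incoming enrichment by receiver
inputs = defaultdict(list)
for src, label, dst in find_edges:
    inputs[dst].append(f"{src}:{label}")

print(inputs["BC"])
# ['RIC:structured_data', 'CIA:catalog_data', 'RC:research_inventory']
```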

Governance in the Critique Phase

The extended workflow adds a governance pipeline that validates compliance, privacy, and factual accuracy:

graph TD
    DGS[DGS: Data Governance Specialist] -->|governance_context| BC[BC: Blueprint Crafter]
    DGS -->|integration_data| PTE[PTE: Privacy Taxonomy Engineer]
    PTE -->|privacy_compliance| GCA[GCA: Governance Compliance Auditor]
    PTE -->|classification_context| AMS[AMS: Anti-fact Mitigation Specialist]
    AMS -->|validation_results| GCA
    AMS -.->|quality_feedback| BC
    BV[BV: Blueprint Validator] -->|quality_issues| DE[DE: Documentation Evangelist]
    BV -->|compliance_results| GCA
    TS[TS: Traceability Specialist] -->|compliance_gaps| GCA
    GCA -->|governance_status| CO[CO: Collaboration Orchestrator]

The Governance Pipeline

  1. DGS provides governance context to BC during creation and sends integration data to PTE
  2. PTE classifies data by sensitivity level and feeds context to both GCA and AMS
  3. AMS validates AI-generated content against authoritative sources, sends quality feedback back to BC
  4. BV validates blueprint completeness and reports quality issues to DE and compliance results to GCA
  5. TS identifies traceability gaps and reports them to GCA
  6. GCA aggregates all compliance data and produces the governance status report

Verify the auditor's inputs in code:

# What feeds into the Governance Compliance Auditor?
gca_incoming = graph.incoming_edges("GCA")
for edge in gca_incoming:
    print(f"  {edge.from_id} -> GCA: {edge.label}")

Output:

  TS -> GCA: compliance_gaps
  BV -> GCA: compliance_results
  PTE -> GCA: privacy_compliance
  AMS -> GCA: validation_results
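
The aggregation step in GCA can be sketched with hypothetical report shapes (the real fcc data model is not shown here): governance passes only when every upstream check passes.

```python
# Hypothetical report shapes for illustration; fcc's actual schema may differ.
reports = {
    "TS": {"label": "compliance_gaps", "ok": True},
    "BV": {"label": "compliance_results", "ok": True},
    "PTE": {"label": "privacy_compliance", "ok": True},
    "AMS": {"label": "validation_results", "ok": False},
}

# GCA-style aggregation: "pass" only if every upstream check passed
governance_status = "pass" if all(r["ok"] for r in reports.values()) else "fail"
failures = [f"{p}:{r['label']}" for p, r in reports.items() if not r["ok"]]
print(governance_status, failures)  # fail ['AMS:validation_results']
```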

Stakeholders Across All Phases

Stakeholder personas handle coordination, metrics, executive communication, and publishing:

graph LR
    GCA -->|governance_status| CO[CO: Collaboration Orchestrator]
    CO -->|status_updates| EC[EC: Executive Communicator]
    SMC[SMC: SAFe Metrics Crafter] -->|metrics| EC
    SMC -->|trends| RS[RS: Roadmap Synchronizer]
    RS -->|roadmap_data| EC
    EC -->|executive_packages| SCP[SCP: Stakeholder Content Publisher]
    DE -->|approved_docs| SCP
    UG -->|user_guides| SCP

The Publishing Pipeline

Three personas feed content into SCP for multi-channel distribution:

  1. EC provides executive summaries and decision briefs
  2. DE provides approved, polished documentation
  3. UG provides user guides and onboarding materials

SCP then distributes these across configured channels (wiki, PDF, intranet, email) with access controls and version tracking.

# What feeds into the Stakeholder Content Publisher?
scp_incoming = graph.incoming_edges("SCP")
for edge in scp_incoming:
    print(f"  {edge.from_id} -> SCP: {edge.label}")

Output:

  EC -> SCP: executive_packages
  DE -> SCP: approved_docs
  UG -> SCP: user_guides
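
What multi-channel distribution could look like can be sketched with a simple channel whitelist. The names `distribute` and `CHANNELS` below are illustrative assumptions, not the fcc API.

```python
# Illustrative only: SCP's real publishing API is part of fcc and not shown here.
CHANNELS = {"wiki", "pdf", "intranet", "email"}

def distribute(doc, channels, access, version):
    """Fan one document out to each configured channel with access metadata."""
    unknown = set(channels) - CHANNELS
    if unknown:
        raise ValueError(f"unconfigured channels: {sorted(unknown)}")
    return [
        {"doc": doc, "channel": c, "access": access, "version": version}
        for c in sorted(channels)
    ]

records = distribute("approved_docs", {"wiki", "email"}, access="internal", version=2)
print([r["channel"] for r in records])  # ['email', 'wiki']
```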

Comparing Traversal Orders

BFS from RC in the extended graph visits many more nodes:

order = graph.bfs_from("RC")
print(f"BFS from RC ({len(order)} nodes):")
for i, node_id in enumerate(order, 1):
    node = graph.get_node(node_id)
    print(f"  {i}. {node_id}: {node.name}")

Topological order shows the natural dependency sequence:

topo = graph.topological_order()
print(f"Topological order ({len(topo)} nodes):")
for i, node_id in enumerate(topo, 1):
    node = graph.get_node(node_id)
    print(f"  {i}. {node_id}: {node.name}")
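
For intuition, topological ordering over handoff edges can be sketched with Kahn's algorithm on a toy subset of the Find phase. This is not the fcc implementation; it illustrates why feedback edges, which form cycles, must be excluded from a topological order.

```python
from collections import deque

def topo_order(nodes, handoffs):
    """Kahn's algorithm over handoff edges only; feedback edges form
    cycles, so a valid topological order must ignore them."""
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in handoffs:
        adj[src].append(dst)
        indeg[dst] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

# Toy subset of the Find-phase handoffs from earlier in this tutorial
print(topo_order(
    ["RIC", "CIA", "STE", "RC", "BC"],
    [("RIC", "CIA"), ("CIA", "RC"), ("STE", "RC"), ("RC", "BC"), ("CIA", "BC")],
))
# ['RIC', 'STE', 'CIA', 'RC', 'BC']
```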

Running a 20-Node Simulation

from fcc.simulation.engine import SimulationEngine
from fcc.simulation.traces import summarize_traces

engine = SimulationEngine(graph=graph, max_steps=200)
history = engine.run(start_node="RC", initial_payload="Document the platform API")

traces = history.to_traces_dict()
summary = summarize_traces(traces)

print(f"Total events: {summary['event_count']}")
print(f"Unique actors: {summary['unique_actors']}")
print(f"\nActor participation:")
for actor, count in sorted(summary["actor_counts"].items(), key=lambda x: -x[1]):
    print(f"  {actor}: {count} events")

The extended workflow produces significantly more events because each additional persona creates new message paths through the graph.
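
The actor-participation summary above can be reproduced from any trace list of event dicts with an `actor` key. That shape is an assumption for illustration; the real schema is defined by `summarize_traces`.

```python
from collections import Counter

# Assumed event shape for illustration; fcc's real trace schema may differ.
traces = [
    {"actor": "RC"}, {"actor": "BC"}, {"actor": "RC"},
    {"actor": "GCA"}, {"actor": "BC"}, {"actor": "RC"},
]

actor_counts = Counter(event["actor"] for event in traces)
for actor, count in actor_counts.most_common():
    print(f"  {actor}: {count} events")
```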

Incremental Adoption

You do not need to adopt all 20 personas at once. Load specific persona YAML files to build a custom subset:

from fcc.personas.registry import PersonaRegistry

# Start with core + a few integration personas
registry = PersonaRegistry.from_yaml_files(
    "data/personas/core_personas.yaml",
    "data/personas/integration_specialists.yaml",
)

# Check what you loaded
print(f"Loaded: {registry.ids}")
# Core (5) + Integration (7) = 12 personas

Then use the extended workflow graph as before: it still functions even if not every persona in the registry matches a node in the graph, because the simulation engine only activates nodes that are present in the workflow graph.
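
The node-activation rule can be illustrated with plain sets. The IDs below are examples only; in practice, substitute the ids reported by your registry and the node ids of your graph.

```python
# Example IDs only -- substitute registry.ids and your graph's node ids.
registry_ids = {"RC", "BC", "BV", "DE", "CO", "CIA", "STE", "RIC", "TS"}
graph_ids = {"RC", "BC", "BV", "DE", "CO"}  # e.g. a smaller base graph

# Only personas that are also graph nodes will ever be activated;
# extra registry entries are simply never scheduled.
active = registry_ids & graph_ids
ignored = registry_ids - graph_ids
print(sorted(active))   # ['BC', 'BV', 'CO', 'DE', 'RC']
print(sorted(ignored))  # ['CIA', 'RIC', 'STE', 'TS']
```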

Next Steps