
Persona Composition Patterns

Duration: 60 minutes
Level: Advanced
Modules: fcc.personas, fcc.workflow, fcc.messaging, fcc.collaboration

This tutorial presents 8 reusable composition patterns for orchestrating multi-persona workflows in the FCC framework. Each pattern describes when to use it, which personas fit, and how to implement it with FCC primitives.

Prerequisites

  • Completed beginner/intermediate tutorials
  • Familiarity with PersonaRegistry, workflow graphs, event bus, and simulation engine
  • Understanding of champion personas and the FCC cycle

Overview of Patterns

#  Pattern              When to Use                                   Complexity
1  Sequential Chain     Linear pipeline with clear handoffs           Low
2  Parallel Fan-out     Independent tasks that can run concurrently   Medium
3  Hub-and-Spoke        Champion coordinating specialized team        Medium
4  Feedback Loop        Iterative refinement until quality threshold  Medium
5  Governance Gate      Compliance checkpoints in a pipeline          Medium
6  Cross-Domain Bridge  Spanning multiple persona categories          High
7  Federated Team       Cross-project collaboration                   High
8  Layered Review       Multi-tier quality assurance                  High

Pattern 1: Sequential Chain

Description: Personas execute in strict sequence. Each persona's output becomes the next persona's input. This is the simplest composition pattern and mirrors a traditional pipeline.

When to use: Tasks with clear, non-overlapping phases where each phase depends on the previous one's output.

Persona examples: RC (Research Crafter) -> BC (Blueprint Crafter) -> DE (Documentation Evangelist)

from fcc.simulation.engine import SimulationEngine
from fcc.simulation.messages import SimulationMessage

# Define the sequential chain
chain = ["RC", "BC", "DE"]

# `registry` is assumed to be a PersonaRegistry already populated with personas
engine = SimulationEngine(registry=registry, mode="deterministic")

# Execute each persona in sequence, passing output forward
context = {"topic": "Data governance framework"}
for persona_id in chain:
    message = SimulationMessage(
        sender="orchestrator",
        receiver=persona_id,
        content=f"Process: {context}",
        phase="create",
    )
    result = engine.step(message)
    context["last_output"] = result.content
    print(f"  {persona_id}: {result.content[:80]}...")

Pattern 2: Parallel Fan-out

Description: Multiple personas execute simultaneously on the same input, and their outputs are collected and merged. This pattern maximizes throughput when tasks are independent.

When to use: Comparative analysis, multi-perspective review, or any task where multiple independent viewpoints are valuable.

Persona examples: MAR (Model Architect), NNS (Neural Network Specialist), GBT (Gradient Boosted Trees), RFS (Random Forest Specialist) -- all evaluating the same dataset.

import concurrent.futures
from fcc.messaging.bus import EventBus
from fcc.messaging.events import Event, EventType

bus = EventBus()
parallel_personas = ["MAR", "NNS", "GBT", "RFS"]
input_data = {"dataset": "customer_churn", "target": "churn_flag"}

def run_persona(persona_id: str) -> dict:
    message = SimulationMessage(
        sender="orchestrator",
        receiver=persona_id,
        content=f"Evaluate model approach for: {input_data}",
        phase="create",
    )
    result = engine.step(message)
    return {"persona": persona_id, "output": result.content}

# Fan out concurrently and collect (assumes engine.step is safe to call
# from multiple threads; otherwise fall back to a sequential loop)
with concurrent.futures.ThreadPoolExecutor() as executor:
    results = list(executor.map(run_persona, parallel_personas))
for r in results:
    print(f"  {r['persona']}: {r['output'][:80]}...")

# Merge outputs
bus.publish(Event(
    event_type=EventType.SIMULATION_STEP_COMPLETED,
    source="parallel_fanout",
    payload={"results": results},
))
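The event above ships the raw result list as its payload. As a pure-Python sketch, independent of the FCC API, a small helper (the `merge_results` name is ours, not the framework's) shows one way to collapse the fan-out results into a single merged payload before publishing:

```python
# Sketch: collapse fan-out results (dicts with "persona" and "output" keys,
# as produced by run_persona above) into one merged payload.
def merge_results(results: list[dict]) -> dict:
    return {
        "personas": [r["persona"] for r in results],  # input order preserved
        "outputs": {r["persona"]: r["output"] for r in results},
        "count": len(results),
    }

merged = merge_results([
    {"persona": "MAR", "output": "transformer baseline"},
    {"persona": "GBT", "output": "boosted-tree baseline"},
])
```

The merged dict can then stand in for the raw list in the event payload.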

Pattern 3: Hub-and-Spoke

Description: A champion persona acts as the hub, delegating tasks to specialized personas (spokes) and consolidating their outputs. The champion maintains overall coherence and makes final decisions.

When to use: Complex tasks requiring coordination across multiple specialties, where a single decision-maker needs to synthesize diverse inputs.

Persona examples: RCHM (Research Crafter Champion) as hub, with RC (Research Crafter), CIA (Catalog Indexer), STE (Semantic Taxonomy Expert) as spokes.

# Champion orchestration pattern
champion_id = "RCHM"
spoke_ids = ["RC", "CIA", "STE"]

# Champion sends tasks to each spoke
spoke_results = {}
for spoke_id in spoke_ids:
    message = SimulationMessage(
        sender=champion_id,
        receiver=spoke_id,
        content="Conduct research analysis on metadata standards",
        phase="find",
    )
    result = engine.step(message)
    spoke_results[spoke_id] = result.content

# Champion synthesizes
synthesis_message = SimulationMessage(
    sender="orchestrator",
    receiver=champion_id,
    content=f"Synthesize findings: {spoke_results}",
    phase="create",
)
final = engine.step(synthesis_message)
print(f"Champion synthesis: {final.content[:200]}...")

Pattern 4: Feedback Loop

Description: Output cycles back through the Find-Create-Critique phases until a quality threshold is met or a maximum iteration count is reached. Each iteration refines the deliverable.

When to use: Tasks where initial output quality is uncertain and iterative improvement is expected, such as document drafting, code review, or design refinement.

Persona examples: RC (Find) -> BC (Create) -> DE (Critique) -> repeat until quality >= 0.8.

from fcc.collaboration.scoring import ScoringEngine

scorer = ScoringEngine()
quality_threshold = 0.8
max_iterations = 5

deliverable = ""
critique_feedback = ""
for iteration in range(max_iterations):
    # Find phase -- feed the previous critique back in so each pass refines
    find_result = engine.step(SimulationMessage(
        sender="loop", receiver="RC",
        content=f"Research for iteration {iteration + 1}. "
                f"Prior critique: {critique_feedback[:200]}",
        phase="find",
    ))

    # Create phase
    create_result = engine.step(SimulationMessage(
        sender="loop", receiver="BC",
        content=f"Build from: {find_result.content[:200]}",
        phase="create",
    ))
    deliverable = create_result.content

    # Critique phase
    critique_result = engine.step(SimulationMessage(
        sender="loop", receiver="DE",
        content=f"Review: {deliverable[:200]}",
        phase="critique",
    ))
    critique_feedback = critique_result.content

    # Score quality
    score = scorer.score_text(deliverable)
    print(f"  Iteration {iteration + 1}: score={score:.2f}")

    if score >= quality_threshold:
        print(f"  Quality threshold met at iteration {iteration + 1}")
        break

Pattern 5: Governance Gate

Description: A governance persona sits between pipeline stages and must approve the output before processing continues. If the gate rejects, the pipeline either terminates or loops back for remediation.

When to use: Regulated workflows where compliance checkpoints are mandatory, such as privacy assessments, ethics reviews, or audit trails.

Persona examples: Pipeline: POR (Pipeline Orchestrator) -> [GCA (Governance Compliance Auditor)] -> IOR (Inference Orchestrator).

from fcc.collaboration.models import ApprovalGate, ApprovalStatus

# Create a governance gate
gate = ApprovalGate(
    gate_id="compliance_check",
    gate_name="Compliance Review",
    required_approvers=("GCA",),
    status=ApprovalStatus.PENDING,
)

# Pre-gate work
pre_gate_result = engine.step(SimulationMessage(
    sender="pipeline", receiver="POR",
    content="Prepare ML pipeline for deployment",
    phase="create",
))

# Governance review
review_result = engine.step(SimulationMessage(
    sender="pipeline", receiver="GCA",
    content=f"Review for compliance: {pre_gate_result.content[:200]}",
    phase="critique",
))

# Check approval and record it on the gate (simulated keyword check;
# assumes ApprovalStatus also defines APPROVED and REJECTED members)
approved = "approved" in review_result.content.lower()
gate.status = ApprovalStatus.APPROVED if approved else ApprovalStatus.REJECTED
if approved:
    # Post-gate work
    post_gate = engine.step(SimulationMessage(
        sender="pipeline", receiver="IOR",
        content="Deploy approved pipeline",
        phase="create",
    ))
    print(f"Gate passed. Deployment: {post_gate.content[:100]}...")
else:
    print("Gate rejected. Sending back for remediation.")

Pattern 6: Cross-Domain Bridge

Description: Personas from different categories collaborate by having a bridge persona translate concepts and coordinate between domains. This prevents category silos and enables holistic solutions.

When to use: Tasks that span multiple expertise areas, such as building a data pipeline that requires data engineering, ML, and governance personas.

Persona examples: ILS (Integration Specialist) bridges between POR (Pipeline Orchestrator, data_engineering) and MOS (Model Ops Steward, ml_lifecycle).

# Bridge persona translates between domains
bridge_id = "ILS"
source_domain = "POR"  # Data engineering
target_domain = "MOS"  # ML lifecycle

# Source domain produces output
source_result = engine.step(SimulationMessage(
    sender="cross_domain", receiver=source_domain,
    content="Design data pipeline for model training",
    phase="create",
))

# Bridge translates
bridge_result = engine.step(SimulationMessage(
    sender="cross_domain", receiver=bridge_id,
    content=f"Translate for ML ops: {source_result.content[:200]}",
    phase="create",
))

# Target domain consumes translated output
target_result = engine.step(SimulationMessage(
    sender="cross_domain", receiver=target_domain,
    content=f"Operationalize: {bridge_result.content[:200]}",
    phase="create",
))
print(f"Cross-domain result: {target_result.content[:200]}...")

Pattern 7: Federated Team

Description: Personas from different federated projects collaborate through entity resolution and cross-namespace communication. Each project contributes its specialized personas while the federation layer handles vocabulary translation.

When to use: Multi-organization collaborations, cross-project integrations, or when leveraging specialized capabilities from partner projects.

Persona examples: FCC's RC + STC (Standards Compliance) + partner project's "External Analyst" resolved via federation.

from fcc.federation.registry import FederationRegistry
from fcc.federation.namespaces import NamespaceConfig
from fcc.objectmodel.mapping import VocabularyMapping

# Set up federation
federation = FederationRegistry()
federation.add_project("fcc", namespace_config=NamespaceConfig(
    namespace="fcc", prefix="fcc",
    base_uri="https://fcc.example.org/ontology/",
))
federation.add_project("partner", namespace_config=NamespaceConfig(
    namespace="partner", prefix="pp",
    base_uri="https://partner.example.org/ontology/",
))

# Map FCC personas to partner equivalents
federation.entity_resolver.add_mapping(VocabularyMapping(
    source_id="RC", source_name="Research Crafter",
    source_vocabulary="fcc",
    target_id="analyst", target_name="Research Analyst",
    target_vocabulary="partner", similarity_score=0.88,
))

# Resolve and coordinate
resolved = federation.resolve_across_projects("RC", "fcc")
for entity in resolved:
    print(f"  Federated peer: {entity.canonical_id} "
          f"(confidence={entity.confidence:.0%})")

Pattern 8: Layered Review

Description: Output passes through multiple review tiers, with each tier applying progressively stricter or more specialized criteria. Early tiers catch basic issues; later tiers handle nuanced concerns.

When to use: High-stakes deliverables requiring thorough quality assurance, such as regulatory submissions, published research, or production deployments.

Persona examples: Tier 1: DE (Documentation Evangelist, basic quality) -> Tier 2: DQR (Documentation Quality Reviewer, detailed review) -> Tier 3: GCA (Governance Compliance Auditor, compliance).

review_tiers = [
    ("DE", "Basic quality check"),
    ("DQR", "Detailed documentation review"),
    ("GCA", "Governance compliance audit"),
]

deliverable = "Initial documentation draft..."
tier_results = []

for tier_number, (tier_persona, tier_description) in enumerate(review_tiers, start=1):
    result = engine.step(SimulationMessage(
        sender="layered_review",
        receiver=tier_persona,
        content=f"Review (Tier {tier_number} - "
                f"{tier_description}): {deliverable[:200]}",
        phase="critique",
    ))
    tier_results.append({
        "tier": tier_number,
        "persona": tier_persona,
        "description": tier_description,
        "feedback": result.content,
    })
    print(f"  Tier {tier_number}: {tier_persona} - "
          f"{result.content[:80]}...")

# Review complete -- this sketch collects feedback without gating on it
print(f"Completed {len(tier_results)} review tiers")
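The loop above records feedback from every tier unconditionally. When later tiers are expensive, a fail-fast variant stops at the first rejection. In the sketch below, review_tier() is a hypothetical stand-in for the engine.step() critique call:

```python
# Fail-fast layered review: a rejection at any tier halts the pipeline.
# review_tier() is a hypothetical stub standing in for an engine.step() critique.
def review_tier(persona: str, deliverable: str) -> tuple[bool, str]:
    return True, f"{persona}: looks good"  # stub always approves

def layered_review(tiers, deliverable):
    for number, (persona, description) in enumerate(tiers, start=1):
        passed, feedback = review_tier(persona, deliverable)
        print(f"  Tier {number} ({description}): {feedback}")
        if not passed:
            return False, f"Rejected at tier {number} by {persona}"
    return True, "All tiers passed"

ok, summary = layered_review(
    [("DE", "Basic quality"), ("DQR", "Detailed review"), ("GCA", "Compliance")],
    "Initial documentation draft...",
)
```

Replacing the stub with real critique calls gives the same tiers as above, but with early termination on rejection.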

Choosing the Right Pattern

Situation                      Recommended Pattern
Clear sequential steps         Sequential Chain
Independent parallel analysis  Parallel Fan-out
Team needs coordinator         Hub-and-Spoke
Quality-driven refinement      Feedback Loop
Compliance requirements        Governance Gate
Multi-domain expertise needed  Cross-Domain Bridge
Cross-organization work        Federated Team
High-stakes deliverables       Layered Review

Patterns can be combined. For example, a Hub-and-Spoke with Governance Gates at each spoke, or a Federated Team using Parallel Fan-out within each project.
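The first combination can be sketched in plain Python. Here run_spoke() and gate_check() are hypothetical stand-ins for the engine.step() delegation and the GCA review shown earlier:

```python
# Sketch: Hub-and-Spoke combined with a Governance Gate at each spoke.
# Plain functions stand in for engine.step() round-trips.
def run_spoke(spoke_id: str, task: str) -> str:
    """Stand-in for delegating a task to a spoke persona."""
    return f"{spoke_id} output for: {task}"

def gate_check(output: str) -> bool:
    """Stand-in for a GCA compliance review of one spoke's output."""
    return "forbidden" not in output.lower()

def hub_and_spoke_with_gates(champion_id: str, spoke_ids: list[str], task: str) -> str:
    approved_outputs = {}
    for spoke_id in spoke_ids:
        output = run_spoke(spoke_id, task)
        if gate_check(output):  # Governance Gate applied per spoke
            approved_outputs[spoke_id] = output
        else:
            print(f"  Gate rejected {spoke_id}; excluded from synthesis")
    # Champion synthesizes only gate-approved outputs
    return f"{champion_id} synthesis of {sorted(approved_outputs)}"

result = hub_and_spoke_with_gates("RCHM", ["RC", "CIA", "STE"], "metadata standards")
```

The same shape generalizes to the second example: each federated project runs its own Parallel Fan-out, and the federation layer merges the per-project results.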

Summary

In this tutorial you learned 8 composition patterns:

  1. Sequential Chain -- linear pipeline with clear handoffs
  2. Parallel Fan-out -- concurrent execution with merged outputs
  3. Hub-and-Spoke -- champion coordinating specialized team members
  4. Feedback Loop -- iterative refinement until quality threshold
  5. Governance Gate -- compliance checkpoints with approve/reject
  6. Cross-Domain Bridge -- spanning categories with translator personas
  7. Federated Team -- cross-project collaboration via entity resolution
  8. Layered Review -- multi-tier progressive quality assurance

Next Steps