Chapter 5: The Workflow System

Overview

The workflow system answers two questions: when do personas act, and what do they do when activated? The "when" is defined by workflow graphs -- directed graphs of nodes and edges. The "what" is defined by action types and executed by the action engine.

This chapter covers both halves and the machinery that connects them.

The flowchart below shows a two-cycle Find-Create-Critique loop with explicit feedback and pass edges, ending in an approved deliverable.

flowchart LR
    F1[Find]:::find --> C1[Create]:::create
    C1 --> CR1[Critique]:::critique
    CR1 -->|feedback| F1
    CR1 -->|pass| F2[Find]:::find
    F2 --> C2[Create]:::create
    C2 --> CR2[Critique]:::critique
    CR2 -->|feedback| C1
    CR2 -->|approve| D[Deliverable]:::done

    classDef find fill:#4CAF50,color:#fff
    classDef create fill:#2196F3,color:#fff
    classDef critique fill:#FF9800,color:#fff
    classDef done fill:#9C27B0,color:#fff

This dual-feedback shape is what lets the same graph accommodate both narrow revisions ("tweak this block of code") and structural ones ("rethink the scope").

The second flowchart drills into ActionEngine.run() itself, showing how a single persona activation flows from event emission through prompt generation to result capture.

flowchart TB
    subgraph ActionEngine["ActionEngine.run() Lifecycle"]
        direction TB
        E1[Emit action.started] --> R[Resolve Persona + Action]
        R --> P[Generate Prompts from R.I.S.C.E.A.R.]
        P --> X{AI Client?}
        X -->|Yes| AI[Send to AI Provider]
        X -->|No| M[Return Mock Result]
        AI --> AR[ActionResult]
        M --> AR
        AR --> E2[Emit action.completed]
    end

Because the mock branch produces the same ActionResult shape as the AI branch, deterministic tests and production traces look identical to downstream consumers.

Workflow Graphs

A workflow graph is a JSON file containing three sections: meta (identity), nodes (persona activation points), and edges (connections between nodes). The WorkflowGraph class in src/fcc/workflow/graph.py loads, validates, and traverses these graphs.

The Four Graph Sizes

The framework ships four built-in graphs, each suited to a different scale of engagement:

| Graph             | File                   | Nodes | Edges | Use Case                                    |
|-------------------|------------------------|-------|-------|---------------------------------------------|
| Base sequence     | base_sequence.json     | 5     | ~6    | Quick prototyping and tutorials             |
| Extended sequence | extended_sequence.json | 20    | ~30   | Standard multi-persona projects             |
| Complete          | complete_24.json       | 24    | ~40   | Full governance with all core personas      |
| Extended-84       | extended_84.json       | 55    | ~90   | Enterprise-scale with all 102 core personas |

Two additional EAIFC graphs (solution-level) support cross-project orchestration where multiple FCC instances coordinate.

Graph Structure

Each graph is a JSON document:

{
  "meta": {
    "id": "base_sequence",
    "title": "Base FCC Sequence",
    "description": "Minimal 5-node Find-Create-Critique cycle"
  },
  "nodes": [
    {"id": "N1", "name": "Requirements Gathering", "type": "find"},
    {"id": "N2", "name": "Architecture Design", "type": "create"},
    {"id": "N3", "name": "Implementation", "type": "create"},
    {"id": "N4", "name": "Quality Review", "type": "critique"},
    {"id": "N5", "name": "Final Approval", "type": "critique"}
  ],
  "edges": [
    {"from_id": "N1", "to_id": "N2", "type": "handoff"},
    {"from_id": "N2", "to_id": "N3", "type": "handoff"},
    {"from_id": "N3", "to_id": "N4", "type": "handoff"},
    {"from_id": "N4", "to_id": "N5", "type": "handoff"},
    {"from_id": "N4", "to_id": "N2", "type": "feedback"}
  ]
}

Nodes have a type field that maps to the FCC phase (find, create, critique). Edges have a type field that is either handoff (forward flow) or feedback (backward flow for iterative refinement).
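These node and edge records map naturally onto small value objects. The dataclasses below are a hypothetical sketch of that mapping (the actual definitions live in src/fcc/workflow/graph.py and may differ in fields and naming); the `from_id`/`to_id` names follow the JSON keys above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: str
    name: str
    type: str  # FCC phase: "find", "create", or "critique"

@dataclass(frozen=True)
class Edge:
    from_id: str
    to_id: str
    type: str  # "handoff" (forward flow) or "feedback" (backward flow)

# The feedback edge from the example document: Quality Review -> Architecture Design.
feedback = Edge(from_id="N4", to_id="N2", type="feedback")
```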

Loading and Validation

from fcc.workflow.graph import WorkflowGraph

# Simple load
graph = WorkflowGraph.from_json("src/fcc/data/workflows/base_sequence.json")

# Schema-validated load
graph = WorkflowGraph.from_json_validated(
    "src/fcc/data/workflows/base_sequence.json",
    "src/fcc/data/schemas/workflow_schema.json",
)

The validated loader uses jsonschema.validate() to ensure the JSON conforms to the workflow schema before parsing.
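Conceptually, the validated loader is a parse-then-gate step: load the JSON, reject it if it does not match the schema, and only then build the graph. The stdlib-only sketch below illustrates that gate with a simplified check of the three required top-level sections; the real method delegates the full structural check to jsonschema.validate(), and `load_validated` here is a hypothetical stand-in, not the framework's API:

```python
import json

REQUIRED_SECTIONS = ("meta", "nodes", "edges")

def load_validated(text: str) -> dict:
    """Parse a workflow JSON document and reject it if any required
    top-level section is missing (a simplified stand-in for the
    full jsonschema check)."""
    doc = json.loads(text)
    missing = [key for key in REQUIRED_SECTIONS if key not in doc]
    if missing:
        raise ValueError(f"workflow document missing sections: {missing}")
    return doc

doc = load_validated('{"meta": {"id": "base_sequence"}, "nodes": [], "edges": []}')
```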

Adjacency Queries

Once loaded, the graph supports rich adjacency queries:

# Direct successors and predecessors
successors = graph.successors("N2")   # nodes reachable from N2
predecessors = graph.predecessors("N4")  # nodes with edges into N4

# Edge-level queries
outgoing = graph.outgoing_edges("N3")
incoming = graph.incoming_edges("N4")

# Filter by edge type
handoffs = graph.handoffs()    # all forward-flow edges
feedbacks = graph.feedbacks()  # all backward-flow edges
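These queries stay cheap if the graph pre-builds per-node edge lists at load time, which is also the `_outgoing` index the bfs_from() implementation shown later consults. A sketch of that indexing, assuming edge records carry `from_id`/`to_id` fields:

```python
from collections import defaultdict, namedtuple

Edge = namedtuple("Edge", ["from_id", "to_id", "type"])

def build_indexes(edges):
    """Pre-compute per-node edge lists so successor/predecessor
    queries are single dict lookups rather than full edge scans."""
    outgoing, incoming = defaultdict(list), defaultdict(list)
    for edge in edges:
        outgoing[edge.from_id].append(edge)
        incoming[edge.to_id].append(edge)
    return outgoing, incoming

outgoing, incoming = build_indexes([
    Edge("N1", "N2", "handoff"),
    Edge("N2", "N3", "handoff"),
    Edge("N4", "N2", "feedback"),
])
successors_n1 = [e.to_id for e in outgoing["N1"]]      # direct successors of N1
predecessors_n2 = [e.from_id for e in incoming["N2"]]  # nodes with edges into N2
```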

BFS Traversal Algorithm

The bfs_from() method performs a breadth-first search from any starting node, following all edge types:

order = graph.bfs_from("N1")
# ['N1', 'N2', 'N3', 'N4', 'N5']

The implementation is straightforward:

def bfs_from(self, start_id: str) -> list[str]:
    visited: set[str] = set()
    queue = [start_id]
    order: list[str] = []
    while queue:
        nid = queue.pop(0)
        if nid in visited:
            continue
        visited.add(nid)
        order.append(nid)
        for edge in self._outgoing.get(nid, []):
            if edge.to_id not in visited:
                queue.append(edge.to_id)
    return order

BFS is used by the simulation engine to determine the activation order of personas across the graph. The topological_order() method provides an alternative ordering using Kahn's algorithm, considering only handoff edges (ignoring feedback cycles):

topo = graph.topological_order()
# Deterministic ordering respecting handoff dependencies
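A sketch of Kahn's algorithm restricted to handoff edges, assuming edges arrive as (from_id, to_id, type) tuples (the real method operates on the graph's internal indexes). Ignoring feedback edges is what makes the ordering well-defined: they are the only source of cycles.

```python
from collections import deque

def topological_order(node_ids, edges):
    """Kahn's algorithm over handoff edges only; feedback edges are
    skipped so the cycles they introduce do not block the ordering."""
    handoffs = [(f, t) for f, t, kind in edges if kind == "handoff"]
    indegree = {nid: 0 for nid in node_ids}
    out = {nid: [] for nid in node_ids}
    for f, t in handoffs:
        out[f].append(t)
        indegree[t] += 1
    # Seed with all nodes that have no incoming handoff edges.
    queue = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while queue:
        nid = queue.popleft()
        order.append(nid)
        for succ in out[nid]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    return order

edges = [
    ("N1", "N2", "handoff"), ("N2", "N3", "handoff"),
    ("N3", "N4", "handoff"), ("N4", "N5", "handoff"),
    ("N4", "N2", "feedback"),  # ignored: would otherwise form a cycle
]
order = topological_order(["N1", "N2", "N3", "N4", "N5"], edges)
# order == ['N1', 'N2', 'N3', 'N4', 'N5']
```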

The 6 Action Types

Each persona can perform up to 6 action types, defined by the WorkflowActionType enum:

| Action Type | Purpose                                                 |
|-------------|---------------------------------------------------------|
| SCAFFOLD    | Generate new artifacts from scratch                     |
| REFACTOR    | Improve existing artifacts while preserving correctness |
| DEBUG       | Diagnose and fix issues, identify root causes           |
| TEST        | Validate quality with comprehensive test suites         |
| COMPARE     | Evaluate alternatives across trade-off dimensions       |
| DOCUMENT    | Generate comprehensive documentation                    |

Action types are orthogonal to the workflow graph. A persona activated at a "create" node might perform a SCAFFOLD action for new work or a REFACTOR action for existing work. The specific action is chosen at execution time.
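The enum itself is small. The sketch below is consistent with the table above and with the lowercase `action_type: test` key used in the YAML shown later; it is an assumption about the shape of the real definition in src/fcc/workflow/actions.py, not a copy of it:

```python
from enum import Enum

class WorkflowActionType(Enum):
    """The six action types a persona can perform; values match
    the lowercase action_type keys used in the YAML definitions."""
    SCAFFOLD = "scaffold"
    REFACTOR = "refactor"
    DEBUG = "debug"
    TEST = "test"
    COMPARE = "compare"
    DOCUMENT = "document"

# A YAML key like "test" round-trips to the enum member:
action = WorkflowActionType("test")
```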

312 Action Definitions

Across the 102 core personas, there are 312 action definitions stored in YAML files under src/fcc/data/personas/actions/. Each definition specifies the persona ID, action type, description, execution steps, inputs, outputs, constraints, and examples.

actions:
  - persona_id: SQC
    action_type: test
    description: >-
      Generate comprehensive quality test suites for the
      artifacts under review.
    execution_steps:
      - "Identify all testable quality attributes"
      - "Generate test cases per quality gate"
      - "Define expected outcomes and severity thresholds"
    inputs:
      - "Artifact under review"
      - "Quality gate definitions"
    outputs:
      - "Test suite with per-gate assertions"
      - "Coverage report mapping tests to gates"
    constraints:
      - "Every hard-stop gate must have at least one test"
    examples:
      - "test_security_gates.py"

The WorkflowActionRegistry loads these definitions and indexes them by (persona_id, action_type):

from fcc.workflow.actions import WorkflowActionRegistry, WorkflowActionType

registry = WorkflowActionRegistry.from_yaml_directory(
    "src/fcc/data/personas/actions/"
)

# Query
action = registry.get("SQC", WorkflowActionType.TEST)
print(action.description)
print(action.execution_steps)

# List all actions for a persona
all_sqc = registry.for_persona("SQC")

# List all scaffold actions across all personas
scaffolds = registry.for_type(WorkflowActionType.SCAFFOLD)
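Internally, a registry like this can be little more than a dict keyed by (persona_id, action_type). The `ActionIndex` class below is a hypothetical sketch of that indexing scheme, with a trimmed `WorkflowAction` record; the real registry carries the full YAML fields and richer query methods:

```python
from dataclasses import dataclass

@dataclass
class WorkflowAction:
    # Hypothetical trimmed shape; the real class carries execution
    # steps, inputs, outputs, constraints, and examples too.
    persona_id: str
    action_type: str
    description: str = ""

class ActionIndex:
    """Minimal registry: one dict lookup per (persona_id, action_type)."""
    def __init__(self, actions):
        self._by_key = {(a.persona_id, a.action_type): a for a in actions}

    def get(self, persona_id, action_type):
        return self._by_key[(persona_id, action_type)]

    def for_persona(self, persona_id):
        return [a for (pid, _), a in self._by_key.items() if pid == persona_id]

index = ActionIndex([
    WorkflowAction("SQC", "test", "Generate quality test suites"),
    WorkflowAction("SQC", "scaffold", "Scaffold quality gate configs"),
])
```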

ActionEngine.run() Lifecycle

The ActionEngine in src/fcc/workflow/action_engine.py ties everything together. Here is the full lifecycle of a single run() call:

Step 1: Event Emission

The engine publishes an action.started event to the event bus (if connected).

Step 2: Resolution

The engine resolves the PersonaSpec from the PersonaRegistry and the WorkflowAction from the WorkflowActionRegistry using the provided persona_id and action_type.

Step 3: Prompt Generation

get_action_prompt() builds a two-part prompt from the R.I.S.C.E.A.R. specification and action definition:

  • System prompt: persona identity (name, role_title, role, archetype, style), action preamble, constraints (both persona-level and action-level), expected outputs, and constitution rules from doc_context.
  • User prompt: execution instruction, numbered execution steps, required inputs, and expected outputs.

from fcc.workflow.action_engine import get_action_prompt

prompts = get_action_prompt(persona, action)
print(prompts["system"])  # persona identity + constraints
print(prompts["user"])    # execution steps + I/O

Step 4: Execution

If no AI client is configured, the engine returns a deterministic mock result containing the action metadata. This enables testing and prototyping without API keys.

If an AI client is present, the engine sends the system and user prompts to the client and wraps the response in an ActionResult.
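A sketch of that mock/AI dispatch, assuming a client exposing a `complete(system, user) -> str` method (a placeholder interface, not the framework's actual client protocol). The point is that both branches return the same shape:

```python
def run_action(persona_id, action_type, prompts, client=None):
    """Return an ActionResult-shaped dict: deterministic mock when no
    client is configured, AI-backed otherwise."""
    if client is None:
        # Mock branch: echoes the action metadata, no API call needed.
        return {
            "persona_id": persona_id,
            "action_type": action_type,
            "content": f"[mock] {persona_id}:{action_type}",
            "success": True,
            "metadata": {"mode": "mock"},
        }
    content = client.complete(prompts["system"], prompts["user"])
    return {
        "persona_id": persona_id,
        "action_type": action_type,
        "content": content,
        "success": True,
        "metadata": {"mode": "ai"},
    }

result = run_action("SQC", "test", prompts={"system": "", "user": ""})
```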

Step 5: Result

from fcc.workflow.action_engine import ActionEngine, ActionResult

engine = ActionEngine(persona_registry, action_registry)
result = engine.run("SQC", WorkflowActionType.TEST)

print(result.persona_id)    # "SQC"
print(result.action_type)   # WorkflowActionType.TEST
print(result.content)       # generated content or mock
print(result.success)       # True/False
print(result.metadata)      # {"mode": "mock"} or {"mode": "ai", ...}

Step 6: Completion Event

The engine publishes an action.completed or action.failed event.
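The two event names bracket every run, so downstream consumers can observe activations without touching the engine. The pub/sub sketch below is a minimal stand-in for the event bus (the framework's own bus API is not shown in this chapter, so `subscribe`/`publish` here are assumptions):

```python
class EventBus:
    """Minimal topic -> callbacks pub/sub, for illustration only."""
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, fn):
        self._subs.setdefault(topic, []).append(fn)

    def publish(self, topic, payload):
        for fn in self._subs.get(topic, []):
            fn(payload)

bus = EventBus()
seen = []
bus.subscribe("action.completed", seen.append)  # only watch completions
bus.publish("action.started", {"persona_id": "SQC"})
bus.publish("action.completed", {"persona_id": "SQC", "success": True})
```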

Prompt Generation Details

The six action types each have a dedicated preamble injected into the system prompt:

_ACTION_PREAMBLES = {
    SCAFFOLD: "Generate new artifacts from scratch following best practices...",
    REFACTOR: "Improve and modernize existing artifacts while preserving correctness...",
    DEBUG:    "Diagnose and fix issues, identifying root causes...",
    TEST:     "Validate quality against constraints by generating comprehensive test suites...",
    COMPARE:  "Evaluate alternatives by analyzing trade-offs...",
    DOCUMENT: "Generate comprehensive documentation including purpose, usage...",
}

The preamble is concatenated with the persona's R.I.S.C.E.A.R. fields to produce a system prompt that is both persona-specific and action-specific.
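A sketch of that concatenation, using two preambles trimmed from the listing above and placeholder persona field values (the real get_action_prompt() assembles several more sections, including constraints and constitution rules):

```python
_ACTION_PREAMBLES = {
    "test": "Validate quality against constraints by generating comprehensive test suites.",
    "scaffold": "Generate new artifacts from scratch following best practices.",
}

def build_system_prompt(persona: dict, action_type: str) -> str:
    """Join persona identity fields with the action preamble; the
    persona keys mirror a subset of the R.I.S.C.E.A.R. spec."""
    identity = (
        f"You are {persona['name']}, {persona['role_title']}. "
        f"Role: {persona['role']}."
    )
    return f"{identity}\n\n{_ACTION_PREAMBLES[action_type]}"

# Placeholder persona fields, purely illustrative:
prompt = build_system_prompt(
    {"name": "SQC", "role_title": "a quality reviewer", "role": "guard quality gates"},
    "test",
)
```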

ActionResult Structure

ActionResult is a mutable dataclass (not frozen, since the engine builds it incrementally):

from dataclasses import dataclass, field
from typing import Any

from fcc.workflow.actions import WorkflowActionType

@dataclass
class ActionResult:
    persona_id: str
    action_type: WorkflowActionType
    content: str
    success: bool = True
    error: str | None = None
    metadata: dict[str, Any] = field(default_factory=dict)

The metadata dict carries execution context -- mode ("mock" or "ai"), model, provider, and latency_ms for AI-backed runs.

Key Takeaways

  • Workflow graphs define activation order via nodes (persona activation points) and edges (handoff/feedback).
  • Four built-in graph sizes range from 5 nodes (prototyping) to 55 nodes (enterprise-scale).
  • BFS and topological traversal determine persona activation order.
  • Six action types (scaffold, refactor, debug, test, compare, document) define what personas do.
  • 312 action definitions are loaded from YAML and indexed by (persona_id, action_type).
  • ActionEngine.run() resolves persona + action, generates prompts from R.I.S.C.E.A.R., executes (mock or AI), and emits events.
  • Prompt generation combines persona identity, action preamble, constraints, and constitution rules.

Previous: Chapter 4 -- Persona Dimensions | Next: Chapter 6 -- Plugin Architecture

Try this in Notebook 05