Chapter 3: Workflow Design
Learning Objectives
By the end of this chapter you will be able to:
- Construct a custom workflow graph in JSON.
- Use the ActionEngine to execute workflow nodes with specific action types.
- Design feedback edges that create iterative refinement loops.
- Validate workflow graphs against the JSON schema and test them programmatically.
- Choose the right graph size for your project's needs.
The figure below shows the authoring loop for a custom workflow graph: define nodes and edges in JSON, validate against the schema, and register the resulting actions with the WorkflowActionRegistry.
flowchart LR
subgraph CWF["Custom Workflow"]
F[find_requirements<br/>Domain Expert]:::find
C[create_design<br/>Software Architect]:::create
CR[critique_design<br/>Quality Lead]:::critique
F --> C --> CR
CR -.->|feedback| F
end
SCHEMA[workflow_schema.json] -->|validate| CWF
CWF -->|register actions| REG[WorkflowActionRegistry]
classDef find fill:#4CAF50,color:#fff
classDef create fill:#2196F3,color:#fff
classDef critique fill:#FF9800,color:#fff
Always run the workflow schema validator in CI. Catching a malformed graph at build time is cheap; catching it inside a running simulation is expensive and obscures the failure site.
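As a sketch of what such a build-time check might look like: the function below performs a minimal structural check only (the real validator presumably validates against workflow_schema.json itself); the required key names are taken from the node and edge field reference later in this chapter.

```python
REQUIRED_NODE_KEYS = {"id", "persona_id", "phase", "action_type"}
REQUIRED_EDGE_KEYS = {"from", "to", "type"}

def check_graph(data: dict) -> list:
    """Return a list of structural errors; an empty list means the graph passed."""
    errors = []
    for key in ("id", "nodes", "edges"):
        if key not in data:
            errors.append(f"missing top-level key: {key}")
    for node in data.get("nodes", []):
        missing = REQUIRED_NODE_KEYS - node.keys()
        if missing:
            errors.append(f"node {node.get('id', '?')} missing {sorted(missing)}")
    for edge in data.get("edges", []):
        missing = REQUIRED_EDGE_KEYS - edge.keys()
        if missing:
            errors.append(f"edge missing {sorted(missing)}")
    return errors
```

Wired into CI as a small script that exits nonzero when the list is non-empty, this catches a malformed graph before any simulation runs.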
Workflow Graph Anatomy
A workflow graph is a JSON document that defines nodes and edges. Each node activates a persona with a specific action type. Each edge defines a handoff between nodes. Here is a minimal 3-node graph:
{
"id": "my_custom_workflow",
"name": "Custom FCC Workflow",
"version": "1.0.0",
"nodes": [
{
"id": "find_requirements",
"persona_id": "domain_expert",
"phase": "find",
"action_type": "scaffold",
"description": "Gather and structure domain requirements"
},
{
"id": "create_design",
"persona_id": "software_architect",
"phase": "create",
"action_type": "scaffold",
"description": "Produce the system design document"
},
{
"id": "critique_design",
"persona_id": "code_reviewer",
"phase": "critique",
"action_type": "compare",
"description": "Review the design against requirements"
}
],
"edges": [
{
"from": "find_requirements",
"to": "create_design",
"type": "forward"
},
{
"from": "create_design",
"to": "critique_design",
"type": "forward"
},
{
"from": "critique_design",
"to": "find_requirements",
"type": "feedback"
}
]
}
Node Fields
- id: Unique within the graph. Used for edge references and trace identification.
- persona_id: References a persona in the registry. The persona must exist when the graph is loaded.
- phase: One of find, create, or critique. Must be consistent with the persona's primary phase (a warning is issued if they differ, but it is not a hard error -- some personas legitimately operate in multiple phases).
- action_type: One of the six action types (scaffold, refactor, debug, test, compare, document). The selected persona must have this action type defined in its action registry.
- description: Human-readable description for dashboards and documentation.
Edge Fields
- from/to: Node IDs defining the source and target of the handoff.
- type: One of forward, feedback, parallel, or conditional.
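The field constraints above can be captured in code. The dataclasses below are an illustrative sketch, not the library's actual types; note that the JSON key "from" has to be renamed on parse because it is a Python keyword.

```python
from dataclasses import dataclass

# Enumerations taken from the field reference above.
PHASES = {"find", "create", "critique"}
ACTION_TYPES = {"scaffold", "refactor", "debug", "test", "compare", "document"}
EDGE_TYPES = {"forward", "feedback", "parallel", "conditional"}

@dataclass(frozen=True)
class Node:
    id: str
    persona_id: str
    phase: str
    action_type: str
    description: str = ""

    def __post_init__(self):
        if self.phase not in PHASES:
            raise ValueError(f"unknown phase: {self.phase}")
        if self.action_type not in ACTION_TYPES:
            raise ValueError(f"unknown action_type: {self.action_type}")

@dataclass(frozen=True)
class Edge:
    source: str            # JSON key "from" ("from" is a Python keyword)
    target: str            # JSON key "to"
    type: str = "forward"

    def __post_init__(self):
        if self.type not in EDGE_TYPES:
            raise ValueError(f"unknown edge type: {self.type}")

def parse_edge(raw: dict) -> Edge:
    """Build an Edge from the raw JSON object, renaming the keyword fields."""
    return Edge(source=raw["from"], target=raw["to"], type=raw.get("type", "forward"))
```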
The ActionEngine
The ActionEngine (src/fcc/workflow/action_engine.py) is the runtime that executes workflow nodes. When a node is activated, the ActionEngine:
- Loads the persona from the registry.
- Retrieves the action definition from the WorkflowActionRegistry.
- Generates a prompt by combining the persona's R.I.S.C.E.A.R. specification, the action type's template, and the input from upstream nodes.
- Executes the prompt using the configured engine (mock or AI).
- Produces an ActionResult containing the output, metadata, and any quality gate evaluations.
from fcc.workflow.action_engine import ActionEngine
from fcc.workflow.actions import WorkflowActionRegistry
action_registry = WorkflowActionRegistry()
action_registry.load_all()
engine = ActionEngine(action_registry=action_registry, mode="mock")
result = engine.run(
persona_id="software_architect",
action_type="scaffold",
input_context={"requirements": "...the domain expert's output..."},
)
print(result.output)
print(result.metadata)
Prompt Generation
The prompt generated for each node follows a structured template:
You are the {persona.name}.
Role: {persona.spec.role}
Style: {persona.spec.style}
Constraints: {persona.spec.constraints}
Your task is to {action_type_description}.
Input:
{input_context}
Expected output format:
{persona.spec.expected_output}
This template ensures that every action execution is grounded in the persona's R.I.S.C.E.A.R. specification. The persona does not decide how to behave -- the specification tells it.
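As a sketch of how that template might be filled in (the ActionEngine presumably does this internally; the persona is represented here as a plain dict for illustration):

```python
# The template text mirrors the structure shown above.
PROMPT_TEMPLATE = """You are the {name}.
Role: {role}
Style: {style}
Constraints: {constraints}

Your task is to {action_description}.

Input:
{input_context}

Expected output format:
{expected_output}"""

def build_prompt(persona: dict, action_description: str, input_context: str) -> str:
    """Fill the template from the persona's R.I.S.C.E.A.R. fields."""
    return PROMPT_TEMPLATE.format(
        name=persona["name"],
        role=persona["role"],
        style=persona["style"],
        constraints=persona["constraints"],
        action_description=action_description,
        input_context=input_context,
        expected_output=persona["expected_output"],
    )
```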
Designing Feedback Loops
Feedback edges are the most powerful feature of FCC workflow graphs. They enable iterative refinement without manual intervention. But they require careful design to avoid three pitfalls:
Pitfall 1: Infinite Loops
If a Critique node always fails and the feedback edge always triggers, the workflow loops forever. The max_iterations setting in fcc.yaml caps the number of times a feedback edge can fire (default: 3). After the cap, the workflow escalates to a human reviewer.
Pitfall 2: Overly Broad Feedback
If a feedback edge goes from a Critique node back to the very first Find node, the entire workflow re-executes on every feedback cycle. This is wasteful. Design your feedback edges to target the most specific node that can address the critique. If the issue is missing data, route back to the Find node. If the issue is poor formatting, route back to the Create node.
Pitfall 3: Missing Escalation
Always pair feedback edges with an escalation path. If the feedback loop exhausts its iterations, the workflow must have a way to proceed -- either by escalating to a human, accepting the best-effort output, or routing to an alternative path.
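The cap-and-escalate pattern from Pitfalls 1 and 3 can be sketched as a small driver loop. This is an illustration of the control flow, not the engine's actual implementation; the three callables are stand-ins for workflow nodes.

```python
def run_with_feedback(create_step, critique_step, escalate, max_iterations=3):
    """Fire the feedback edge until the critique passes or the cap is hit.

    create_step(feedback) -> output
    critique_step(output) -> (passed, feedback)
    escalate(output, feedback) -> fallback result (human review, best effort, ...)
    """
    feedback, output = None, None
    for _ in range(max_iterations):
        output = create_step(feedback)
        passed, feedback = critique_step(output)
        if passed:
            return output
    # Iterations exhausted: take the escalation path instead of looping forever.
    return escalate(output, feedback)
```

The default of 3 matches the max_iterations default described above; without the final escalate call, a critique that never passes would silently discard the best-effort output.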
Parallel Branches
For workflows where multiple Create tasks are independent, use parallel edges:
{
"edges": [
{"from": "find_data", "to": "create_analysis", "type": "parallel"},
{"from": "find_data", "to": "create_documentation", "type": "parallel"},
{"from": "create_analysis", "to": "critique_all", "type": "forward"},
{"from": "create_documentation", "to": "critique_all", "type": "forward"}
]
}
The workflow engine executes create_analysis and create_documentation concurrently (when the engine supports parallelism) and waits for both to complete before activating critique_all.
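A minimal sketch of that execute-then-wait behavior, using a thread pool (the engine's real concurrency model may differ):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_branches(branches: dict, input_data):
    """Run independent Create branches concurrently; return {branch_id: output}.

    `branches` maps node IDs to callables. Calling .result() on every
    future means the convergence node only activates once all branches
    have completed.
    """
    with ThreadPoolExecutor(max_workers=len(branches)) as pool:
        futures = {name: pool.submit(fn, input_data) for name, fn in branches.items()}
        return {name: f.result() for name, f in futures.items()}
```

For the graph above, the branch map would hold the create_analysis and create_documentation callables, and the returned dict would become the input to critique_all.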
Convergence Nodes
When parallel branches converge at a single Critique node, that node receives the outputs of all upstream parallel branches as its input. The critique persona must be designed to handle multiple inputs -- its R.I.S.C.E.A.R. specification should state that it expects "outputs from multiple Create-phase personas" as its input.
Conditional Edges
Conditional edges activate only when a specified condition is met:
{
"from": "critique_design",
"to": "governance_review",
"type": "conditional",
"condition": {
"field": "critique_result.severity",
"operator": ">=",
"value": "high"
}
}
This edge only fires if the critique result has a severity of "high" or above. Conditional edges are used to route edge cases to specialized handling (governance review, human escalation, security audit) without burdening the normal flow.
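Evaluating such a condition takes two steps: resolve the dotted field path against the node's result, then apply the operator. The sketch below assumes an ordinal severity scale (info < low < medium < high < critical); the real scale and operator set live in the schema.

```python
# Assumed ordinal scale for severities, lowest to highest.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

def resolve_field(context: dict, dotted_path: str):
    """Walk a dotted path such as 'critique_result.severity' through nested dicts."""
    value = context
    for part in dotted_path.split("."):
        value = value[part]
    return value

def edge_fires(condition: dict, context: dict) -> bool:
    """Return True when the conditional edge should activate."""
    actual = resolve_field(context, condition["field"])
    if condition["operator"] == ">=":
        return SEVERITY_ORDER.index(actual) >= SEVERITY_ORDER.index(condition["value"])
    raise NotImplementedError(f"operator {condition['operator']} not sketched here")
```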
Validation and Testing
Validate your workflow graph against the schema before running it. The validator checks:
- All node persona_id values reference existing personas.
- All node action_type values are valid for the referenced persona.
- All edge from and to values reference existing nodes.
- The graph has no unreachable nodes (nodes with no incoming edges, other than the start node).
- Feedback edges do not create cycles that bypass the iteration limit.
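Two of those checks -- edge endpoints and reachability -- can be sketched in a few lines. This is illustrative, not the validator's actual code; it assumes the first listed node is the start node.

```python
def validate_references(data: dict) -> list:
    """Check edge endpoints and node reachability; return a list of errors."""
    errors = []
    node_ids = {n["id"] for n in data["nodes"]}
    for edge in data["edges"]:
        for key in ("from", "to"):
            if edge[key] not in node_ids:
                errors.append(f"edge references unknown node: {edge[key]}")
    # Every node except the start should be reachable via an incoming edge.
    targets = {e["to"] for e in data["edges"] if e["to"] in node_ids}
    start = data["nodes"][0]["id"]  # assumption: the first listed node is the start
    for node_id in sorted(node_ids - targets - {start}):
        errors.append(f"unreachable node: {node_id}")
    return errors
```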
Programmatic Testing
"""Tests for custom workflow graph."""
import json
from pathlib import Path
def test_workflow_loads():
path = Path("workflows/my_custom_workflow.json")
data = json.loads(path.read_text())
assert "nodes" in data
assert "edges" in data
assert len(data["nodes"]) == 3
def test_all_personas_exist(registry):
path = Path("workflows/my_custom_workflow.json")
data = json.loads(path.read_text())
for node in data["nodes"]:
assert registry.get(node["persona_id"]) is not None
def test_feedback_edge_exists():
path = Path("workflows/my_custom_workflow.json")
data = json.loads(path.read_text())
feedback_edges = [e for e in data["edges"] if e["type"] == "feedback"]
assert len(feedback_edges) >= 1
Choosing Graph Size
| Scenario | Recommended Graph | Reason |
|---|---|---|
| Quick prototype | Base (5 nodes) | Fast iteration, minimal overhead |
| Standard project | Extended (20 nodes) | Full FCC cycle with parallel branches |
| Regulated industry | Complete (24 nodes) | Dedicated governance nodes |
| Enterprise team | Extended-84 (55 nodes) | Full persona catalog coverage |
| Custom domain | Custom graph | Tailored to your specific personas and workflow |
Start small and grow. It is easier to add nodes to a working graph than to debug a large graph that has never run.
Key Takeaways
- Workflow graphs are JSON documents with nodes (persona activations) and edges (handoffs).
- The ActionEngine executes nodes by combining persona specs with action type templates.
- Feedback edges enable iterative refinement; cap iterations and provide escalation paths.
- Parallel edges support concurrent execution; convergence nodes receive multiple inputs.
- Conditional edges route edge cases to specialized handling.
- Validate graphs against the schema and test them programmatically.
Cross-References
- Chapter 4: Simulation and Traces -- execute your workflow graph
- FCC Guidebook, Chapter 5 -- full workflow reference
- Notebook 04: Action Engine -- interactive action execution
- Book 1, Chapter 3: Workflow Thinking -- conceptual foundation