Feedback Loops¶
This tutorial explains how feedback edges work in FCC workflows, including Critique-to-Create and Create-to-Find feedback, loop termination conditions, and quality gate thresholds.
What Are Feedback Edges?¶
Feedback edges are directed connections that flow "backward" in the FCC cycle -- from Critique or Create phase personas back to Find or Create phase personas. They are the mechanism by which FCC achieves iterative quality improvement.
In the workflow graph JSON, feedback edges have "type": "feedback" to distinguish them from forward-flow "handoff" edges.
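A feedback edge in the workflow JSON might look like the sketch below. Only the `"type": "feedback"` discriminator is confirmed by the format description above; the other key names are illustrative and may differ from the actual schema:

```json
{
  "edges": [
    { "from": "BC", "to": "DE", "label": "blueprints_specs", "type": "handoff" },
    { "from": "DE", "to": "BC", "label": "standards_edits", "type": "feedback" }
  ]
}
```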
```python
from fcc.workflow.graph import WorkflowGraph

graph = WorkflowGraph.from_json("data/workflows/base_sequence.json")

# List all feedback edges
feedbacks = graph.feedbacks()
print(f"Feedback edges: {len(feedbacks)}")
for f in feedbacks:
    print(f"  {f.from_id} -> {f.to_id}: {f.label}")
```
Output:

```text
Feedback edges: 5
  DE -> BC: standards_edits
  RB -> BC: operational_feedback
  RB -> RC: operational_findings
  UG -> BC: usability_feedback
  UG -> RC: user_feedback
```
Critique-to-Create Feedback¶
When Critique-phase personas identify issues, they send feedback to Create-phase personas for correction.
DE to BC: Standards Edits¶
The Documentation Evangelist reviews blueprints from BC and sends back style guide violations, formatting corrections, and standards gaps.
Trigger: BC submits blueprints that do not meet documentation standards.
Flow:
```mermaid
graph LR
    BC[BC: Blueprint Crafter] -->|blueprints_specs| DE[DE: Documentation Evangelist]
    DE -.->|standards_edits| BC
```
Example: DE finds that an API specification is missing error response schemas. It sends a standards_edits message back to BC with specific remediation instructions.
AMS to BC: Quality Feedback (Extended Workflow)¶
In the extended and complete workflows, the Anti-fact Mitigation Specialist sends quality feedback to BC when content fails confidence thresholds:
```mermaid
graph LR
    BC[BC: Blueprint Crafter] --> BV[BV: Blueprint Validator]
    AMS[AMS: Anti-fact Mitigation Specialist] -.->|quality_feedback| BC
```
Create-to-Find Feedback¶
When Create-phase personas discover knowledge gaps during content creation, they send feedback to Find-phase personas.
RB to RC: Operational Findings¶
The Runbook Crafter discovers operational gaps while writing procedures and feeds findings back to the Research Crafter.
Trigger: RB is writing a deployment runbook and realizes the research inventory lacks information about rollback procedures.
Flow:
```mermaid
graph LR
    RB[RB: Runbook Crafter] -.->|operational_findings| RC[RC: Research Crafter]
    RB -.->|operational_feedback| BC[BC: Blueprint Crafter]
```
UG to RC: User Feedback¶
The User Guide Crafter identifies user pain points while writing guides and feeds them back to the Research Crafter.
Trigger: UG is writing an onboarding guide and discovers that common error messages are not documented in the research inventory.
How Feedback Works in Simulation¶
During simulation, feedback edges create new messages that re-enter the workflow graph at an earlier node. The simulation engine processes these messages using BFS, which means feedback messages are queued alongside forward-flow messages.
```python
from fcc.simulation.engine import SimulationEngine
from fcc.simulation.traces import load_traces, summarize_traces
from fcc.workflow.graph import WorkflowGraph

graph = WorkflowGraph.from_json("data/workflows/base_sequence.json")
engine = SimulationEngine(graph=graph, max_steps=50, max_history=16)
history = engine.run(start_node="RC", initial_payload="Document the auth API")

# Count events per actor to see feedback activity
summary = summarize_traces(history.to_traces_dict())
print("Actor counts:")
for actor, count in summary["actor_counts"].items():
    print(f"  {actor}: {count} events")
```
If feedback is working correctly, you should see RC and BC receiving multiple events -- their initial activation plus feedback from downstream personas.
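The queueing behaviour described above can be pictured with a toy graph: two forward handoffs plus the DE-to-BC feedback edge. This is a minimal sketch, not the engine's real data structures, but the history-length check has the same shape as the engine code shown later:

```python
from collections import deque

# Toy graph mirroring part of the base workflow (illustrative only).
edges = {
    "RC": [("BC", "handoff")],
    "BC": [("DE", "handoff")],
    "DE": [("BC", "feedback")],  # feedback re-enters the graph at an earlier node
}

def simulate(start, max_history=4):
    """BFS over the toy graph with a per-message history-length cap."""
    visits = []
    queue = deque([(start, 1)])  # (node, history length of this message)
    while queue:
        node, hist_len = queue.popleft()
        visits.append(node)
        for target, _kind in edges.get(node, []):
            if hist_len + 1 <= max_history:  # same shape as the engine's check
                queue.append((target, hist_len + 1))
    return visits

print(simulate("RC"))  # BC appears twice: once forward, once via feedback
```

With the default cap, BC is visited a second time via the feedback edge; with a cap of 2, the feedback message never re-enters the graph.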
Loop Termination Conditions¶
Without termination conditions, feedback loops would run indefinitely. FCC uses three mechanisms:
1. Maximum Steps (max_steps)¶
The simulation engine enforces a hard limit on total processing steps: when the step count reaches max_steps, the simulation stops regardless of remaining queued messages.
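A minimal sketch of that cap (illustrative, not the engine source):

```python
def run_with_cap(queue, max_steps):
    """Drain the message queue until it is empty or the global step cap is hit."""
    steps = 0
    processed = []
    while queue and steps < max_steps:
        processed.append(queue.pop(0))
        steps += 1
    return processed, queue  # anything left over was cut off by max_steps

done, remaining = run_with_cap(["m1", "m2", "m3", "m4"], max_steps=2)
print(done, remaining)
```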
2. Maximum History (max_history)¶
Each message carries its processing history. When a message has been annotated by more than max_history personas, it stops propagating. This prevents individual message chains from cycling endlessly. In the simulation engine code:
```python
# From engine.py
if new_msg.history_len <= self.max_history:
    queue.append((edge.to_id, new_msg, edge.label))
```
3. Quality Gate Convergence¶
In practice, quality gates provide a semantic termination condition. Once all quality gates pass:
- No `standards_edits` feedback is generated (DE approves)
- No `quality_feedback` is generated (AMS validates)
- No `operational_findings` are generated (RB finds no gaps)
This natural convergence means well-configured workflows terminate before hitting max_steps.
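Convergence can be pictured as a loop that stops as soon as no critic emits feedback. The critic below and its two-fix threshold are purely illustrative, not part of the FCC API:

```python
def converge(draft, critics, max_rounds=10):
    """Run feedback rounds until no critic emits feedback (all gates pass)."""
    for _ in range(max_rounds):
        feedback = [note for critic in critics for note in critic(draft)]
        if not feedback:  # every gate passes -> natural convergence
            return draft["fixes"]
        draft["fixes"] += len(feedback)  # creators apply the requested fixes
    return draft["fixes"]

# Illustrative critic: keeps emitting standards_edits until two fixes land.
def de(draft):
    return [] if draft["fixes"] >= 2 else ["standards_edits"]

print(converge({"fixes": 0}, [de]))
```

The loop ends on the first round where every critic returns an empty list, well before `max_rounds`, which is the semantic analogue of terminating before max_steps.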
Tuning Feedback Behavior¶
Allowing More Iteration¶
If your artifacts need more refinement cycles, increase both limits:
```python
engine = SimulationEngine(
    graph=graph,
    max_steps=200,   # Allow more total events
    max_history=32,  # Allow longer message chains
)
```
Restricting Feedback¶
If you want to limit feedback (e.g., for a quick draft), reduce the limits:
```python
engine = SimulationEngine(
    graph=graph,
    max_steps=20,   # Stop quickly
    max_history=4,  # Very short message chains
)
```
Analyzing Feedback Depth¶
To understand how deep feedback loops go in your simulation:
```python
from fcc.simulation.traces import load_traces

data = load_traces("output/trace.json")
events = data["events"]

# Find feedback events
feedback_labels = {"standards_edits", "operational_feedback",
                   "operational_findings", "usability_feedback",
                   "user_feedback", "quality_feedback", "trace_updates"}
feedback_events = [e for e in events if e["edge_label"] in feedback_labels]

print(f"Total events: {len(events)}")
print(f"Feedback events: {len(feedback_events)}")
print(f"Feedback ratio: {len(feedback_events)/len(events)*100:.1f}%")

# Maximum depth reached
max_depth = max(e["history_len"] for e in events)
print(f"Maximum history depth: {max_depth}")
```
Feedback in the Extended Workflow¶
The extended workflow has 7 feedback edges:
| From | To | Label | Type |
|---|---|---|---|
| DE | BC | standards_edits | Critique-to-Create |
| RB | BC | operational_feedback | Create-to-Create |
| RB | RC | operational_findings | Create-to-Find |
| UG | BC | usability_feedback | Create-to-Create |
| UG | RC | user_feedback | Create-to-Find |
| AMS | BC | quality_feedback | Critique-to-Create |
| TS | BC | trace_updates | Cross-phase |
Note that TS (Traceability Specialist) also sends feedback to BC via trace_updates, which functions as cross-phase feedback since TS operates across all phases.
Topological Order and Feedback¶
The topological_order() method explicitly excludes feedback edges from its computation. This is because feedback edges create cycles, and topological sort requires a DAG (Directed Acyclic Graph):
```python
# Topological order ignores feedback edges
topo = graph.topological_order()
print(f"Topological order: {topo}")
# Only considers handoff edges for ordering
```
BFS traversal (bfs_from()) follows all edges including feedback, which is why it can visit nodes multiple times via different paths (though it only records each node once in its output).
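The exclusion can be reproduced in a few lines of plain Python. This is Kahn's algorithm over a toy edge list, not the actual `WorkflowGraph` implementation; the point is that dropping feedback edges leaves a DAG that can be ordered:

```python
from collections import deque

nodes = ["RC", "BC", "DE"]
edges = [
    ("RC", "BC", "handoff"),
    ("BC", "DE", "handoff"),
    ("DE", "BC", "feedback"),  # would create a cycle if included
]

def topological_order(nodes, edges):
    """Kahn's algorithm over handoff edges only; feedback edges are skipped."""
    handoffs = [(a, b) for a, b, kind in edges if kind == "handoff"]
    indegree = {n: 0 for n in nodes}
    for _, b in handoffs:
        indegree[b] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for a, b in handoffs:
            if a == n:
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
    return order

print(topological_order(nodes, edges))
```

Including the DE-to-BC feedback edge would give every node a nonzero in-degree at some point and the sort could not complete, which is exactly why feedback edges must be excluded.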
Next Steps¶
- Custom Quality Gates -- Configure quality gates that drive feedback
- Champion Orchestration -- How champions interact with feedback loops
- Reading Results -- Analyze feedback patterns in trace output