Chapter 5: The Collaboration Model¶
Learning Objectives¶
By the end of this chapter you will be able to:
- Describe the FCC collaboration model and its role in human-AI interaction.
- Explain sessions, turns, approval gates, and scoring in concrete terms.
- Distinguish between fully automated workflows and human-in-the-loop workflows.
- Describe how the collaboration engine integrates with the event bus and governance system.
- Articulate why collaboration sessions are recorded and how recordings enable replay.
The state diagram below shows the full lifecycle of a collaboration session, including the two terminal states that matter for audit: Completed for successful runs and Escalated for human-handoff cases.
```mermaid
stateDiagram-v2
    [*] --> Created: create_session()
    Created --> Active: start_session()
    Active --> Active: add_turn(agent)
    Active --> Paused: approval gate reached
    Paused --> Active: human approves
    Paused --> Active: human provides feedback
    Active --> Completed: all nodes traversed
    Active --> Escalated: hard-stop violation
    Paused --> Escalated: timeout exceeded
    Completed --> [*]
    Escalated --> [*]
```
The Paused state is the main human-AI handoff point: humans provide either an approval or concrete feedback, both of which are captured on the session timeline before execution resumes.
The Human-AI Boundary¶
FCC is not a fully autonomous system. It is designed to work with humans, not replace them. The collaboration model defines where the human-AI boundary lies and what happens when information crosses that boundary.
In practice, the boundary appears at three points:
- Session initiation. A human defines the scenario, selects the workflow graph, and configures the simulation parameters. The human decides what to do; the agents decide how to do it.
- Approval gates. At designated points in the workflow, the system pauses and presents its current output to a human reviewer. The reviewer can approve, reject, or modify the output before the workflow continues.
- Escalation. When a hard-stop governance rule is violated or a quality gate fails after the maximum number of iterations, the system escalates to a human for resolution.
These three points ensure that humans retain meaningful control over the workflow's direction and outputs without needing to micromanage every step.
Sessions¶
A collaboration session is the top-level container for a human-AI interaction. It tracks:
- Session ID: A unique identifier.
- Scenario: The problem statement and configuration.
- Workflow graph: Which graph is being traversed.
- Participants: The personas activated in this session, plus any human reviewers.
- Turns: A chronological record of every persona activation, human input, and system event.
- Status: Created, Active, Paused, Completed, or Escalated.
The session model is defined in src/fcc/collaboration/models.py as a set of 11 frozen dataclasses. Frozen dataclasses ensure immutability -- once a turn is recorded, it cannot be altered, which is essential for audit integrity.
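The pattern looks roughly like this; the field names below are illustrative, not the actual definitions in models.py:

```python
from dataclasses import dataclass

# Illustrative sketch of the frozen-dataclass pattern; the real models in
# src/fcc/collaboration/models.py define 11 such classes with richer fields.
@dataclass(frozen=True)
class Session:
    session_id: str
    scenario: str
    status: str = "created"
    turns: tuple = ()  # a tuple, not a list, so recorded turns cannot be altered

    def with_turn(self, turn) -> "Session":
        # frozen=True forbids mutation, so "adding" a turn returns a new
        # Session value; every earlier snapshot survives for the audit trail.
        return Session(self.session_id, self.scenario, self.status,
                       self.turns + (turn,))
```

Because instances are immutable, appending a turn yields a new Session value rather than modifying the old one, which is exactly the property the audit trail depends on.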
Session Lifecycle¶
A session follows this lifecycle:
- Created. The human provides a scenario and configuration. The collaboration engine (src/fcc/collaboration/engine.py) initializes the session.
- Active. The workflow engine traverses the graph, activating personas at each node. Each activation is recorded as a turn.
- Paused (optional). If an approval gate is reached or an escalation occurs, the session pauses and waits for human input.
- Resumed. The human provides input (approval, rejection, or modification), and the session returns to Active.
- Completed. All nodes in the graph have been traversed, all quality gates have passed, and the final deliverable is produced.
- Escalated. A hard-stop governance rule is violated, or an approval gate times out without a response; the session terminates and a human takes over (see the state diagram above).
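The state diagram at the top of this chapter implies a fixed set of legal transitions. A minimal sketch of how an engine might enforce them -- illustrative only, not the actual code in engine.py:

```python
from enum import Enum

class SessionStatus(Enum):
    CREATED = "created"
    ACTIVE = "active"
    PAUSED = "paused"
    COMPLETED = "completed"
    ESCALATED = "escalated"

# Legal transitions, read directly off the state diagram; anything else is rejected.
TRANSITIONS = {
    SessionStatus.CREATED: {SessionStatus.ACTIVE},
    SessionStatus.ACTIVE: {SessionStatus.ACTIVE, SessionStatus.PAUSED,
                           SessionStatus.COMPLETED, SessionStatus.ESCALATED},
    SessionStatus.PAUSED: {SessionStatus.ACTIVE, SessionStatus.ESCALATED},
}

def transition(current: SessionStatus, target: SessionStatus) -> SessionStatus:
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

Note that Active --> Active is a legal transition: each add_turn() keeps the session Active, matching the self-loop in the diagram.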
Turns¶
A turn is a single unit of work within a session. Each turn records:
- The persona that acted.
- The input it received.
- The output it produced.
- The timestamp.
- Any quality gate evaluations that occurred.
- Any governance checks that were performed.
Turns are the fundamental unit of the audit trail. By reviewing the turns of a session, a human can trace exactly what happened, who did it, and what quality checks were applied at every step.
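A hypothetical turn record mirroring that list (the actual fields live in models.py), together with the audit view it enables:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical turn record; field names mirror the bullet list above.
@dataclass(frozen=True)
class Turn:
    persona: str                   # the persona that acted
    input_summary: str             # the input it received
    output_summary: str            # the output it produced
    timestamp: datetime
    gate_results: tuple = ()       # quality gate evaluations, e.g. ("coverage: pass",)
    governance_checks: tuple = ()  # governance checks performed

def audit_trail(turns) -> None:
    """Print who did what, when, and which checks applied at each step."""
    for t in turns:
        print(f"{t.timestamp.isoformat()}  {t.persona}: {t.output_summary}")
        for check in t.gate_results + t.governance_checks:
            print(f"    check: {check}")
```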
Approval Gates¶
An approval gate is a point in the workflow where the system pauses for human review. Gates are placed at strategic points -- typically after major deliverables are produced or before irreversible actions are taken.
Each approval gate specifies:
- What to present: The artifact(s) the human should review.
- What to ask: A prompt for the human's input (e.g., "Does this competitive analysis meet your expectations?").
- Options: Approve, reject with feedback, or modify directly.
- Timeout: How long to wait before escalating if no human response is received.
Approval gates are different from quality gates. Quality gates are automated -- they check measurable thresholds. Approval gates are human -- they check subjective judgment, strategic alignment, or organizational context that automated systems cannot assess.
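A sketch of what a gate definition could look like, using hypothetical field names taken from the four bullets above:

```python
from dataclasses import dataclass

# Hypothetical approval-gate definition; the real structure may differ.
@dataclass(frozen=True)
class ApprovalGate:
    artifacts: tuple             # what to present for review
    prompt: str                  # what to ask the reviewer
    options: tuple = ("approve", "reject_with_feedback", "modify")
    timeout_seconds: int = 3600  # escalate if no human response arrives in time

gate = ApprovalGate(
    artifacts=("competitive_analysis.md",),
    prompt="Does this competitive analysis meet your expectations?",
)
```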
Scoring¶
The scoring engine (src/fcc/collaboration/scoring.py) evaluates deliverable quality at multiple levels:
- Gate-level scoring: Each quality gate produces a pass/fail result with an optional numeric score.
- Turn-level scoring: Each turn's output is scored against the persona's expected output specification.
- Session-level scoring: The overall session is scored based on the aggregation of turn-level and gate-level scores.
Scores are stored as part of the session record. They enable trend analysis across sessions: "Are our outputs improving over time?" "Which personas consistently produce high-scoring deliverables?" "Which quality gates are most frequently failed?"
The scoring engine also supports human ratings. At approval gates, the human reviewer can assign a 1--5 rating to the deliverable, along with free-text feedback. Human ratings are recorded alongside automated scores, enabling calibration between human judgment and automated evaluation.
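A rough illustration of how the three levels and human ratings might combine; the weights and normalization here are assumptions, not the actual formula in scoring.py:

```python
# Assumed convention: automated scores are on a 0-1 scale, human ratings on 1-5.
def session_score(turn_scores, gate_passes, human_ratings):
    avg_turn = sum(turn_scores) / len(turn_scores) if turn_scores else 0.0
    pass_rate = sum(gate_passes) / len(gate_passes) if gate_passes else 1.0
    automated = 0.5 * avg_turn + 0.5 * pass_rate  # hypothetical equal weighting
    if not human_ratings:
        return automated
    # Map 1-5 ratings onto 0-1 so human judgment and automated scores land
    # on the same scale -- the basis for calibrating one against the other.
    human = sum((r - 1) / 4 for r in human_ratings) / len(human_ratings)
    return 0.5 * automated + 0.5 * human

print(session_score([0.8, 0.9], [True, True, False], [4, 5]))  # ~0.82
```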
Handoff Protocols¶
When control passes from AI to human (at an approval gate) or from human to AI (after approval), the collaboration engine follows a handoff protocol:
- Context transfer. The engine packages the current session state -- findings, artifacts, scores, and unresolved issues -- into a structured handoff document.
- Expectation setting. The handoff document includes what the receiving party (human or AI) is expected to do next.
- Acknowledgment. The receiving party acknowledges the handoff before proceeding.
This protocol prevents the common failure mode where a human reviewer receives a raw artifact with no context, or an AI agent resumes work without knowing what the human changed during their review.
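A minimal sketch of such a handoff document, with hypothetical names for the three protocol steps:

```python
from dataclasses import dataclass

# Hypothetical handoff document; the field names map onto the three steps above.
@dataclass
class Handoff:
    session_id: str
    artifacts: list              # context transfer: findings, artifacts, scores
    unresolved_issues: list      # context transfer: what is still open
    expected_action: str         # expectation setting: what the receiver should do
    acknowledged: bool = False   # acknowledgment: flipped before work resumes

    def acknowledge(self) -> None:
        # The receiving party must call this before proceeding; the engine
        # can refuse to resume a session whose last handoff is unacknowledged.
        self.acknowledged = True
```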
Progress Tracking¶
The progress tracker (src/fcc/collaboration/progress.py) monitors completion across the workflow graph:
- How many nodes have been traversed.
- How many quality gates have been evaluated (and how many passed).
- How many approval gates have been cleared.
- Estimated time to completion.
Progress is surfaced through the CLI dashboard (Book 2, Chapter 7) and the event bus (Book 2, Chapter 6), enabling real-time visibility into long-running sessions.
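A simple illustration of the kind of summary such a tracker can produce; the function and its naive time estimate are assumptions, not the actual interface of progress.py:

```python
# Hypothetical progress summary over the workflow graph.
def progress_summary(nodes_done, nodes_total, gates_passed, gates_evaluated,
                     approvals_cleared, avg_seconds_per_node):
    remaining = nodes_total - nodes_done
    return {
        "nodes": f"{nodes_done}/{nodes_total} traversed",
        "quality_gates": f"{gates_passed}/{gates_evaluated} passed",
        "approval_gates_cleared": approvals_cleared,
        # Naive ETA: remaining nodes times the observed average node duration.
        "eta_seconds": remaining * avg_seconds_per_node,
    }

print(progress_summary(6, 10, 11, 12, 2, 45))  # {'nodes': '6/10 traversed', ...}
```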
Session Recording and Replay¶
The session recorder (src/fcc/collaboration/recording.py) persists every session to JSON, including all turns, scores, ratings, and events. Recorded sessions can be:
- Replayed for debugging: step through each turn to understand what happened.
- Compared for evaluation: run the same scenario with different configurations and compare the session scores.
- Audited for compliance: provide a complete record of all decisions, evaluations, and approvals.
Replay integrates with the event bus. During replay, the recorder re-emits all events in chronological order, allowing subscribers (logging, metrics, dashboards) to process them as if the session were running live.
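In outline, replay can be as simple as loading the recorded JSON and re-emitting its events in timestamp order. The file layout and emit callback below are assumptions, not the actual API of recording.py:

```python
import json

def replay(path: str, emit) -> None:
    """Re-emit a recorded session's events as if it were running live."""
    with open(path) as f:
        session = json.load(f)
    # Chronological order lets subscribers (logging, metrics, dashboards)
    # process the stream exactly as they would during a live session.
    for event in sorted(session["events"], key=lambda e: e["timestamp"]):
        emit(event)

# Usage: wire emit to the event bus, or to print for quick inspection.
# replay("session_0042.json", emit=print)
```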
Shared Context¶
The shared context (src/fcc/collaboration/context.py) provides an auditable key-value workspace that all personas and human reviewers can read from and write to during a session. Every read and write is logged, creating a transparent record of how shared information evolved over the session's lifetime.
Shared context is particularly important for multi-cycle workflows where findings from an early FCC cycle need to be available to later cycles. Without shared context, later personas would need to re-derive information that was already discovered.
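A minimal sketch of such a workspace, where every access is appended to a log (the real context.py will differ in detail):

```python
from datetime import datetime, timezone

# Hypothetical auditable key-value workspace.
class SharedContext:
    def __init__(self):
        self._data = {}
        self.log = []  # every read and write, in chronological order

    def write(self, actor: str, key: str, value) -> None:
        self.log.append((datetime.now(timezone.utc), actor, "write", key))
        self._data[key] = value

    def read(self, actor: str, key: str):
        self.log.append((datetime.now(timezone.utc), actor, "read", key))
        return self._data[key]

# A finding written in an early FCC cycle stays readable in later cycles:
# ctx.write("analyst", "key_finding", "...")
# ctx.read("strategist", "key_finding")
```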
Key Takeaways¶
- The collaboration model defines where the human-AI boundary lies: session initiation, approval gates, and escalation.
- Sessions are immutable records of turns, scores, and events.
- Approval gates enable subjective human review at strategic workflow points.
- The scoring engine evaluates quality at gate, turn, and session levels, with both automated and human ratings.
- Handoff protocols ensure context is preserved when control passes between human and AI.
- Session recording and replay enable debugging, comparison, and audit.
Cross-References¶
- Chapter 6: Ecosystem Overview -- how collaboration works across projects
- FCC Guidebook, Chapter 8 -- full collaboration engine reference
- Notebook 07: Collaboration Sessions -- interactive session walkthrough
- Book 2, Chapter 7: Collaboration Sessions -- building custom sessions