What is FCC?

Documentation at scale is a coordination problem. A single author can write a good tutorial. But producing hundreds of consistent, high-quality documents -- tutorials, runbooks, API references, user guides -- requires a team. When that team is composed of AI agents, you need a framework to orchestrate who does what, in what order, and to what standard.

The FCC (Find, Create, Build, Critique, Ops) Agent Team Framework solves this problem. It provides a structured methodology for coordinating 147 specialized AI agent personas (102 core + 45 vertical) through repeatable documentation workflows, producing consistent output at scale. The framework includes a plugin system for ecosystem extensibility, a collaboration engine for human-in-the-loop review, and an event-driven observability layer for auditing every step.

The Problem

Consider what happens when you ask a single large language model to "write documentation" for a complex system. You get output, but you do not get:

  • Structured research that identifies what needs to be documented and what sources exist.
  • Specialized creation where different document types (tutorials vs. runbooks vs. executive summaries) are handled by agents tuned for those formats.
  • Systematic critique where output is reviewed against quality gates before it ships.
  • Governance that ensures compliance, traceability, and anti-hallucination safeguards.

FCC addresses each of these gaps by decomposing the documentation problem into five phases, assigning specialized personas to each phase, and connecting them through workflow graphs.

The Five Phases

FCC organizes documentation work into a continuous cycle of five phases.

graph LR
    F["Find<br/>Research & Discovery"] --> C["Create<br/>Drafting & Assembly"]
    C --> B["Build<br/>Engineering & Pipelines"]
    B --> CR["Critique<br/>Review & Validation"]
    CR --> O["Ops<br/>Deployment & Monitoring"]
    O -->|feedback| F

    style F fill:#4CAF50,color:#fff,stroke:#388E3C
    style C fill:#2196F3,color:#fff,stroke:#1565C0
    style B fill:#9C27B0,color:#fff,stroke:#7B1FA2
    style CR fill:#FF9800,color:#fff,stroke:#EF6C00
    style O fill:#795548,color:#fff,stroke:#5D4037

Find

The Find phase is about research and discovery. Personas in this phase survey the landscape: what documentation already exists, what capabilities need to be covered, what sources are authoritative, and where the gaps lie. The Research Crafter leads this phase, producing capability matrices, annotated references, and traceability maps that feed the next phase.
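The capability matrices this phase produces can be pictured as a simple coverage map. A minimal sketch, assuming an illustrative row structure (the real matrix format is not specified here):

```python
# Hypothetical sketch: the kind of capability matrix a Find-phase
# persona might emit, mapping capabilities to sources and gaps.
# Field names and paths are illustrative, not FCC's actual schema.
capability_matrix = [
    {"capability": "user onboarding", "covered": True,
     "sources": ["docs/guides/onboarding.md"]},
    {"capability": "incident response", "covered": False,
     "sources": []},  # a gap: no authoritative source found yet
]

def find_gaps(matrix):
    """Return the capabilities that still lack documentation."""
    return [row["capability"] for row in matrix if not row["covered"]]

print(find_gaps(capability_matrix))  # ['incident response']
```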

Create

The Create phase transforms research into documentation artifacts. Different personas specialize in different output types. The Blueprint Crafter produces architecture documentation. The User Guide Crafter produces end-user guides. The Runbook Crafter produces operational procedures. Each persona follows its own R.I.S.C.E.A.R. specification, which defines its role, style, constraints, and expected outputs.
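A persona specification of this kind can be sketched as a small data structure. This is a hypothetical illustration of the role/style/constraints/outputs fields the text describes; the field names and example values are assumptions, not the framework's actual R.I.S.C.E.A.R. schema:

```python
from dataclasses import dataclass, field

@dataclass
class PersonaSpec:
    """Illustrative persona spec: role, style, constraints, outputs."""
    name: str
    role: str
    style: str
    constraints: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)

# A hypothetical Runbook Crafter spec, for flavor only.
runbook_crafter = PersonaSpec(
    name="Runbook Crafter",
    role="Produce operational procedures from research inputs",
    style="Imperative, step-numbered, written for on-call engineers",
    constraints=["cite a source for every command", "no speculative steps"],
    outputs=["runbook.md"],
)
```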

Build

The Build phase handles engineering infrastructure: CI/CD pipelines, data engineering workflows, ML lifecycle management, and DevOps automation. Personas like the Data Pipeline Engineer, Feature Engineer, and CI/CD Orchestrator ensure that documentation artifacts are integrated into production systems with proper automation and testing.

Critique

The Critique phase validates what was created. The Documentation Evangelist reviews output against style guides and quality standards. The Blueprint Validator checks architectural documentation for completeness. Governance personas like the Anti-fact Mitigation Specialist verify source attribution and flag potential hallucinations. Feedback from the Critique phase flows back to Find, creating a continuous improvement loop.
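A quality gate of the kind this phase applies can be sketched as a function that inspects an artifact and reports failures. Gate names, thresholds, and the artifact shape below are assumptions for illustration:

```python
# Minimal sketch of a Critique-phase check, assuming a simple
# artifact dict. Gate names and thresholds are illustrative.
def run_quality_gates(artifact: dict) -> list[str]:
    """Return the names of the gates the artifact fails."""
    failures = []
    if not artifact.get("sources"):
        failures.append("source-attribution")  # anti-hallucination check
    if artifact.get("style_score", 0.0) < 0.8:
        failures.append("style-guide")
    return failures

draft = {"title": "Deploy runbook", "sources": [], "style_score": 0.9}
print(run_quality_gates(draft))  # ['source-attribution']
```

An artifact that fails a gate would be routed back toward Find for more research rather than shipped, which is the feedback loop described above.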

Ops

The Ops phase covers deployment, monitoring, and operational maintenance. Personas in this phase manage the ongoing lifecycle of documentation systems, including observability, incident response, and continuous improvement based on production metrics.

When to Use FCC

FCC is designed for situations where documentation requirements exceed what a single author or a single prompt can handle effectively.

FCC is a good fit when you need to:

  • Generate documentation across multiple document types (tutorials, runbooks, guides, specifications) from a shared knowledge base.
  • Maintain consistency across a large documentation corpus where different sections are produced by different agents.
  • Apply governance controls -- quality gates, compliance checks, privacy classifications -- to AI-generated content.
  • Simulate documentation workflows before committing to production runs, using deterministic or AI-powered engines.
  • Scale a documentation program from a small pilot (5 core personas) to an enterprise deployment (147 personas across 20 core categories + 6 vertical packs with champion orchestration).
  • Integrate human-in-the-loop review via the collaboration engine with approval gates and quality scoring.
  • Extend the framework with custom plugins for personas, engines, templates, scorers, and more.

FCC may not be the right fit when:

  • You need a single document written once. A direct prompt to a language model is simpler.
  • Your documentation has no quality or compliance requirements. FCC's governance layer adds structure that may be unnecessary for informal content.
  • You are not working with AI-generated content. FCC is purpose-built for agent orchestration.

How FCC Works in Practice

A typical FCC workflow proceeds as follows:

  1. Define a scenario -- A scenario specifies what documentation needs to be produced, what inputs are available, and what quality gates apply. FCC ships with starter scenarios like GEN-001 (general documentation generation).

  2. Select a workflow graph -- FCC provides multiple workflow graphs of increasing complexity: a 5-node base sequence (core personas only), a 20-node extended sequence (core plus integration and governance), a 24-node complete graph (all personas including champions), a 55-node extended graph (all 102 core personas), and solution-level EAIFC graphs.

  3. Run a simulation -- The simulation engine traverses the workflow graph, invoking each persona in sequence. In mock mode, it generates deterministic traces for testing. In AI mode, it calls language model APIs with persona-aware system prompts.

  4. Review traces and output -- Each simulation produces a trace file recording every step: which persona acted, what it received, what it produced, and how long it took. Traces are structured JSON, suitable for analysis and auditing.

  5. Generate documentation -- The docs-as-code generator uses Jinja2 templates to produce documentation files from persona specifications.
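The steps above can be sketched end to end with a toy deterministic traversal. This mocks the 5-node base sequence only; the real engine, persona names, and trace schema differ, and everything here is illustrative:

```python
import json

# The five-phase base sequence, traversed in order.
PHASES = ["Find", "Create", "Build", "Critique", "Ops"]

def simulate(scenario: str) -> list[dict]:
    """Mock-mode traversal: invoke each phase, record a trace entry."""
    trace, payload = [], {"scenario": scenario}
    for step, phase in enumerate(PHASES, start=1):
        output = {**payload, "handled_by": phase}  # mock persona output
        trace.append({"step": step, "persona": phase,
                      "input": payload, "output": output})
        payload = output
    return trace

trace = simulate("GEN-001")
print(json.dumps(trace, indent=2)[:80])  # structured JSON, as in step 4
```

Each trace entry mirrors what the text describes: which persona acted, what it received, and what it produced.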

Architecture at a Glance

FCC is a Python package (src/fcc/) organized into twelve modules:

| Module | Purpose |
| --- | --- |
| personas/ | Persona models, registry, YAML loading, dimensions, cross-references |
| workflow/ | Workflow graph models, action engine, 6-action-type system |
| simulation/ | Deterministic and AI-powered simulation engines, trace generation |
| scenarios/ | Scenario models, loading, dynamic validation |
| scaffold/ | CLI tools, project scaffolding, docs-as-code generation |
| governance/ | Capability tags, quality gates, compliance checks, constitutions |
| messaging/ | Thread-safe event bus with 81 event types, filtering, serialization, replay |
| plugins/ | 10 plugin types with entry-point discovery, validation, and orchestration |
| observability/ | Structured tracing (OTel-compatible), metrics collection, exporters |
| collaboration/ | Human-in-the-loop sessions, scoring, approval gates, progress tracking |
| dashboard/ | Terminal-based ASCII dashboards for ecosystem, personas, quality |
| ecosystem/ | Project registry, port allocation, dependency management |
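To give a feel for the messaging module, here is a tiny publish/subscribe sketch in its spirit. The real bus is thread-safe with 81 typed events; the class, method names, and event type below are assumptions for illustration only:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process pub/sub bus (illustrative, not the FCC API)."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subs[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subs[event_type]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("persona.completed", seen.append)  # hypothetical event type
bus.publish("persona.completed", {"persona": "Runbook Crafter"})
print(seen)  # [{'persona': 'Runbook Crafter'}]
```

Observability and auditing layers like the one the framework describes are naturally built on this pattern: every phase transition is published once and any number of listeners record it.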

For a complete list of framework terms, see the Glossary. To start using FCC immediately, proceed to the Quickstart.