Metrics

FCC provides measurable quality indicators at every level of the framework -- from code-level test coverage to persona-level documentation completeness. This page summarizes the key metrics available for tracking documentation program health.

Framework Quality Metrics

Test Coverage

The FCC codebase maintains rigorous test coverage as a baseline quality indicator.

| Metric | Value |
| --- | --- |
| Total tests | 12,100+ |
| Code coverage | 100% |
| Minimum coverage target | 99% |
| Python versions tested | 3.10, 3.11, 3.12, 3.13 |

Full test coverage means every code path in the framework is exercised, reducing the risk of regressions when extending or customizing the framework.
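
One way to hold the line on the 99% minimum is to make the test run fail whenever coverage drops below it. A minimal `coverage.py` configuration sketch in `pyproject.toml`, assuming the project uses pytest with pytest-cov (the exact tooling is not stated above):

```toml
# Minimal coverage.py configuration enforcing the 99% floor;
# the test run fails if measured coverage drops below it.
[tool.coverage.report]
fail_under = 99
```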

Persona Specification Completeness

Each persona can be profiled across multiple specification layers. Completeness is measured by how many layers are populated.

| Specification Layer | Components | Per Persona |
| --- | --- | --- |
| R.I.S.C.E.A.R. (base 7) | Role, Inputs, Style, Constraints, Expected Output, Archetype, Responsibilities | Required |
| R.I.S.C.E.A.R. (extended 3) | Role Skills, Role Collaborators, Role Adoption Checklist | Required |
| Discernment Matrix | 6 traits x 7 rating dimensions | 42 data points |
| Design Target Factors | 6 factors x 7 rating dimensions | 42 data points |
| Dimension Profile | 9 categories, 56 dimensions | Up to 56 data points |
| Deliverables | Named outputs with descriptions | Variable |
| Collaboration Links | Typed interaction records | Variable |

A fully specified persona has over 140 discrete data points defining its behavior, relationships, and assessment criteria.
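
The "over 140" figure follows directly from the fixed layers in the table above (Deliverables and Collaboration Links are variable and excluded); a quick tally:

```python
# Tally the fixed discrete data points in a fully specified persona,
# using the layer counts from the specification table above.
LAYERS = {
    "riscear_base": 7,
    "riscear_extended": 3,
    "discernment_matrix": 6 * 7,     # 6 traits x 7 rating dimensions
    "design_target_factors": 6 * 7,  # 6 factors x 7 rating dimensions
    "dimension_profile": 56,         # up to 56 dimensions
}

total = sum(LAYERS.values())
print(total)  # 150 fixed data points, i.e. "over 140"
```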

Quality Gate Metrics

Gate Distribution

FCC's 25 quality gates are distributed across all persona categories.

| Category | Gates | Average Threshold |
| --- | --- | --- |
| Core | 6 | 0.96 |
| Integration | 7 | 1.00 |
| Governance | 3 | 1.00 |
| Stakeholder | 5 | 0.95 |
| Champion | 4 | 1.00 |

Higher thresholds indicate stricter pass/fail criteria. Integration, governance, and champion gates uniformly require 100% compliance, reflecting the critical nature of their outputs.
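
A single strictness figure for the whole gate set can be derived as a gate-count-weighted average of the category thresholds above; a small sketch:

```python
# Overall gate strictness as a gate-count-weighted average of the
# per-category thresholds from the distribution table above.
GATES = {
    "core": (6, 0.96),
    "integration": (7, 1.00),
    "governance": (3, 1.00),
    "stakeholder": (5, 0.95),
    "champion": (4, 1.00),
}

total_gates = sum(n for n, _ in GATES.values())  # 25
weighted = sum(n * t for n, t in GATES.values()) / total_gates
print(f"{total_gates} gates, weighted average threshold {weighted:.3f}")
```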

Gate Pass Rates

In a well-configured deployment, gate pass rates provide early warning of quality issues.

  • First-pass rate: The percentage of personas whose output meets the quality gate threshold on the first simulation run. A declining first-pass rate signals prompt degradation or input quality issues.
  • Remediation rate: The percentage of gate failures resolved within one feedback cycle. High remediation rates indicate that the critique phase is functioning effectively.
  • Cumulative pass rate: The percentage of gates passed after all feedback cycles complete. This should converge to 100% in a healthy workflow.
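
The three rates above can be computed from per-gate results; in this sketch each record is a hypothetical tuple of (passed first run, resolved within one cycle, passed after all cycles), not the framework's actual trace schema:

```python
# Compute first-pass, remediation, and cumulative pass rates from
# hypothetical per-gate result records.
results = [
    (True,  True,  True),
    (False, True,  True),
    (False, False, True),
    (True,  True,  True),
]

n = len(results)
first_pass = sum(r[0] for r in results) / n
failures = [r for r in results if not r[0]]
remediation = sum(r[1] for r in failures) / len(failures) if failures else 1.0
cumulative = sum(r[2] for r in results) / n

print(first_pass, remediation, cumulative)  # 0.5 0.5 1.0
```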

Documentation Output Metrics

Generation Volume

A full documentation generation run produces:

| Output Type | Count |
| --- | --- |
| Tutorial files | 504 |
| Prompt template files | 504 |
| Workflow files | 144 |
| Cross-reference files | 196 |
| Total | 1,348 |

Documentation Completeness

Completeness is measured per persona as the ratio of generated files to expected files. The `fcc validate-docs` command checks for empty or missing files and reports issues.

| Level | Measurement |
| --- | --- |
| Per persona | Files generated / files expected (target: 56 per persona) |
| Per category | Personas with complete documentation / total personas in category |
| Overall | Total valid files / 1,348 expected files |
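
The per-persona ratio is straightforward to compute; in this sketch the persona names and file counts are hypothetical placeholders, with only the 56-file target taken from the table above:

```python
# Per-persona completeness ratios against the 56-file target.
EXPECTED_PER_PERSONA = 56

generated = {"navigator": 56, "archivist": 54, "champion": 56}  # hypothetical counts

per_persona = {name: count / EXPECTED_PER_PERSONA for name, count in generated.items()}
complete = [name for name, ratio in per_persona.items() if ratio >= 1.0]
print(complete)  # personas meeting the target
```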

Simulation Performance Metrics

Each simulation run records performance data in the trace output.

| Metric | Description |
| --- | --- |
| Total steps | Number of persona invocations in the workflow |
| Total AI calls | Number of language model API calls made |
| Total tokens | Aggregate token consumption across all calls |
| Total latency (ms) | Cumulative API response time |
| Duration (seconds) | Wall-clock time for the entire simulation |

These metrics enable cost modeling (tokens translate directly to API costs), performance optimization (identifying slow personas), and capacity planning (estimating throughput for larger workflows).
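
As a sketch of that cost modeling, the trace fields below mirror the table above, while the trace values and the per-token price are placeholder assumptions, not real rates:

```python
# Rough cost and per-call latency estimates from a simulation trace.
trace = {
    "total_steps": 12,
    "total_ai_calls": 30,
    "total_tokens": 450_000,
    "total_latency_ms": 90_000,
    "duration_seconds": 110,
}

PRICE_PER_1K_TOKENS = 0.002  # placeholder rate, not a real price
cost = trace["total_tokens"] / 1000 * PRICE_PER_1K_TOKENS
avg_latency = trace["total_latency_ms"] / trace["total_ai_calls"]
print(f"estimated cost ${cost:.2f}, avg latency {avg_latency:.0f} ms/call")
```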

Time-to-Documentation Improvement

FCC's primary efficiency metric is the time from documentation request to validated output. The framework reduces this through:

  • Elimination of coordination overhead: Workflow graphs define execution order programmatically, replacing manual scheduling and handoff tracking.
  • Parallel execution potential: Independent personas within the same phase can execute concurrently.
  • Automated quality checks: Quality gates replace manual review cycles for structural and completeness checks.
  • Reusable scenarios: Once a scenario is defined, it can be re-executed against updated inputs without reconfiguration.
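
The parallel execution point can be sketched with the standard library; the persona names and the `run_persona` function here are hypothetical stand-ins for the framework's own execution layer:

```python
# Run independent personas from the same phase concurrently.
from concurrent.futures import ThreadPoolExecutor

def run_persona(name: str) -> str:
    # Placeholder for a real persona invocation (an AI call in practice).
    return f"{name}: done"

phase = ["technical-writer", "reviewer", "indexer"]  # hypothetical phase members

with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_persona, phase))  # preserves input order
print(results)
```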

Next Steps