Testing Guide¶
The FCC framework maintains 100% test coverage across its test suite. This guide documents the testing patterns, fixture conventions, and strategies used to achieve and sustain this coverage level.
Coverage Policy¶
| Metric | Value |
|---|---|
| Minimum required | 99% |
| Current target | 100% |
| Current actual | 100% |
| Test count | 757 |
Coverage is enforced in CI. A pull request that drops coverage below 99% will fail the pipeline check. The goal is to maintain 100% -- every new feature or bug fix should include corresponding tests.
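One way the minimum can be enforced locally before pushing is pytest-cov's fail-under flag; the exact invocation wired into this project's CI pipeline may differ, so treat this as a sketch:

```shell
# Fail the run if total coverage drops below the CI minimum of 99%
pytest --cov=src/fcc --cov-fail-under=99
```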
Test File Organization¶
Tests mirror the source structure under tests/. Every module in src/fcc/ has a corresponding test module:
src/fcc/personas/models.py → tests/test_personas/test_models.py
src/fcc/workflow/graph.py → tests/test_workflow/test_graph.py
src/fcc/simulation/engine.py → tests/test_simulation/test_engine.py
src/fcc/scaffold/cli.py → tests/test_scaffold/test_cli.py
src/fcc/governance/tags.py → tests/test_governance/test_tags.py
This convention makes it easy to find tests for any given module and ensures that no module is accidentally left untested.
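Because the mirroring rule is mechanical, it can even be expressed as a small path-mapping helper and meta-tested. The sketch below is illustrative only and not part of the framework; `expected_test_path` is a hypothetical name:

```python
from pathlib import Path

def expected_test_path(src_path: str) -> str:
    """Map src/fcc/<pkg>/<mod>.py to its mirrored tests/test_<pkg>/test_<mod>.py."""
    rel = Path(src_path).relative_to("src/fcc")          # e.g. personas/models.py
    test_dirs = [f"test_{part}" for part in rel.parts[:-1]]
    return str(Path("tests", *test_dirs, f"test_{rel.name}"))

print(expected_test_path("src/fcc/personas/models.py"))
# tests/test_personas/test_models.py
```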
Fixture Conventions¶
Persona Fixtures¶
Reusable persona fixtures provide consistent test data across modules:
import pytest

from fcc.personas.models import PersonaSpec, RISCEARSpec


@pytest.fixture
def sample_riscear():
    return RISCEARSpec(
        role="Test analyst",
        inputs=["Source code", "Requirements"],
        style="Analytical and structured",
        constraints=["Relevant to scope", "No duplication"],
        expected_output=["Capability matrix", "Research inventory"],
        archetype="The Investigator",
        responsibilities=["Curate knowledge bases"],
        role_skills=["Research methodology"],
        role_collaborators=["Delivers to BC"],
        role_adoption_checklist=["Inventory complete"],
    )


@pytest.fixture
def sample_persona(sample_riscear):
    return PersonaSpec(
        id="RC",
        name="Research Crafter",
        fcc_phase="Find",
        role_title="Senior Analyst",
        riscear=sample_riscear,
    )
Registry Fixtures¶
Load the real registry from data files for integration tests:
@pytest.fixture
def full_registry():
    from pathlib import Path

    from fcc.personas.registry import PersonaRegistry

    data_dir = Path(__file__).parent.parent / "data" / "personas"
    return PersonaRegistry.from_yaml_directory(data_dir)
Workflow Fixtures¶
@pytest.fixture
def base_graph():
    from pathlib import Path

    from fcc.workflow.graph import WorkflowGraph

    path = Path(__file__).parent.parent / "data" / "workflows" / "base_sequence.json"
    return WorkflowGraph.from_json(path)
Temporary File Fixtures¶
Use tmp_path for tests that write files:
import json


def test_write_traces(tmp_path, results):  # `results` comes from a fixture defined elsewhere
    output = tmp_path / "traces.json"
    write_traces(results, str(output))
    assert output.exists()
    data = json.loads(output.read_text())
    assert len(data) > 0
Parametrized Testing¶
Parametrized tests are used extensively to validate behavior across all personas, categories, and edge types without duplicating test logic.
Testing All Personas¶
import pytest

ALL_PERSONA_IDS = ["RC", "BC", "DE", "RB", "UG", "CIA", "UMC", "STE", "TS",
                   "BV", "RIC", "GCA", "DGS", "PTE", "AMS", "CO", "SMC",
                   "EC", "RS", "SCP", "RCHM", "BCHM", "UGCH", "RBCH"]


@pytest.mark.parametrize("persona_id", ALL_PERSONA_IDS)
def test_persona_has_riscear(full_registry, persona_id):
    persona = full_registry[persona_id]
    assert persona.riscear.role
    assert len(persona.riscear.inputs) > 0
    assert persona.riscear.archetype
Testing All Workflow Graphs¶
import pytest
from pathlib import Path

from fcc.workflow.graph import WorkflowGraph

WORKFLOW_FILES = ["base_sequence.json", "extended_sequence.json", "complete_24.json"]


@pytest.mark.parametrize("workflow_file", WORKFLOW_FILES)
def test_workflow_loads(workflow_file):
    path = Path("data/workflows") / workflow_file
    graph = WorkflowGraph.from_json(path)
    assert len(graph) > 0
Testing All Quality Gates¶
@pytest.mark.parametrize("gate", quality_gates)  # `quality_gates` is loaded at module scope
def test_gate_has_checks(gate):
    assert gate.id.startswith("QG-")
    assert len(gate.checks) > 0
    assert 0.0 <= gate.threshold <= 1.0
Testing Patterns¶
Dataclass Immutability¶
Verify that frozen dataclasses reject mutation:
from dataclasses import FrozenInstanceError


def test_persona_is_immutable(sample_persona):
    with pytest.raises(FrozenInstanceError):
        sample_persona.name = "Modified"
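The pattern relies on dataclasses declared with frozen=True, which makes any attribute assignment raise FrozenInstanceError. A minimal standalone illustration (not the framework's actual PersonaSpec):

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Persona:
    name: str

p = Persona(name="Research Crafter")
try:
    p.name = "Modified"            # frozen dataclasses reject assignment
except FrozenInstanceError:
    print("mutation rejected")     # prints "mutation rejected"
```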
Round-Trip Serialization¶
Verify that from_dict and to_dict are inverses:
def test_rating_dimensions_roundtrip():
    original = RatingDimensions(self_rating=0.8, peer_rating=0.9)
    data = original.to_dict()
    restored = RatingDimensions.from_dict(data)
    assert restored == original
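What makes the round-trip hold is that from_dict simply re-invokes the constructor on to_dict's output. A minimal sketch of such a pair — the framework's real RatingDimensions may be implemented differently:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RatingDimensions:
    self_rating: float
    peer_rating: float

    def to_dict(self) -> dict:
        # asdict produces a plain dict keyed by field name
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict) -> "RatingDimensions":
        # Re-invoking the constructor makes from_dict the inverse of to_dict
        return cls(**data)

original = RatingDimensions(self_rating=0.8, peer_rating=0.9)
assert RatingDimensions.from_dict(original.to_dict()) == original
```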
CLI Testing¶
Use Click's CliRunner for testing CLI commands:
from click.testing import CliRunner

from fcc.scaffold.cli import main


def test_validate_command(tmp_path):
    runner = CliRunner()
    result = runner.invoke(main, ["validate", "--dir", str(tmp_path)])
    assert result.exit_code == 1  # Missing required files
Graph Traversal¶
Test BFS and topological sort for correctness:
def test_bfs_visits_all_nodes(base_graph):
    order = base_graph.bfs_from("RC")
    assert set(order) == set(base_graph.node_ids)


def test_topological_order_respects_handoffs(base_graph):
    order = base_graph.topological_order()
    rc_idx = order.index("RC")
    bc_idx = order.index("BC")
    assert rc_idx < bc_idx  # RC must come before BC
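The ordering property the second test asserts follows from topological sorting over handoff edges. A minimal Kahn's-algorithm sketch — independent of the framework's WorkflowGraph — shows why RC always precedes BC whenever there is an RC → BC handoff:

```python
from collections import deque

def topological_order(edges, nodes):
    """Kahn's algorithm: repeatedly emit nodes with no remaining predecessors."""
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
        indeg[dst] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

# Toy handoff chain: RC hands off to BC, BC hands off to DE.
order = topological_order([("RC", "BC"), ("BC", "DE")], ["DE", "BC", "RC"])
# order == ["RC", "BC", "DE"]: RC precedes BC in any valid ordering
```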
Schema Validation¶
Test that data files conform to their schemas:
def test_workflow_schema_validation():
    graph = WorkflowGraph.from_json_validated(
        "data/workflows/base_sequence.json",
        "data/schemas/workflow.schema.json",
    )
    assert len(graph) == 5
Running Tests¶
# Full suite with coverage report
make test
# Verbose output
pytest tests/ -v
# Specific module
pytest tests/test_personas/ -v
# Pattern matching
pytest -k "test_champion"
# Coverage report to terminal
pytest --cov=src/fcc --cov-report=term-missing
# HTML coverage report
pytest --cov=src/fcc --cov-report=html
Related Pages¶
- Contributing -- testing requirements for pull requests
- Architecture -- understanding what to test
- Extension Guide -- testing custom extensions