# Governance API

The `fcc.governance` package provides a tag registry for capability
classification and a quality gate runner for validating persona deliverables.
```mermaid
flowchart LR
    A[Artifact Dict] --> QGR[QualityGateRunner]
    QGR --> G1{Gate: completeness}
    QGR --> G2{Gate: accuracy}
    QGR --> G3{Gate: traceability}
    G1 -->|checks_passed / checks_total >= threshold| R1[GateResult: PASS]
    G1 -->|checks_passed / checks_total < threshold| R2[GateResult: FAIL]
    G2 --> R1
    G3 --> R1
    R1 --> Summary[Results Summary]
    R2 --> Summary
```
## TagRegistry

The `TagRegistry` manages capability tags organized in a three-level hierarchy:
supercategory > category > capability. The bundled tag registry ships 30
tags across multiple categories.
### Loading from YAML

```python
from fcc._resources import get_governance_dir
from fcc.governance.tags import TagRegistry

tags = TagRegistry.from_yaml(get_governance_dir() / "tag_registry.yaml")
print(f"Total tags: {len(tags)}")
```
### Querying Tags

```python
from fcc._resources import get_governance_dir
from fcc.governance.tags import TagRegistry

tags = TagRegistry.from_yaml(get_governance_dir() / "tag_registry.yaml")

# Look up a tag by capability name
tag = tags.get("research_synthesis")
if tag:
    print(f"Capability: {tag.capability}")
    print(f"Category: {tag.category}")
    print(f"Supercategory: {tag.supercategory}")

# Filter by category
analysis_tags = tags.by_category("analysis")
for t in analysis_tags:
    print(f"  {t.capability}")

# Filter by supercategory
creation_tags = tags.by_supercategory("creation")
for t in creation_tags:
    print(f"  {t.capability} ({t.category})")

# List all categories and supercategories
print(f"Categories: {tags.categories}")
print(f"Supercategories: {tags.supercategories}")

# Check membership
print("research_synthesis" in tags)  # True or False
```
### Iterating Over Tags
### Adding Tags Programmatically

```python
from fcc.governance.tags import Tag, TagRegistry

tags = TagRegistry()
tags.add(Tag(
    capability="custom_analysis",
    category="analysis",
    supercategory="research",
))
tags.add(Tag(
    capability="risk_assessment",
    category="evaluation",
    supercategory="governance",
))
print(f"Total: {len(tags)}")
```
### Writing to YAML

The written file has the format:

```yaml
tags:
  - capability: custom_analysis
    category: analysis
    supercategory: research
  - capability: risk_assessment
    category: evaluation
    supercategory: governance
```
## QualityGateRunner

The `QualityGateRunner` loads quality gate definitions from YAML and
executes them against artifact dictionaries. The bundled configuration
ships 25 gates across all personas.
### Loading Gates

```python
from fcc._resources import get_governance_dir
from fcc.governance.quality_gates import QualityGateRunner

runner = QualityGateRunner.from_yaml(
    get_governance_dir() / "quality_gates.yaml"
)
print(f"Total gates: {len(runner)}")
```
### Inspecting Gates

```python
from fcc._resources import get_governance_dir
from fcc.governance.quality_gates import QualityGateRunner

runner = QualityGateRunner.from_yaml(
    get_governance_dir() / "quality_gates.yaml"
)

# List all gates
for gate in runner.gates:
    print(f"{gate.id}: {gate.name} (persona: {gate.persona_id}, threshold: {gate.threshold})")
    for check in gate.checks:
        print(f"  - {check}")

# Gates for a specific persona
rc_gates = runner.gates_for_persona("RC")
print(f"RC has {len(rc_gates)} quality gates")
```
### Running a Single Gate

A gate evaluates each of its named checks against an artifact dictionary.
Keys in the artifact dict should match the check names, with boolean values
indicating pass or fail.

```python
from fcc.governance.quality_gates import QualityGate, QualityGateRunner

gate = QualityGate(
    id="RC-QG-001",
    name="Research completeness",
    persona_id="RC",
    checks=["has_sources", "has_methodology", "has_findings", "has_gaps"],
    threshold=0.75,  # 75% of checks must pass
)
runner = QualityGateRunner([gate])

artifact = {
    "has_sources": True,
    "has_methodology": True,
    "has_findings": True,
    "has_gaps": False,
}

result = runner.run_gate(gate, artifact)
print(f"Passed: {result.passed}")  # True (3/4 = 75% >= 75%)
print(f"Checks: {result.checks_passed}/{result.checks_total}")
for detail in result.details:
    print(f"  {detail}")
# has_sources: PASS
# has_methodology: PASS
# has_findings: PASS
# has_gaps: FAIL
```
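The threshold arithmetic above is simple enough to model in a few lines of plain Python. This is an illustrative stand-in for the decision logic described, not fcc's actual implementation:

```python
def pass_rate_met(checks: list[str], artifact: dict[str, bool], threshold: float) -> bool:
    """Model of the gate decision: the fraction of checks whose artifact
    value is True must be at least the threshold. Missing keys count as FAIL."""
    passed = sum(1 for c in checks if artifact.get(c, False))
    return passed / len(checks) >= threshold

checks = ["has_sources", "has_methodology", "has_findings", "has_gaps"]
artifact = {"has_sources": True, "has_methodology": True,
            "has_findings": True, "has_gaps": False}

print(pass_rate_met(checks, artifact, 0.75))  # True: 3/4 = 0.75 >= 0.75
print(pass_rate_met(checks, artifact, 0.80))  # False: 0.75 < 0.80
```

Note that the comparison is inclusive: exactly meeting the threshold passes, which matches the 3/4 example above.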
### Running All Gates

The `run_all` method takes a dictionary keyed by persona ID, where each
value is an artifact dict for that persona.

```python
from fcc._resources import get_governance_dir
from fcc.governance.quality_gates import QualityGateRunner

runner = QualityGateRunner.from_yaml(
    get_governance_dir() / "quality_gates.yaml"
)

# Build artifact dicts for each persona
artifacts = {
    "RC": {
        "has_sources": True,
        "has_methodology": True,
        "has_findings": True,
        "has_gaps": True,
        "has_citations": True,
    },
    "BC": {
        "has_architecture_diagram": True,
        "has_component_list": True,
        "has_interface_spec": False,
        "has_data_model": True,
    },
}

results = runner.run_all(artifacts)
passed_count = sum(1 for r in results if r.passed)
print(f"Gates passed: {passed_count}/{len(results)}")
for r in results:
    status = "PASS" if r.passed else "FAIL"
    print(f"  [{status}] {r.gate_id}: {r.checks_passed}/{r.checks_total}")
```
## GateResult

| Field | Type | Description |
|---|---|---|
| `gate_id` | `str` | The gate identifier |
| `passed` | `bool` | Whether the pass rate met the threshold |
| `checks_passed` | `int` | Number of checks that passed |
| `checks_total` | `int` | Total number of checks |
| `details` | `list[str]` | Per-check PASS/FAIL messages |
## QualityGate

| Field | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | (required) | Unique gate identifier |
| `name` | `str` | (required) | Human-readable name |
| `persona_id` | `str` | (required) | Persona this gate applies to |
| `checks` | `list[str]` | `[]` | Named checks to evaluate |
| `threshold` | `float` | `1.0` | Minimum pass rate (0.0 to 1.0) |
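The table translates naturally into a dataclass shape. The sketch below is a stand-in built from the fields and defaults listed above, not fcc's actual class definition:

```python
from dataclasses import dataclass, field

@dataclass
class QualityGateSketch:
    """Field shapes from the table above; not fcc's real class."""
    id: str
    name: str
    persona_id: str
    checks: list[str] = field(default_factory=list)  # default: no checks
    threshold: float = 1.0  # default: every check must pass

g = QualityGateSketch(id="XX-QG-001", name="Example gate", persona_id="XX")
print(g.checks, g.threshold)  # [] 1.0
```

The default threshold of `1.0` means a gate with no explicit threshold only passes when every one of its checks passes.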