Compliance Evidence Graph Data Flow¶
This page traces how EU AI Act requirements feed the FCC compliance pipeline, producing a persona risk classification, audit findings, an evidence graph, and a published report. It spans `compliance/requirements.py`, `compliance/classifier.py`, `compliance/auditor.py`, `compliance/evidence_graph.py`, and `compliance/report.py`. The registry currently holds 256+ EU AI Act requirements with NIST AI RMF crosswalks, and every finding carries structured `EvidenceItem` tuples so auditors can trace any claim back to its source. The resulting graph is a DAG suitable for RDF export.
The diagram below traces the classification-to-report pipeline.
```mermaid
flowchart LR
    subgraph Requirements
        R1[(RequirementRegistry)]
        R2[ComplianceRequirement]
    end
    subgraph Classify
        C1[PersonaSpec]
        C2[AIActClassifier.classify_persona]
        C3{RiskCategory}
    end
    subgraph Audit
        A1[ComplianceAuditor.audit_persona]
        A2[AuditFinding]
        A3[EvidenceItem]
    end
    subgraph Graph
        G1[EvidenceGraph.construct]
        G2[(Requirement node)]
        G3[(Finding node)]
        G4[(Evidence node)]
    end
    subgraph Report
        P1[ComplianceReport.generate]
        P2[Remediation plan]
    end
    R1 --> R2
    C1 --> C2 --> C3
    C3 -- UNACCEPTABLE / HIGH / LIMITED / MINIMAL --> A1
    R2 --> A1
    A1 --> A2 --> A3
    A2 --> G1
    R2 --> G1
    A3 --> G1
    G1 --> G2
    G1 --> G3
    G1 --> G4
    G2 -- has_finding --> G3
    G3 -- supported_by --> G4
    A2 --> P1 --> P2
```
Classification starts with `AIActClassifier.classify_persona(spec)`, which maps a persona's R.I.S.C.E.A.R. spec against risk indicators (autonomy, domain, human impact) and returns one of four `RiskCategory` values. The auditor then iterates the requirements whose `risk_category` is at or above the classified tier, running the declared checks and collecting `EvidenceItem(source, content, timestamp)` tuples.
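The classify-then-filter step can be sketched as follows. This is a minimal illustration, not the real FCC implementation: the `PersonaSpec` fields shown are an assumed subset of the R.I.S.C.E.A.R. spec, the domain sets are invented, and the actual classifier logic lives in `compliance/classifier.py`. An ordered enum makes the "at or above the classified tier" filter a simple comparison.

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskCategory(IntEnum):
    # IntEnum so tiers are ordered, enabling the "at or above" filter
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


@dataclass
class PersonaSpec:
    # Hypothetical subset of R.I.S.C.E.A.R. fields, for illustration only
    autonomy: str      # e.g. "full", "supervised", "advisory"
    domain: str        # e.g. "hiring", "chat", "games"
    human_impact: str  # e.g. "legal", "financial", "none"


HIGH_RISK_DOMAINS = {"hiring", "credit", "education"}  # illustrative list


def classify_persona(spec: PersonaSpec) -> RiskCategory:
    """Map risk indicators to one of the four AI Act tiers (sketch)."""
    if spec.domain == "social-scoring":
        return RiskCategory.UNACCEPTABLE
    if spec.domain in HIGH_RISK_DOMAINS or spec.human_impact == "legal":
        return RiskCategory.HIGH
    if spec.autonomy != "advisory":
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL
```

With an ordered tier, the auditor's Stage 3 filter reduces to `requirement.risk_category >= classified_tier`.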
`EvidenceGraph.construct(findings)` wires the findings into a DAG rooted at the requirement, with `has_finding` edges to each `AuditFinding` and `supported_by` edges to every `EvidenceItem`. `ComplianceReport.generate` aggregates scores, attaches the remediation plan, and links each finding back to the NIST AI RMF subcategory for dual-regulation reporting.
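The graph construction step amounts to building labelled adjacency lists with the two edge types from the diagram. A minimal sketch, assuming findings arrive as simple `(requirement_id, finding_id, evidence_ids)` triples rather than the real `AuditFinding` objects:

```python
from collections import defaultdict


def construct_evidence_graph(findings):
    """Wire findings into a DAG: a has_finding edge from each requirement
    to its finding, and a supported_by edge from each finding to every
    piece of evidence. Sketch only; the real EvidenceGraph.construct
    operates on full AuditFinding objects.
    """
    edges = defaultdict(list)
    for requirement_id, finding_id, evidence_ids in findings:
        edges[requirement_id].append(("has_finding", finding_id))
        for evidence_id in evidence_ids:
            edges[finding_id].append(("supported_by", evidence_id))
    return dict(edges)
```

Because edges only ever point from requirement to finding to evidence, the result is acyclic by construction, which is what makes the RDF export straightforward.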
Data shapes¶
- Stage 1 - Requirements: `RequirementRegistry.all_requirements` yields `ComplianceRequirement(id, regulation, article, risk_category, checks, nist_crosswalk, annex_refs, recital_refs)`.
- Stage 2 - Classify: `AIActClassifier.classify_persona(spec)` returns a `RiskCategory` enum value.
- Stage 3 - Filter: Auditor selects requirements where `requirement.risk_category` is at or above the classified tier.
- Stage 4 - Audit: Each `check` produces an `AuditFinding(requirement_id, status: FindingStatus, evidence, remediation: list[RemediationAction])`.
- Stage 5 - Evidence: `EvidenceItem(source, content, timestamp)` attached to each finding.
- Stage 6 - Graph: `EvidenceGraph.construct(findings)` builds a DAG of Requirement -> Finding -> Evidence.
- Stage 7 - Report: `ComplianceReport(findings, overall_status, compliance_score, remediation_plan)` serialised to JSON or RDF.
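Stage 7 can be sketched as a simple aggregation over finding statuses. The field names below mirror the shapes listed above, but the scoring formula and the plain-dict representation are assumptions for illustration, not the real `ComplianceReport.generate`:

```python
import json


def generate_report(findings):
    """Aggregate findings into a serialisable report (sketch).

    `findings` is a list of dicts with "requirement_id", "status"
    ("compliant" / "non_compliant"), and "remediation" keys.
    """
    total = len(findings)
    compliant = sum(1 for f in findings if f["status"] == "compliant")
    report = {
        "compliance_score": compliant / total if total else 1.0,
        "overall_status": "compliant" if compliant == total else "non_compliant",
        "remediation_plan": [
            step
            for f in findings
            if f["status"] != "compliant"
            for step in f["remediation"]
        ],
        "findings": findings,
    }
    return json.dumps(report)  # JSON path; the RDF export walks the graph instead
```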
See also¶
- Source: `src/fcc/compliance/models.py:30`, `src/fcc/compliance/requirements.py`
- Source: `src/fcc/compliance/classifier.py`, `src/fcc/compliance/auditor.py`, `src/fcc/compliance/evidence_graph.py`, `src/fcc/compliance/report.py`
- Related class diagram: `../class-diagrams/compliance-pipeline.md`
- For audience tier: `docs/for-professionals/governance-compliance.md`