Chapter 20: Compliance Automation¶
As AI systems come under increasing regulatory scrutiny, the ability to demonstrate compliance programmatically becomes a necessity rather than a luxury. This chapter covers the FCC compliance automation module, which maps the framework's governance artifacts to the EU AI Act (Regulation 2024/1689) and the NIST AI Risk Management Framework, then automates auditing, evidence collection, remediation tracking, and dual-regulation reporting.
The flowchart below shows how the RequirementRegistry, PersonaRegistry, and ConstitutionRegistry feed the ComplianceAuditor, which produces findings, evidence items, remediation actions, a report, and event-bus notifications.
flowchart TD
REG[(RequirementRegistry<br/>256+ EU AI Act<br/>29 NIST AI RMF)] --> AUD[ComplianceAuditor]
PR[PersonaRegistry] --> CLS[AIActClassifier]
CR[ConstitutionRegistry] --> CLS
CLS -->|Risk Category| AUD
PR --> AUD
AUD --> FIND[AuditFindings]
FIND --> EV[EvidenceItems]
FIND --> REM[RemediationActions]
FIND --> RPT[ComplianceReport]
EV --> EG[Evidence Graph]
EG -->|Turtle / JSON-LD| EXT[External Compliance Tools]
AUD --> EB[EventBus]
EB --> |audit.started| SUB[Subscribers]
EB --> |finding.raised| SUB
EB --> |remediation.required| SUB
EB --> |audit.completed| SUB
style AUD fill:#2196F3,color:#fff
style EG fill:#4CAF50,color:#fff
style RPT fill:#9C27B0,color:#fff
Separating evidence graphs from the report itself is what lets auditors export just the provenance data without the narrative findings.
EU AI Act Overview¶
The European Union's Artificial Intelligence Act (Regulation 2024/1689) establishes a risk-based regulatory framework for AI systems. It entered into force in August 2024, with provisions phasing in through 2027.
Risk Classification¶
The Act classifies AI systems into four risk tiers:
| Tier | Description | Requirements | FCC Mapping |
|---|---|---|---|
| Unacceptable | Prohibited systems (social scoring, real-time biometric surveillance) | Banned outright | N/A for agent frameworks |
| High | Systems in critical domains (employment, credit, law enforcement, education) | Full conformity assessment, technical documentation, human oversight | Personas with 3+ hard-stop rules or governance/responsible_ai categories |
| Limited | Systems with transparency obligations (chatbots, emotion recognition) | Transparency and disclosure requirements | Personas with decision-making roles or mandatory constitution patterns |
| Minimal | All other AI systems | Voluntary codes of conduct | Default for most FCC personas |
Key Articles¶
The FCC compliance module maps the following articles:
| Article | Title | FCC Implementation |
|---|---|---|
| Art. 9 | Risk Management System | ComplianceAuditor + AIActClassifier |
| Art. 10 | Data and Data Governance | Datasheet generation |
| Art. 11 | Technical Documentation | ModelCard generation |
| Art. 12 | Record-Keeping | Event bus audit trail |
| Art. 13 | Transparency | Workflow graph + trace explainability |
| Art. 14 | Human Oversight | CollaborationEngine + approval gates |
| Art. 15 | Accuracy, Robustness, Cybersecurity | CLEAR+ Reliability + Assurance metrics |
NIST AI RMF Overview¶
The NIST AI Risk Management Framework (AI RMF 1.0, January 2023) provides a voluntary, non-regulatory companion framework structured around four core functions:
| Function | Description | FCC Mapping |
|---|---|---|
| GOVERN | Establish AI governance structures | Constitution Registry, Quality Gates |
| MAP | Identify and classify AI risks | AIActClassifier, Risk categorisation |
| MEASURE | Quantify risk with metrics | CLEAR+ benchmarks |
| MANAGE | Prioritise and respond to risks | Remediation workflows |
NIST-EU Crosswalk¶
Each ComplianceRequirement in FCC carries a nist_crosswalk field
that maps EU AI Act articles to NIST AI RMF subcategories:
| EU AI Act Article | NIST AI RMF Subcategory |
|---|---|
| Art. 9 (Risk Management) | GOVERN 1.1, MAP 1.1, MANAGE 1.1 |
| Art. 10 (Data Governance) | MAP 2.1, MAP 2.2 |
| Art. 11 (Technical Docs) | GOVERN 1.2, MAP 3.1 |
| Art. 12 (Record-Keeping) | GOVERN 1.3, MEASURE 2.1 |
| Art. 13 (Transparency) | GOVERN 1.4, MAP 3.2 |
| Art. 14 (Human Oversight) | GOVERN 1.5, MANAGE 2.1 |
| Art. 15 (Accuracy/Robustness) | MEASURE 1.1, MEASURE 3.1 |
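Because nist_crosswalk is stored as a comma-separated string, the mapping can be inverted to find which articles share a NIST subcategory. A minimal sketch, where the parse_crosswalk helper and the inline excerpt of the table are illustrative rather than part of the FCC API:

```python
def parse_crosswalk(crosswalk: str) -> list[str]:
    """Split a comma-separated nist_crosswalk string into subcategory IDs."""
    return [part.strip() for part in crosswalk.split(",") if part.strip()]

# Invert article -> subcategories into subcategory -> articles,
# using a small excerpt of the crosswalk table above.
crosswalk_table = {
    "Art. 9": "GOVERN 1.1, MAP 1.1, MANAGE 1.1",
    "Art. 11": "GOVERN 1.2, MAP 3.1",
    "Art. 12": "GOVERN 1.3, MEASURE 2.1",
}
by_subcategory: dict[str, list[str]] = {}
for article, crosswalk in crosswalk_table.items():
    for sub in parse_crosswalk(crosswalk):
        by_subcategory.setdefault(sub, []).append(article)
```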
Risk Classification¶
AIActClassifier¶
The AIActClassifier assigns EU AI Act risk categories to FCC personas
and workflows:
from fcc.compliance.classifier import AIActClassifier
from fcc.governance.constitution_registry import ConstitutionRegistry
from fcc.personas.registry import PersonaRegistry
registry = PersonaRegistry.from_data_dir()
const_reg = ConstitutionRegistry.from_registry(registry)
classifier = AIActClassifier(constitution_registry=const_reg)
# Classify a single persona
spec = registry.get("DGS") # Data Governance Steward
risk = classifier.classify_persona(spec)
print(f"{spec.id}: {risk.value}") # "high"
# Classify the full system
all_specs = [registry.get(pid) for pid in registry.ids]
system_risk = classifier.classify_system(all_specs)
print(f"System risk: {system_risk.value}") # "high" (highest individual risk)
Classification Logic¶
The classifier applies rules in priority order:
- Hard-stop rules: 3+ hard-stop constitution rules imply HIGH risk
- Category membership: governance, responsible_ai, or jv_governance category implies HIGH risk
- Role keywords: decision-making keywords (approve, reject, evaluate, classify, judge, recommend) in the role description imply LIMITED risk
- Mandatory patterns: any mandatory constitution patterns imply LIMITED risk
- Default: all other personas are classified as MINIMAL risk
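The priority ordering can be sketched as a first-match-wins rule chain. This is an illustrative stand-in only: the real AIActClassifier operates on PersonaSpec objects, and everything here apart from the documented category names and keywords is invented for the sketch:

```python
DECISION_KEYWORDS = {"approve", "reject", "evaluate", "classify", "judge", "recommend"}
HIGH_RISK_CATEGORIES = {"governance", "responsible_ai", "jv_governance"}

def classify_sketch(hard_stops: int, categories: set[str],
                    role: str, mandatory_patterns: tuple) -> str:
    """Apply the documented rules in priority order; first match wins."""
    if hard_stops >= 3:
        return "high"
    if categories & HIGH_RISK_CATEGORIES:
        return "high"
    if any(kw in role.lower() for kw in DECISION_KEYWORDS):
        return "limited"
    if mandatory_patterns:
        return "limited"
    return "minimal"
```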
Workflow Classification¶
Workflows are classified at the LIMITED level by default because they aggregate multiple persona outputs. To get a more precise classification, classify each persona individually and take the maximum:
risk = classifier.classify_workflow(workflow_graph)
print(f"Workflow risk: {risk.value}") # "limited"
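The per-persona maximum can be sketched as follows, assuming risk values order as minimal < limited < high < unacceptable (classify_persona returns the real enum; plain strings are used here for brevity):

```python
TIER_ORDER = ("minimal", "limited", "high", "unacceptable")

def max_risk(persona_risks: list[str]) -> str:
    """The aggregate risk is the highest individual persona risk."""
    return max(persona_risks, key=TIER_ORDER.index)
```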
Running Compliance Audits¶
ComplianceRequirement¶
Each compliance requirement is a frozen dataclass with regulation traceability:
from fcc.compliance.models import ComplianceRequirement, RiskCategory
req = ComplianceRequirement(
id="EU-AI-ACT-ART9-1",
regulation="EU_AI_ACT",
article="Art. 9",
sub_article="1",
title="Risk Management System",
description="Establish a risk management system for high-risk AI systems.",
risk_category=RiskCategory.HIGH,
checks=("constitution_defined", "riscear_complete"),
nist_crosswalk="GOVERN 1.1, MAP 1.1",
annex_refs=("Annex III",),
recital_refs=("Recital 47",),
)
RequirementRegistry¶
Requirements are loaded from YAML data files:
from fcc.compliance.requirements import RequirementRegistry
req_registry = RequirementRegistry.from_package_data()
all_reqs = req_registry.all_requirements()
print(f"Total requirements: {len(all_reqs)}")
# Filter by regulation
eu_reqs = [r for r in all_reqs if r.regulation == "EU_AI_ACT"]
nist_reqs = [r for r in all_reqs if r.regulation == "NIST_AI_RMF"]
print(f"EU AI Act: {len(eu_reqs)}, NIST: {len(nist_reqs)}")
ComplianceAuditor¶
The ComplianceAuditor checks personas and workflows against
requirements:
from fcc.compliance.auditor import ComplianceAuditor
auditor = ComplianceAuditor(
requirement_registry=req_registry,
classifier=classifier,
constitution_registry=const_reg,
)
# Audit a single persona
findings = auditor.audit_persona(registry.get("RC"))
for f in findings:
print(f" [{f.status.value}] {f.requirement_id}")
for ev in f.evidence:
print(f" Evidence: {ev.source} (confidence: {ev.confidence})")
for rem in f.remediation:
print(f" Remediation: [{rem.priority}] {rem.description}")
# Full audit of all personas
report = auditor.full_audit(registry)
print(f"Total checks: {report.total_checks}")
print(f"Passed: {report.passed}")
print(f"Failed: {report.failed}")
print(f"Warnings: {report.warnings}")
Audit Finding Model¶
Each finding carries evidence and remediation actions:
| Field | Type | Description |
|---|---|---|
| requirement_id | str | Which requirement was checked |
| status | FindingStatus | PASS, FAIL, WARNING, or NOT_APPLICABLE |
| evidence | tuple[EvidenceItem, ...] | Supporting evidence items |
| remediation | tuple[RemediationAction, ...] | Recommended fixes |
| notes | str | Additional context |
Evidence Items¶
Evidence items record what was checked and how confident the determination is:
from fcc.compliance.models import EvidenceItem
evidence = EvidenceItem(
source="constitution_registry",
content="Persona DGS has 4 hard-stop rules.",
confidence=1.0,
)
Evidence Graphs¶
The compliance module can build a knowledge graph of audit evidence
using the existing KnowledgeGraph infrastructure:
from fcc.compliance.evidence_graph import build_compliance_evidence_graph
graph = build_compliance_evidence_graph(
persona_registry=registry,
findings=list(report.findings),
constitution_registry=const_reg,
)
print(f"Evidence graph: {graph.node_count} nodes, {graph.edge_count} edges")
The evidence graph uses existing node and edge types:
| Entity | Node Type | Description |
|---|---|---|
| Requirement | CONCEPT | A compliance requirement |
| Evidence | DELIVERABLE | A piece of supporting evidence |
| Persona | PERSONA | The persona being audited |
| Constitution | CONSTITUTION | The persona's constitution |

| Relationship | Edge Type | Description |
|---|---|---|
| Evidence supports requirement | MAPS_TO | Evidence links to requirement |
| Requirement governs persona | GOVERNS | Requirement applies to persona |
| Constitution governs persona | GOVERNS | Constitution constrains persona |
Evidence graphs can be exported to Turtle, JSON-LD, or SKOS using the existing knowledge graph serializers, enabling integration with external compliance tools and triple stores.
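To show the shape of such an export, the sketch below hand-rolls a few triples into minimal Turtle. The fcc: namespace, node names, and the to_turtle helper are all invented for illustration; in practice the framework's own knowledge graph serializers produce the output:

```python
def to_turtle(triples: list[tuple[str, str, str]]) -> str:
    """Serialise (subject, predicate, object) triples as minimal Turtle."""
    header = "@prefix fcc: <https://example.org/fcc#> ."
    body = [f"fcc:{s} fcc:{p} fcc:{o} ." for s, p, o in triples]
    return "\n".join([header, *body])

doc = to_turtle([
    ("Evidence_1", "mapsTo", "Req_EU_AI_ACT_ART9_1"),
    ("Req_EU_AI_ACT_ART9_1", "governs", "Persona_DGS"),
])
```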
Remediation Workflows¶
RemediationAction¶
Each remediation action specifies what needs to be done, its priority, and an optional deadline:
from fcc.compliance.models import RemediationAction
action = RemediationAction(
action_id="REM-ART9-DGS-const",
description="Define constitution for persona DGS.",
priority="high",
deadline="2026-04-15",
)
Remediation Priorities¶
| Priority | Meaning | Typical Response |
|---|---|---|
| high | Hard-stop rule violation or HIGH risk finding | Fix before next release |
| medium | Incomplete specification or LIMITED risk finding | Fix within current sprint |
| low | Best-practice recommendation | Backlog item |
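When presenting findings as a worklist, the priority column maps naturally onto a sort key. A small sketch in which plain dicts stand in for RemediationAction objects:

```python
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def triage(actions: list[dict]) -> list[dict]:
    """Order remediation actions high -> medium -> low."""
    return sorted(actions, key=lambda a: PRIORITY_RANK[a["priority"]])

worklist = triage([
    {"action_id": "REM-1", "priority": "low"},
    {"action_id": "REM-2", "priority": "high"},
    {"action_id": "REM-3", "priority": "medium"},
])
```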
Tracking Remediation Progress¶
Combine remediation actions with the event bus to track progress:
from fcc.messaging.bus import EventBus
from fcc.messaging.events import Event, EventType
bus = EventBus()
# Subscribe to remediation events
remediation_log = []
bus.subscribe(
EventType("compliance.remediation.required"),
lambda e: remediation_log.append(e.payload),
)
# Run the audit pipeline -- events are emitted automatically
from fcc.compliance.pipeline import CompliancePipeline
pipeline = CompliancePipeline(
auditor=auditor,
event_bus=bus,
persona_registry=registry,
)
result = pipeline.run_full_pipeline()
print(f"Remediations required: {len(remediation_log)}")
for item in remediation_log:
print(f" [{item['priority']}] {item['action_id']}: {item.get('requirement_id')}")
Dual-Regulation Audits¶
Running Both EU AI Act and NIST AI RMF¶
The ComplianceAuditor supports dual-regulation audits that check
personas against both frameworks simultaneously:
eu_report, nist_report = auditor.dual_regulation_audit(registry)
print(f"EU AI Act: {eu_report.passed}/{eu_report.total_checks} passed")
print(f"NIST AI RMF: {nist_report.passed}/{nist_report.total_checks} passed")
print(f"Risk summary (EU): {eu_report.risk_summary}")
print(f"Risk summary (NIST): {nist_report.risk_summary}")
CompliancePipeline¶
The CompliancePipeline orchestrates the full audit workflow with event
emission:
from fcc.compliance.pipeline import CompliancePipeline
pipeline = CompliancePipeline(
auditor=auditor,
event_bus=bus,
persona_registry=registry,
)
# Single regulation
result = pipeline.run_full_pipeline("EU_AI_ACT")
print(f"Duration: {result.duration_ms:.0f} ms")
print(f"Findings raised: {result.findings_raised}")
print(f"Remediations: {result.remediations_required}")
print(f"Evidence graph nodes: {result.evidence_graph_nodes}")
# Dual regulation
eu_result, nist_result = pipeline.run_dual_pipeline()
Pipeline Event Flow¶
The pipeline emits four event types:
| Event | When | Payload |
|---|---|---|
| compliance.audit.started | Pipeline begins | regulation, persona_count |
| compliance.finding.raised | FAIL or WARNING finding | requirement_id, status |
| compliance.remediation.required | Remediation needed | requirement_id, action_id, priority |
| compliance.audit.completed | Pipeline finishes | total_checks, passed, failed, warnings, duration_ms |
Subscribe to these events for real-time dashboards, Slack notifications, or audit log aggregation.
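A minimal dispatcher over those four event types might look like the sketch below. The handlers simply append to a log; only the event-type strings and payload keys come from the table above, everything else is illustrative:

```python
log: list[str] = []

HANDLERS = {
    "compliance.audit.started":
        lambda p: log.append(f"started {p['regulation']}"),
    "compliance.finding.raised":
        lambda p: log.append(f"{p['requirement_id']}: {p['status']}"),
    "compliance.remediation.required":
        lambda p: log.append(f"fix {p['action_id']} [{p['priority']}]"),
    "compliance.audit.completed":
        lambda p: log.append(f"{p['passed']}/{p['total_checks']} passed"),
}

def dispatch(event_type: str, payload: dict) -> None:
    """Route a pipeline event to its handler; unknown types are ignored."""
    handler = HANDLERS.get(event_type)
    if handler is not None:
        handler(payload)
```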
Compliance Report¶
The ComplianceReport frozen dataclass summarises audit results:
| Field | Type | Description |
|---|---|---|
| regulation | str | Regulation name |
| total_checks | int | Total findings |
| passed | int | PASS count |
| failed | int | FAIL count |
| warnings | int | WARNING count |
| findings | tuple[AuditFinding, ...] | All findings |
| risk_summary | dict[str, int] | Count per risk category |
Reports can be serialised to JSON for storage or dashboard consumption.
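Because the report is a frozen dataclass, dataclasses.asdict plus json.dumps is enough. A sketch using a slimmed stand-in for ComplianceReport (the field names match the table above; the values are invented):

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ReportSummary:
    """Stand-in carrying the scalar fields of ComplianceReport."""
    regulation: str
    total_checks: int
    passed: int
    failed: int
    warnings: int

summary = ReportSummary("EU_AI_ACT", total_checks=120,
                        passed=101, failed=7, warnings=12)
payload = json.dumps(asdict(summary), indent=2)
```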
Practical Exercises¶
Exercise 1: Classify All Personas¶
Use the AIActClassifier to classify all 102 personas. Group the
results by risk category and verify that governance personas are
correctly classified as HIGH.
Exercise 2: Run a Full Audit¶
Use the ComplianceAuditor to audit all personas against EU AI Act
requirements. Identify which personas generate WARNING findings and
what remediations are recommended.
Exercise 3: Build an Evidence Graph¶
Run a full audit, then use build_compliance_evidence_graph to
construct a knowledge graph. Export it to Turtle format and verify
that GOVERNS edges correctly link requirements to personas.
Exercise 4: Dual-Regulation Pipeline¶
Run the CompliancePipeline.run_dual_pipeline() and compare the EU AI
Act and NIST AI RMF reports. Identify requirements that overlap between
the two frameworks using the nist_crosswalk field.
Summary¶
| Component | Purpose | Module |
|---|---|---|
| RiskCategory | EU AI Act risk tiers | fcc.compliance.models |
| ComplianceRequirement | Regulation requirements | fcc.compliance.models |
| AIActClassifier | Risk classification | fcc.compliance.classifier |
| ComplianceAuditor | Audit engine | fcc.compliance.auditor |
| ComplianceReport | Audit summary | fcc.compliance.report |
| EvidenceItem | Audit evidence | fcc.compliance.models |
| RemediationAction | Fix recommendations | fcc.compliance.models |
| CompliancePipeline | Orchestrated audit + events | fcc.compliance.pipeline |
| build_compliance_evidence_graph | Evidence knowledge graph | fcc.compliance.evidence_graph |
Compliance automation closes the loop between FCC governance (constitutions, quality gates, tags) and external regulatory frameworks. By automating classification, auditing, evidence collection, and remediation tracking, the framework enables teams to maintain continuous compliance rather than relying on periodic manual reviews.
Next Steps
- Read Chapter 19 for the CLEAR+ evaluation methodology
- Explore the EU AI Act Compliance Tutorial for hands-on practice
- See the AI Compliance Guide for professional deployment guidance