EU AI Act Compliance

Duration: 60 minutes · Level: Advanced · Module: fcc.compliance

This tutorial walks you through using the FCC compliance module to classify personas by EU AI Act risk tier, run compliance audits against regulatory requirements, build evidence graphs, and generate remediation plans. It also covers dual-regulation auditing with the NIST AI RMF.

Prerequisites

  • Completed the Model Card Generation tutorial
  • Familiarity with ConstitutionRegistry and QualityGateRegistry
  • Understanding of EU AI Act risk categories

Background: EU AI Act Risk Tiers

The EU AI Act (Regulation 2024/1689) classifies AI systems into four risk tiers. FCC maps persona characteristics to these tiers as follows:

| Tier         | FCC Criteria                                                          |
| ------------ | --------------------------------------------------------------------- |
| Unacceptable | Not applicable to agent frameworks                                    |
| High         | 3+ hard-stop constitution rules, or governance/responsible_ai category |
| Limited      | Decision-making keywords in role, or mandatory constitution patterns  |
| Minimal      | Default for all other personas                                        |
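The criteria above can be sketched as a standalone function. This is an illustration of the mapping only, not the actual AIActClassifier implementation; the parameter names and the keyword list are assumptions:

```python
# Hypothetical keyword list - the real classifier's keywords are not shown here.
DECISION_KEYWORDS = {"decide", "approve", "assess", "evaluate"}

def classify_tier(category: str, role: str,
                  hard_stop_rules: int, has_mandatory_patterns: bool) -> str:
    """Map persona characteristics to an EU AI Act risk tier (sketch)."""
    # High: 3+ hard-stop constitution rules, or a governance-related category.
    if hard_stop_rules >= 3 or category in {"governance", "responsible_ai"}:
        return "high"
    # Limited: decision-making keywords in the role, or mandatory patterns.
    if any(kw in role.lower() for kw in DECISION_KEYWORDS) or has_mandatory_patterns:
        return "limited"
    # Minimal: the default for everything else.
    return "minimal"
```

The ordering matters: high-risk criteria are checked first, so a governance persona with decision-making keywords still lands in the high tier.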

Step 1: Classify Personas

from fcc.compliance.classifier import AIActClassifier
from fcc.governance.constitution_registry import ConstitutionRegistry
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_data_dir()
const_reg = ConstitutionRegistry.from_registry(registry)
classifier = AIActClassifier(constitution_registry=const_reg)

# Classify each persona
risk_counts = {"unacceptable": 0, "high": 0, "limited": 0, "minimal": 0}
for pid in registry.ids:
    spec = registry.get(pid)
    risk = classifier.classify_persona(spec)
    risk_counts[risk.value] += 1
    if risk.value == "high":
        print(f"  HIGH: {pid} ({spec.name}) - category: {spec.category}")

print(f"\nRisk distribution: {risk_counts}")

Step 2: Load Compliance Requirements

from fcc.compliance.requirements import RequirementRegistry

req_registry = RequirementRegistry.from_package_data()
all_reqs = req_registry.all_requirements()

print(f"Total requirements: {len(all_reqs)}")
for req in all_reqs[:5]:
    print(f"  {req.id}: {req.title} [{req.risk_category.value}]")
    if req.nist_crosswalk:
        print(f"    NIST crosswalk: {req.nist_crosswalk}")

Step 3: Run a Persona Audit

from fcc.compliance.auditor import ComplianceAuditor

auditor = ComplianceAuditor(
    requirement_registry=req_registry,
    classifier=classifier,
    constitution_registry=const_reg,
)

# Audit a single persona
spec = registry.get("DGS")  # Data Governance Steward
findings = auditor.audit_persona(spec)

print(f"Findings for {spec.id}:")
for f in findings:
    print(f"  [{f.status.value}] {f.requirement_id}")
    for ev in f.evidence:
        print(f"    Evidence: {ev.source} (confidence: {ev.confidence:.0%})")
    for rem in f.remediation:
        print(f"    Remediation: [{rem.priority}] {rem.description}")

Step 4: Full Registry Audit

report = auditor.full_audit(registry)

print(f"Total checks: {report.total_checks}")
print(f"Passed: {report.passed}")
print(f"Failed: {report.failed}")
print(f"Warnings: {report.warnings}")
print(f"Risk summary: {report.risk_summary}")

Step 5: Build an Evidence Graph

from fcc.compliance.evidence_graph import build_compliance_evidence_graph

graph = build_compliance_evidence_graph(
    persona_registry=registry,
    findings=list(report.findings),
    constitution_registry=const_reg,
)

print(f"Evidence graph: {graph.node_count} nodes, {graph.edge_count} edges")

# Export to Turtle format
from fcc.knowledge.serializers import serialize_turtle
ttl = serialize_turtle(graph)
with open("compliance_evidence.ttl", "w") as f:
    f.write(ttl)

The evidence graph encodes:

  • CONCEPT nodes for requirements
  • DELIVERABLE nodes for evidence items
  • PERSONA nodes for audited personas
  • CONSTITUTION nodes for persona constitutions
  • GOVERNS edges from requirements/constitutions to personas
  • MAPS_TO edges from evidence to requirements
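To make that taxonomy concrete, here is a toy in-memory model of the node and edge types. It is purely illustrative; the real graph classes in fcc.knowledge will differ:

```python
from dataclasses import dataclass

# Toy model of the evidence-graph taxonomy; not the real fcc graph classes.
@dataclass(frozen=True)
class Node:
    id: str
    type: str   # CONCEPT | DELIVERABLE | PERSONA | CONSTITUTION

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    type: str   # GOVERNS | MAPS_TO

nodes = [
    Node("req:Art9", "CONCEPT"),          # a requirement
    Node("ev:audit-log", "DELIVERABLE"),  # an evidence item
    Node("persona:DGS", "PERSONA"),       # an audited persona
    Node("const:DGS", "CONSTITUTION"),    # that persona's constitution
]
edges = [
    Edge("req:Art9", "persona:DGS", "GOVERNS"),   # requirement governs persona
    Edge("const:DGS", "persona:DGS", "GOVERNS"),  # constitution governs persona
    Edge("ev:audit-log", "req:Art9", "MAPS_TO"),  # evidence maps to requirement
]

# Which personas are governed by at least one requirement or constitution?
governed = {e.dst for e in edges if e.type == "GOVERNS"}
```

Traversing MAPS_TO edges backwards from a requirement answers "what evidence supports this finding?", which is what the Turtle export makes queryable with standard RDF tooling.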

Step 6: Dual-Regulation Audit

Run EU AI Act and NIST AI RMF audits in a single call:

eu_report, nist_report = auditor.dual_regulation_audit(registry)

print(f"EU AI Act: {eu_report.passed}/{eu_report.total_checks} passed")
print(f"NIST AI RMF: {nist_report.passed}/{nist_report.total_checks} passed")

Step 7: Run the Compliance Pipeline

The CompliancePipeline orchestrates auditing with event emission:

from fcc.compliance.pipeline import CompliancePipeline
from fcc.messaging.bus import EventBus

bus = EventBus()
pipeline = CompliancePipeline(
    auditor=auditor,
    event_bus=bus,
    persona_registry=registry,
)

# Subscribe to events
bus.subscribe_all(lambda e: print(f"  EVENT: {e.event_type.value}"))

result = pipeline.run_full_pipeline("EU_AI_ACT")
print(f"\nDuration: {result.duration_ms:.0f} ms")
print(f"Findings raised: {result.findings_raised}")
print(f"Remediations required: {result.remediations_required}")
print(f"Evidence graph nodes: {result.evidence_graph_nodes}")

Step 8: Review Remediation Actions

Extract and prioritise remediation actions:

high_priority = []
for finding in report.findings:
    for rem in finding.remediation:
        if rem.priority == "high":
            high_priority.append((finding.requirement_id, rem))

print(f"High-priority remediations: {len(high_priority)}")
for req_id, rem in high_priority[:10]:
    print(f"  {req_id}: {rem.description}")

Understanding the NIST Crosswalk

Each EU AI Act requirement carries a nist_crosswalk field that maps to NIST AI RMF subcategories:

| EU AI Act | NIST AI RMF             | FCC Feature           |
| --------- | ----------------------- | --------------------- |
| Art. 9    | GOVERN 1.1, MAP 1.1     | Constitution Registry |
| Art. 11   | GOVERN 1.2, MAP 3.1     | Model Card generation |
| Art. 12   | GOVERN 1.3, MEASURE 2.1 | Event bus audit trail |
| Art. 13   | GOVERN 1.4, MAP 3.2     | Workflow transparency |
| Art. 14   | GOVERN 1.5, MANAGE 2.1  | CollaborationEngine   |
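The crosswalk table can be expressed as a plain mapping for quick lookups. This sketch restates the table above; in FCC itself the data lives on each requirement's nist_crosswalk field, and the helper function here is hypothetical:

```python
# The crosswalk table as a plain dict (illustrative restatement).
EU_TO_NIST = {
    "Art. 9":  ["GOVERN 1.1", "MAP 1.1"],
    "Art. 11": ["GOVERN 1.2", "MAP 3.1"],
    "Art. 12": ["GOVERN 1.3", "MEASURE 2.1"],
    "Art. 13": ["GOVERN 1.4", "MAP 3.2"],
    "Art. 14": ["GOVERN 1.5", "MANAGE 2.1"],
}

def nist_functions(article: str) -> set[str]:
    """Return the NIST AI RMF functions (GOVERN/MAP/MEASURE/MANAGE)
    touched by an EU AI Act article, per the crosswalk above."""
    return {subcategory.split()[0] for subcategory in EU_TO_NIST.get(article, [])}
```

For example, `nist_functions("Art. 12")` shows that record-keeping obligations touch both the GOVERN and MEASURE functions, which is why a dual-regulation audit can reuse evidence across frameworks.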

Summary

In this tutorial you learned how to:

  • Classify personas into EU AI Act risk tiers
  • Load and inspect compliance requirements
  • Run single-persona and full-registry audits
  • Build knowledge-graph-based evidence graphs
  • Execute dual-regulation audits (EU AI Act + NIST AI RMF)
  • Use the compliance pipeline with event bus integration
  • Review and prioritise remediation actions

Next Steps