AI Compliance¶
How to use the FCC compliance module for regulatory alignment, risk classification, automated auditing, and evidence-based reporting in professional and enterprise environments.
Regulatory Landscape¶
FCC supports compliance automation for two major regulatory frameworks:
| Framework | Full Name | Status |
|---|---|---|
| EU AI Act | Regulation (EU) 2024/1689 | In force (Aug 2024), phased enforcement through 2027 |
| NIST AI RMF | AI Risk Management Framework 1.0 | Voluntary framework (Jan 2023) |
The compliance module maps FCC governance artifacts -- constitutions, quality gates, and persona specifications -- to specific articles and subcategories in both frameworks.
Risk Classification¶
EU AI Act Risk Tiers¶
The `AIActClassifier` assigns risk tiers based on persona characteristics:
```python
from fcc.compliance.classifier import AIActClassifier
from fcc.governance.constitution_registry import ConstitutionRegistry
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_data_dir()
const_reg = ConstitutionRegistry.from_registry(registry)
classifier = AIActClassifier(constitution_registry=const_reg)

for pid in registry.ids:
    spec = registry.get(pid)
    risk = classifier.classify_persona(spec)
    if risk.value in ("high", "limited"):
        print(f" {pid}: {risk.value} ({spec.category})")
```
Classification Criteria¶
| Criterion | Risk Tier | Rationale |
|---|---|---|
| 3+ hard-stop constitution rules | HIGH | Indicates safety-critical constraints |
| `governance`/`responsible_ai`/`jv_governance` category | HIGH | Domain inherently high-risk |
| Decision-making keywords in role | LIMITED | Transparency obligations apply |
| Mandatory constitution patterns | LIMITED | Governance rules indicate regulated domain |
| Default | MINIMAL | Voluntary code of conduct |
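The criteria table can be read top-down, most severe tier first. A minimal standalone sketch of that decision order (illustrative only; the actual `AIActClassifier` internals and keyword lists are not shown here and may differ):

```python
# Illustrative sketch of the criteria table; the real AIActClassifier
# implementation and its keyword lists may differ.
HIGH_RISK_CATEGORIES = {"governance", "responsible_ai", "jv_governance"}
DECISION_KEYWORDS = {"decision", "approve", "evaluate"}  # assumed examples

def classify(category: str, role: str, hard_stops: int, mandatory_rules: int) -> str:
    """Apply the criteria rows in order, most severe tier first."""
    if hard_stops >= 3 or category in HIGH_RISK_CATEGORIES:
        return "high"
    if mandatory_rules > 0 or any(kw in role.lower() for kw in DECISION_KEYWORDS):
        return "limited"
    return "minimal"

print(classify("responsible_ai", "reviewer", hard_stops=0, mandatory_rules=0))  # high
```

Checking the high-risk rows before the limited-risk rows matters: a persona can match several criteria, and the most severe match must win.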
System-Level Classification¶
For enterprise risk assessments, classify the entire system:
```python
all_specs = [registry.get(pid) for pid in registry.ids]
system_risk = classifier.classify_system(all_specs)
print(f"System-level risk: {system_risk.value}")
```
The system-level tier is the highest tier assigned to any individual persona.
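That aggregation rule amounts to a maximum over an ordered set of tiers. A minimal sketch (the tier names follow the classification table above; `classify_system` itself may be implemented differently):

```python
# Tier severity order, lowest to highest, per the classification table.
TIER_ORDER = ["minimal", "limited", "high"]

def system_tier(persona_tiers: list[str]) -> str:
    """System risk is the most severe tier across all personas."""
    return max(persona_tiers, key=TIER_ORDER.index)

print(system_tier(["minimal", "limited", "minimal"]))  # limited
```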
Running Compliance Audits¶
Single-Persona Audit¶
```python
from fcc.compliance.auditor import ComplianceAuditor
from fcc.compliance.requirements import RequirementRegistry

req_registry = RequirementRegistry.from_package_data()
auditor = ComplianceAuditor(
    requirement_registry=req_registry,
    classifier=classifier,
    constitution_registry=const_reg,
)

findings = auditor.audit_persona(registry.get("DGS"))
for f in findings:
    if f.status.value != "not_applicable":
        print(f" [{f.status.value}] {f.requirement_id}: {f.notes}")
```
Full-Registry Audit¶
```python
report = auditor.full_audit(registry)
print(f"Pass rate: {report.passed}/{report.total_checks}")
print(f"Warnings: {report.warnings}")
```
Dual-Regulation Audit¶
Audit against both the EU AI Act and the NIST AI RMF in a single pass:
```python
eu_report, nist_report = auditor.dual_regulation_audit(registry)
print(f"EU: {eu_report.passed}/{eu_report.total_checks}")
print(f"NIST: {nist_report.passed}/{nist_report.total_checks}")
```
Evidence Graphs¶
Build a knowledge graph of audit evidence for external compliance tools:
```python
from fcc.compliance.evidence_graph import build_compliance_evidence_graph

graph = build_compliance_evidence_graph(
    persona_registry=registry,
    findings=list(report.findings),
    constitution_registry=const_reg,
)
print(f"Evidence graph: {graph.node_count} nodes")
```
Export to Turtle, JSON-LD, or SKOS for integration with triple stores and enterprise compliance platforms.
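As an illustration of what a JSON-LD export of such a graph can look like, here is a hand-rolled sketch using only the standard library. The node types, property names, and the `example.org` vocabulary URL are assumptions for illustration; the module's own exporters will use their own schema:

```python
import json

# Hand-rolled JSON-LD-style document; the evidence-graph module's own
# exporters, node types, and vocabulary will differ.
nodes = [
    {"@id": "fcc:persona/DGS", "@type": "fcc:Persona"},
    {"@id": "fcc:finding/EU-ART9-001", "@type": "fcc:Finding",
     "fcc:appliesTo": {"@id": "fcc:persona/DGS"}, "fcc:status": "pass"},
]
doc = {"@context": {"fcc": "https://example.org/fcc#"}, "@graph": nodes}
print(json.dumps(doc, indent=2))
```

Because JSON-LD is plain JSON with a `@context`, a triple store can expand each `fcc:`-prefixed key into a full IRI and ingest the findings as triples.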
Compliance Pipeline¶
The `CompliancePipeline` provides an event-driven orchestration layer:
```python
from fcc.compliance.pipeline import CompliancePipeline
from fcc.messaging.bus import EventBus

pipeline = CompliancePipeline(
    auditor=auditor,
    event_bus=EventBus(),
    persona_registry=registry,
)

result = pipeline.run_full_pipeline("EU_AI_ACT")
print(f"Duration: {result.duration_ms:.0f} ms")
print(f"Evidence nodes: {result.evidence_graph_nodes}")
```
Events emitted: `compliance.audit.started`, `compliance.finding.raised`, `compliance.remediation.required`, `compliance.audit.completed`.
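Downstream tooling consumes these topics via the event bus. Since the `EventBus` subscription API is not shown in this guide, the sketch below uses a minimal stand-in bus to illustrate the pattern:

```python
from collections import defaultdict

# Minimal stand-in bus to illustrate consuming the pipeline's event topics;
# the real EventBus API may differ.
class MiniBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._handlers[topic]:
            handler(payload)

bus = MiniBus()
raised = []
bus.subscribe("compliance.finding.raised", raised.append)
bus.publish("compliance.finding.raised", {"requirement_id": "EU-ART9-001"})
print(len(raised))  # 1
```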
Remediation Tracking¶
High-priority remediations are surfaced automatically:
| Priority | Meaning | SLA |
|---|---|---|
| high | Hard-stop violation or HIGH-risk finding | Before next release |
| medium | Incomplete spec or LIMITED-risk finding | Within current sprint |
| low | Best-practice recommendation | Backlog |
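A remediation queue ordered by these priorities can be sketched in a few lines (the finding shape here is illustrative, not the auditor's actual finding type):

```python
# Order open remediations by the priority table above; the finding dicts
# are illustrative stand-ins for the auditor's finding objects.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

findings = [
    {"id": "R-3", "priority": "low"},
    {"id": "R-1", "priority": "high"},
    {"id": "R-2", "priority": "medium"},
]
queue = sorted(findings, key=lambda f: PRIORITY_RANK[f["priority"]])
print([f["id"] for f in queue])  # ['R-1', 'R-2', 'R-3']
```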
Enterprise Integration Patterns¶
CI/CD Pipeline¶
```yaml
# .github/workflows/compliance.yml
- name: Run compliance audit
  run: fcc compliance audit --regulation eu-ai-act --output audit.json
- name: Check for failures
  run: fcc compliance check --report audit.json --strict
```
Audit Trail Archiving¶
Use the event bus + session recorder to create immutable audit trails suitable for SOC 2 and ISO 27001 evidence packages.
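One common way to make such a trail tamper-evident is hash chaining: each record embeds the hash of the previous record, so altering any entry invalidates every later hash. A self-contained sketch of the idea (not the session recorder's actual storage format):

```python
import hashlib
import json

def append(trail, event):
    """Append an event, chaining it to the previous record's hash."""
    prev = trail[-1]["hash"] if trail else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    trail.append({"event": event, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(trail):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for rec in trail:
        body = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

trail = []
append(trail, "compliance.audit.started")
append(trail, "compliance.audit.completed")
print(verify(trail))  # True
```

Auditors can re-run the verification independently, which is the property SOC 2 and ISO 27001 evidence reviews look for.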
Model Card Distribution¶
Generate model cards as part of each release and publish them alongside API documentation. This satisfies EU AI Act Article 11 (Technical Documentation) requirements.
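The shape of a minimal model card payload might look like the following; the field names and the version string here are illustrative assumptions, not the module's actual schema:

```python
import json

# Illustrative model card payload for a release artifact; field names and
# values are assumptions, not the generator's actual schema.
card = {
    "name": "DGS",
    "version": "1.4.0",  # assumed release version
    "risk_tier": "high",
    "intended_use": "Data-governance review persona",
    "regulation_refs": ["EU AI Act Art. 11"],
}
print(json.dumps(card, sort_keys=True))
```

Publishing the card as JSON next to the API docs keeps the technical documentation versioned with the release it describes.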
NIST AI RMF Crosswalk¶
Each EU AI Act requirement carries a `nist_crosswalk` field:
| EU AI Act Article | NIST Subcategories |
|---|---|
| Art. 9 (Risk Management) | GOVERN 1.1, MAP 1.1, MANAGE 1.1 |
| Art. 10 (Data Governance) | MAP 2.1, MAP 2.2 |
| Art. 11 (Technical Docs) | GOVERN 1.2, MAP 3.1 |
| Art. 12 (Record-Keeping) | GOVERN 1.3, MEASURE 2.1 |
| Art. 14 (Human Oversight) | GOVERN 1.5, MANAGE 2.1 |
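For report generation, the crosswalk table above translates directly into a lookup:

```python
# The crosswalk table above as a lookup, e.g. for dual-regulation reporting.
NIST_CROSSWALK = {
    "Art. 9":  ["GOVERN 1.1", "MAP 1.1", "MANAGE 1.1"],
    "Art. 10": ["MAP 2.1", "MAP 2.2"],
    "Art. 11": ["GOVERN 1.2", "MAP 3.1"],
    "Art. 12": ["GOVERN 1.3", "MEASURE 2.1"],
    "Art. 14": ["GOVERN 1.5", "MANAGE 2.1"],
}
print(NIST_CROSSWALK["Art. 11"])  # ['GOVERN 1.2', 'MAP 3.1']
```

A single EU finding can then be reported once against its article and fanned out to each mapped NIST subcategory.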
Related Resources¶
- Governance and Compliance -- FCC governance layer
- Enterprise Deployment -- Deployment patterns
- Guidebook Ch. 20 -- Full compliance reference
- EU AI Act Compliance Tutorial -- Hands-on walkthrough