# EU AI Act Compliance Dashboard

**Artificial Intelligence Act** (Regulation (EU) 2024/1689)

Jurisdiction: European Union | Effective: phased, 2024-2027 | Domain: AI/Digital
## Overview

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems into four risk tiers (unacceptable, high, limited, and minimal) and imposes obligations proportional to risk: unacceptable practices are prohibited outright, high-risk systems face conformity assessment and oversight duties, and limited-risk systems carry transparency obligations.
## FCC Governance Mapping
| AI Act Requirement | FCC Governance Layer | Implementation |
|---|---|---|
| Risk classification | Quality gates | Automated risk level assessment |
| Transparency | Constitution mandatory patterns | Disclosure requirements |
| Human oversight | Collaboration engine | Human-in-the-loop gates |
| Data governance | Tag registry | Data quality capability tags |
| Technical documentation | Persona workflow | Documentation persona outputs |
## Controls

### AIACT-C001: AI Risk Classification
- Requirement: Classify AI systems according to risk levels
- Automated: Yes
- Evidence: Risk classification register, assessment methodology
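Automated classification (AIACT-C001) can be sketched as a small rules engine. The domain lists below are illustrative keyword heuristics, not the Act's Annex III enumeration; a production assessment would map systems to the Annex III use cases directly.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical domain heuristics; a real register maps Annex III use cases.
HIGH_RISK_DOMAINS = {"biometrics", "critical_infrastructure", "employment",
                     "education", "law_enforcement", "credit_scoring"}
LIMITED_RISK_DOMAINS = {"chatbot", "content_generation"}

def classify(domain: str, manipulative: bool = False) -> RiskLevel:
    """Assign a risk tier to an AI system based on its application domain."""
    if manipulative:  # practices the Act prohibits outright
        return RiskLevel.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskLevel.HIGH
    if domain in LIMITED_RISK_DOMAINS:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

Each classification result would be written to the risk classification register as evidence.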
### AIACT-C002: Conformity Assessment
- Requirement: Conformity assessment for high-risk AI systems
- Automated: No
- Evidence: Conformity reports, third-party assessments
### AIACT-C003: Human Oversight
- Requirement: Human oversight measures for high-risk AI
- Automated: No
- Evidence: Oversight protocols, human-in-the-loop documentation
### AIACT-C004: Transparency Obligations
- Requirement: Inform users when they are interacting with an AI system
- Automated: Yes
- Evidence: Disclosure mechanisms, user notification logs
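A minimal sketch of the automated disclosure mechanism for AIACT-C004. The notice wording and the log record shape are illustrative assumptions; the Act prescribes the obligation to inform users, not this exact text.

```python
def ai_disclosure(message: str, system_name: str = "an AI system") -> str:
    """Prepend a user-facing notice that the user is interacting with AI."""
    notice = f"[Notice: you are interacting with {system_name}.]"
    return f"{notice}\n{message}"

def log_disclosure(log: list, user_id: str, system_name: str) -> None:
    """Append an evidence record to the user-notification log."""
    log.append({"user": user_id, "system": system_name, "disclosed": True})
```

The notification log doubles as the evidence artifact listed above.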
### AIACT-C005: Technical Documentation
- Requirement: Maintain technical documentation throughout lifecycle
- Automated: Yes
- Evidence: System documentation, model cards, data sheets
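A completeness check for AIACT-C005 evidence can be automated over model cards. The required fields below follow common model-card practice, not the Act's exact Annex IV enumeration, and the schema is a hypothetical sketch.

```python
# Hypothetical model-card schema; adapt field names to your docs_system.
REQUIRED_FIELDS = ("name", "version", "intended_use", "training_data",
                   "evaluation_metrics", "limitations", "risk_level")

def documentation_completeness(model_card: dict) -> float:
    """Fraction of required model-card fields present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if model_card.get(f))
    return filled / len(REQUIRED_FIELDS)

def is_compliant(model_card: dict, threshold: float = 0.95) -> bool:
    """Check a card against a completeness threshold (default 95%)."""
    return documentation_completeness(model_card) >= threshold
```

The 95% default mirrors the documentation-completeness target in the Metrics section.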
## Compliance Gates
| Gate | Control Ref | Requirement | Status |
|---|---|---|---|
| AIACT-G001 | AIACT-C001 | All AI systems risk-classified | |
| AIACT-G002 | AIACT-C002 | High-risk systems conformity assessed | |
| AIACT-G003 | AIACT-C003 | Human oversight documented and operational | |
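Gate evaluation over the AI system inventory can be expressed as a predicate per gate. The record fields (`risk_level`, `conformity_assessed`, `human_oversight`) and the checks themselves are simplified illustrative assumptions, not the quality-gate engine's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Gate:
    """One compliance gate tied to a control; field names are illustrative."""
    gate_id: str
    control_ref: str
    check: Callable[[dict], bool]  # True when a system satisfies the gate

def evaluate_gates(gates: list, systems: list) -> dict:
    """Map each gate_id to pass/fail across the whole system inventory."""
    return {g.gate_id: all(g.check(s) for s in systems) for g in gates}

# Example gates mirroring the table above (checks are simplified sketches).
GATES = [
    Gate("AIACT-G001", "AIACT-C001",
         lambda s: s.get("risk_level") is not None),
    Gate("AIACT-G002", "AIACT-C002",
         lambda s: s.get("risk_level") != "high"
                   or s.get("conformity_assessed", False)),
    Gate("AIACT-G003", "AIACT-C003",
         lambda s: s.get("risk_level") != "high"
                   or s.get("human_oversight", False)),
]
```

A gate passes only when every system in the inventory satisfies its check, so one unclassified system fails AIACT-G001 for the whole portfolio.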
## Metrics
| Metric | Target | Source | Trend |
|---|---|---|---|
| AI system classification coverage | 100% | ai_registry | Improving |
| Conformity assessment completion | 100% high-risk | assessment_tracker | Stable |
| Model documentation completeness | 95% | docs_system | Improving |
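The classification-coverage metric reduces to a simple ratio over the registry. `registry` here stands in for the `ai_registry` source named in the table; the record shape is an assumption.

```python
def classification_coverage(registry: list) -> float:
    """Share of registered AI systems that carry a risk classification."""
    if not registry:
        return 0.0
    return sum(1 for s in registry if s.get("risk_level")) / len(registry)
```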
## Recommended Actions
- Complete AI system inventory and risk classification
- Implement human oversight for high-risk systems
- Prepare model cards for all production AI systems
- Document data governance practices for training data
## Cross-Regulation Overlaps
- GDPR — Data protection for AI training data
- DORA — Operational resilience for AI in financial services
- NIS2 — AI systems in critical infrastructure