AI Act Audit Demo

This page walks through the EU AI Act audit demo -- an interactive demonstration of risk classification, compliance auditing, evidence graph construction, and dual-regulation reporting.


Table of Contents

  1. Introduction and Prerequisites
  2. Launching the Demo
  3. Risk Classification
  4. Single Persona Audit
  5. Full Registry Audit
  6. Evidence Graph
  7. Dual-Regulation Report

Introduction and Prerequisites

System Requirements

  • Python 3.10+ with FCC installed (pip install -e ".[dev]")
  • No API key required

What This Demo Shows

The AI Act Audit Demo demonstrates how FCC maps its governance artifacts (constitutions, persona specifications, quality gates) to EU AI Act (Regulation 2024/1689) requirements and NIST AI RMF subcategories. It runs automated audits and produces structured reports.


Launching the Demo

fcc demo run ai-act-audit

Or programmatically:

from fcc.demos.registry import DemoRegistry

demo_reg = DemoRegistry()            # registry of available demos
demo = demo_reg.get("ai-act-audit")  # look up the demo by its name
demo.run()

Risk Classification

The demo classifies all 102 personas into EU AI Act risk tiers:

Risk Classification Summary:
  UNACCEPTABLE: 0
  HIGH: 8
  LIMITED: 24
  MINIMAL: 70

HIGH-risk personas:
  DGS - Data Governance Steward (governance)
  RAE - Responsible AI Ethicist (responsible_ai)
  EAG - Ethics Advisory Guardian (governance)
  ...

Classification rules, applied in order:

  1. 3+ hard-stop constitution rules = HIGH
  2. Governance/responsible_ai category = HIGH
  3. Decision-making role keywords = LIMITED
  4. Mandatory constitution patterns = LIMITED
  5. Default = MINIMAL
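The ordered rules above can be sketched as a first-match-wins function. This is an illustrative reimplementation, not FCC's internal classifier; the persona field names (`hard_stop_rules`, `category`, `decision_making`, `mandatory_patterns`) are assumptions.

```python
# Hypothetical sketch of the five classification rules; field names are
# illustrative, not the FCC persona schema.
HIGH_RISK_CATEGORIES = {"governance", "responsible_ai"}

def classify_risk(persona: dict) -> str:
    """Apply the rules in order; the first match wins."""
    if persona.get("hard_stop_rules", 0) >= 3:
        return "HIGH"                        # rule 1: hard-stop rules
    if persona.get("category") in HIGH_RISK_CATEGORIES:
        return "HIGH"                        # rule 2: sensitive category
    if persona.get("decision_making"):
        return "LIMITED"                     # rule 3: decision-making role
    if persona.get("mandatory_patterns"):
        return "LIMITED"                     # rule 4: mandatory patterns
    return "MINIMAL"                         # rule 5: default

print(classify_risk({"category": "governance"}))  # HIGH
```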


Single Persona Audit

The demo audits the Data Governance Steward against all applicable requirements:

Audit: DGS (Data Governance Steward)
  Risk tier: HIGH
  Applicable requirements: 12
  Findings:
    [pass] EU-AI-ACT-ART9-1: Risk Management System
      Evidence: constitution_registry (confidence: 100%)
      Evidence: persona_spec (confidence: 100%)
    [pass] EU-AI-ACT-ART11-1: Technical Documentation
      Evidence: persona_spec (confidence: 100%)
    [warning] EU-AI-ACT-ART12-1: Record-Keeping
      Remediation: [high] Complete audit trail configuration
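The finding structure shown above can be modeled with a small data class hierarchy. This is a sketch of the shape of the output, not the FCC API; the class and field names are assumptions.

```python
# Illustrative data model mirroring the audit output above; not FCC's
# actual classes.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str        # e.g. "constitution_registry", "persona_spec"
    confidence: float  # 0.0 - 1.0

@dataclass
class Finding:
    requirement_id: str
    title: str
    status: str        # "pass" | "warning" | "fail"
    evidence: list = field(default_factory=list)

findings = [
    Finding("EU-AI-ACT-ART9-1", "Risk Management System", "pass",
            [Evidence("constitution_registry", 1.0),
             Evidence("persona_spec", 1.0)]),
    Finding("EU-AI-ACT-ART12-1", "Record-Keeping", "warning"),
]

passed = sum(f.status == "pass" for f in findings)
print(f"{passed}/{len(findings)} passed")  # 1/2 passed
```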

Full Registry Audit

The demo runs a complete audit across all personas:

Full Registry Audit:
  Total checks: 1,224
  Passed: 1,180
  Failed: 0
  Warnings: 44
  Pass rate: 96.4%

Risk summary:
  minimal: 840
  limited: 288
  high: 96
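The per-tier numbers are consistent with the classification step (70/24/8 personas, the two unacceptable-free tiers plus HIGH) times 12 applicable requirements per persona, as in the DGS audit. A quick cross-check:

```python
# Cross-check of the audit totals; assumes 12 checks per persona, which
# matches the DGS example and the reported totals.
personas = {"minimal": 70, "limited": 24, "high": 8}
CHECKS_PER_PERSONA = 12

checks = {tier: n * CHECKS_PER_PERSONA for tier, n in personas.items()}
total = sum(checks.values())
pass_rate = 1180 / total

print(checks)              # {'minimal': 840, 'limited': 288, 'high': 96}
print(f"{pass_rate:.1%}")  # 96.4%
```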

Evidence Graph

The demo builds a compliance evidence knowledge graph:

Evidence Graph:
  Nodes: 456
  Edges: 892
  Node types: CONCEPT (requirements), DELIVERABLE (evidence),
              PERSONA, CONSTITUTION

The graph can be exported to Turtle, JSON-LD, or SKOS for external compliance tool integration.
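To illustrate the Turtle export shape, here is a toy serializer for requirement-to-evidence edges. The prefix and predicate names (`ex:`, `evidencedBy`, `appliesTo`) are made up for the example; FCC's actual vocabulary may differ.

```python
# Toy Turtle serializer showing the shape of an exported evidence edge;
# prefix and predicate names are illustrative only.
def to_turtle(triples):
    lines = ["@prefix ex: <https://example.org/audit/> ."]
    for subj, pred, obj in triples:
        lines.append(f"ex:{subj} ex:{pred} ex:{obj} .")
    return "\n".join(lines)

triples = [
    ("EU-AI-ACT-ART9-1", "evidencedBy", "constitution_registry"),
    ("EU-AI-ACT-ART9-1", "appliesTo", "DGS"),
]
print(to_turtle(triples))
```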


Dual-Regulation Report

The demo runs both EU AI Act and NIST AI RMF audits:

Dual-Regulation Report:
  EU AI Act:  1,180/1,224 passed (96.4%)
  NIST AI RMF: 1,195/1,224 passed (97.6%)

NIST Crosswalk highlights:
  Art. 9 → GOVERN 1.1, MAP 1.1, MANAGE 1.1
  Art. 11 → GOVERN 1.2, MAP 3.1
  Art. 14 → GOVERN 1.5, MANAGE 2.1
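The crosswalk rows above can be expressed as a simple lookup table for programmatic use; the table shape is illustrative, populated only with the highlighted rows.

```python
# The crosswalk highlights as a lookup table (illustrative shape; only
# the rows shown above are included).
CROSSWALK = {
    "Art. 9":  ["GOVERN 1.1", "MAP 1.1", "MANAGE 1.1"],
    "Art. 11": ["GOVERN 1.2", "MAP 3.1"],
    "Art. 14": ["GOVERN 1.5", "MANAGE 2.1"],
}

def nist_equivalents(article: str) -> list:
    """Return the NIST AI RMF subcategories mapped to an EU AI Act article."""
    return CROSSWALK.get(article, [])

print(nist_equivalents("Art. 9"))  # ['GOVERN 1.1', 'MAP 1.1', 'MANAGE 1.1']
```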

Tips

  • Use --verbose for per-persona finding details
  • Export evidence graphs for inclusion in regulatory submissions
  • Re-run after constitution changes to verify risk category updates