Governance Audit Flow

Duration: 60 minutes
Difficulty: Intermediate
Pattern: Governance Gate + Sequential Chain

This scenario demonstrates a compliance audit workflow with evidence gathering, scoring, escalation paths, and quality gate enforcement using governance personas.

Scenario Overview

Problem: A data processing system needs a compliance audit before deployment. The audit must verify governance policies, privacy controls, and anti-misinformation safeguards, with an escalation path for adverse findings.

Goal: Execute a four-persona governance audit that produces an audit report with compliance scores and escalation decisions.

Persona Team

| Persona | ID | Role in Audit | Category |
| --- | --- | --- | --- |
| Data Governance Steward | DGS | Initiates audit, defines scope | governance |
| Governance Compliance Auditor | GCA | Conducts formal audit, scores compliance | governance |
| Privacy Taxonomy Expert | PTE | Assesses privacy controls and data handling | privacy |
| Anti-fact Mitigation Specialist | AMS | Reviews for misinformation risks | responsible_ai |

Setup

from fcc.personas.registry import PersonaRegistry
from fcc.simulation.engine import SimulationEngine
from fcc.simulation.messages import SimulationMessage
from fcc.messaging.bus import EventBus
from fcc.messaging.events import Event, EventType
from fcc.collaboration.models import ApprovalGate, ApprovalStatus

registry = PersonaRegistry.from_yaml_directory("src/fcc/data/personas")
bus = EventBus()
engine = SimulationEngine(registry=registry, mode="deterministic")

audit_subject = "Customer analytics data processing pipeline v2.3"

Phase 1: Audit Initiation

The Data Governance Steward defines the audit scope and criteria:

initiation_message = SimulationMessage(
    sender="orchestrator",
    receiver="DGS",
    content=(
        f"Initiate a compliance audit for: {audit_subject}. "
        "Define the audit scope covering: data governance policies, "
        "data quality standards, access control policies, "
        "retention policies, and regulatory compliance (GDPR, CCPA). "
        "Produce an audit plan with specific checkpoints."
    ),
    phase="find",
)

audit_plan = engine.step(initiation_message)
print(f"Audit plan: {len(audit_plan.content)} chars")

bus.publish(Event(
    event_type=EventType.COLLABORATION_SESSION_STARTED,
    source="governance_audit",
    payload={"subject": audit_subject, "phase": "initiation"},
))

Phase 2: Evidence Gathering

The Governance Compliance Auditor conducts the formal audit:

audit_message = SimulationMessage(
    sender="DGS",
    receiver="GCA",
    content=(
        f"Conduct a formal compliance audit based on this plan:\n\n"
        f"{audit_plan.content[:500]}\n\n"
        "For each checkpoint, gather evidence, assess compliance level "
        "(compliant/partially_compliant/non_compliant), and document findings. "
        "Produce a structured audit report with per-checkpoint scores."
    ),
    phase="create",
)

audit_report = engine.step(audit_message)
print(f"Audit report: {len(audit_report.content)} chars")

Phase 3: Privacy Assessment

The Privacy Taxonomy Expert reviews privacy-specific controls:

privacy_message = SimulationMessage(
    sender="GCA",
    receiver="PTE",
    content=(
        f"Review the privacy controls for: {audit_subject}.\n\n"
        f"Audit findings so far:\n{audit_report.content[:500]}\n\n"
        "Assess: data classification, consent management, "
        "data minimization, purpose limitation, cross-border transfer "
        "controls, and data subject rights implementation. "
        "Rate each control on a 0-1 scale."
    ),
    phase="critique",
)

privacy_assessment = engine.step(privacy_message)
print(f"Privacy assessment: {len(privacy_assessment.content)} chars")

Phase 4: Misinformation Risk Review

The Anti-fact Mitigation Specialist checks for misinformation risks:

ams_message = SimulationMessage(
    sender="GCA",
    receiver="AMS",
    content=(
        f"Review the data processing pipeline for misinformation risks:\n\n"
        f"System: {audit_subject}\n"
        f"Audit context:\n{audit_report.content[:300]}\n\n"
        "Assess: data source reliability, output validation controls, "
        "bias detection mechanisms, and factual accuracy safeguards. "
        "Flag any high-risk areas requiring immediate remediation."
    ),
    phase="critique",
)

misinfo_review = engine.step(ams_message)
print(f"Misinformation review: {len(misinfo_review.content)} chars")

Compliance Scoring

Aggregate scores from all audit phases:

from fcc.collaboration.scoring import ScoringEngine

scorer = ScoringEngine()

scores = {
    "governance_audit": scorer.score_text(audit_report.content),
    "privacy_assessment": scorer.score_text(privacy_assessment.content),
    "misinfo_review": scorer.score_text(misinfo_review.content),
}

overall_score = sum(scores.values()) / len(scores)
print("\nCompliance Scores:")
for area, score in scores.items():
    print(f"  {area}: {score:.2f}")
print(f"  Overall: {overall_score:.2f}")
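The equal-weight average above treats every audit area the same. If some areas carry more regulatory weight (privacy findings often do), a weighted roll-up is a small change. A minimal sketch in plain Python; the weights and the sample scores below are illustrative assumptions, not values prescribed by the scenario:

```python
# Hypothetical area weights -- illustrative only, not part of the scenario.
weights = {
    "governance_audit": 0.3,
    "privacy_assessment": 0.5,
    "misinfo_review": 0.2,
}

# Sample scores standing in for ScoringEngine output (assumed values).
scores = {
    "governance_audit": 0.82,
    "privacy_assessment": 0.55,
    "misinfo_review": 0.71,
}

# Weighted average: each area's score scaled by its weight.
weighted_overall = sum(scores[area] * w for area, w in weights.items())
print(f"Weighted overall: {weighted_overall:.2f}")  # -> Weighted overall: 0.66
```

Weights should sum to 1.0 so the result stays on the same 0-1 scale as the per-area scores.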

Escalation Decision

Determine whether findings require escalation:

escalation_threshold = 0.6

escalations = []
for area, score in scores.items():
    if score < escalation_threshold:
        escalations.append({
            "area": area,
            "score": score,
            "severity": "critical" if score < 0.4 else "warning",
        })

if escalations:
    print(f"\nEscalation required ({len(escalations)} findings):")
    for esc in escalations:
        print(f"  [{esc['severity'].upper()}] {esc['area']}: "
              f"score={esc['score']:.2f}")

    bus.publish(Event(
        event_type=EventType.COLLABORATION_GATE_DECIDED,
        source="governance_audit.escalation",
        payload={
            "status": "escalated",
            "escalations": escalations,
            "overall_score": overall_score,
        },
    ))
else:
    print("\nNo escalations required. Audit passed.")
    bus.publish(Event(
        event_type=EventType.COLLABORATION_GATE_DECIDED,
        source="governance_audit.approval",
        payload={"status": "approved", "overall_score": overall_score},
    ))

Quality Gate Enforcement

Apply a formal approval gate:

gate = ApprovalGate(
    gate_id="compliance_audit_gate",
    gate_name="Compliance Audit Approval",
    required_approvers=("DGS", "GCA"),
    status=(
        ApprovalStatus.APPROVED
        if overall_score >= 0.7
        else ApprovalStatus.REJECTED
    ),
)

print(f"\nGate: {gate.gate_name}")
print(f"Status: {gate.status.value}")
print(f"Required approvers: {gate.required_approvers}")
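The gate above derives its status directly from the overall score. If you instead want each required approver to sign off individually, the decision logic looks like this sketch (plain Python, not the fcc API; the recorded decisions are hypothetical):

```python
# Required approvers, mirroring the gate definition above.
required_approvers = ("DGS", "GCA")

# Hypothetical recorded decisions, e.g. parsed from persona responses.
decisions = {"DGS": True, "GCA": True}

# The gate passes only if every required approver has explicitly approved;
# a missing decision counts as a rejection.
approved = all(decisions.get(persona, False) for persona in required_approvers)
print("approved" if approved else "rejected")  # -> approved
```

Treating an absent decision as a rejection keeps the gate fail-closed, which is usually what you want for compliance workflows.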

Exercises

  1. Add remediation: When escalations occur, route findings back to the responsible persona for remediation before re-audit.
  2. Historical tracking: Use the ChangeTracker to record audit results over time and compare across versions.
  3. Multi-tier review: Add a Layered Review pattern with DGS as Tier 1, GCA as Tier 2, and an external auditor as Tier 3.
  4. Constitution enforcement: Use the ConstitutionRegistry to verify that each persona is operating within its constitutional constraints during the audit.
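As a starting point for exercise 1, the remediation loop can be sketched in plain Python. The `remediate()` stub and the sample score are assumptions: a real version would route the finding back to the responsible persona via `engine.step()` and re-score the revised evidence.

```python
def remediate(area: str, score: float) -> float:
    """Hypothetical stand-in for routing a finding back to the
    responsible persona; assumes each pass improves the area's score."""
    return round(min(1.0, score + 0.2), 2)

threshold = 0.6
area_scores = {"privacy_assessment": 0.35}  # sample failing score (assumed)

# Re-audit each failing area until it clears the threshold or the
# attempt budget runs out.
for area, score in list(area_scores.items()):
    attempts = 0
    while score < threshold and attempts < 3:
        score = remediate(area, score)
        attempts += 1
    area_scores[area] = score

print(area_scores)  # privacy_assessment reaches 0.75 after two passes
```

Capping the number of remediation attempts prevents an unbounded loop when an area cannot be brought into compliance automatically; such areas should fall through to the escalation path instead.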

Summary

In this scenario you executed a governance audit workflow:

  • DGS initiated the audit with scope and criteria
  • GCA conducted the formal audit with evidence gathering
  • PTE assessed privacy-specific controls
  • AMS reviewed misinformation risks
  • Compliance scores were aggregated with escalation decisions
  • A formal approval gate enforced the outcome

Next Steps