
Ethical Review

An ethical review evaluates the fairness, privacy, safety, transparency, and societal-impact posture of an AI system, experiment, dataset, or model before deployment or publication. It is the FCC equivalent of an Institutional Review Board (IRB) check, extended to cover AI-specific concerns (disparate impact, dual-use, explainability gaps). Produce this review during the Critique phase for any artifact that processes personal data, makes decisions affecting individuals, or could be repurposed for harmful applications.

Template

Section 1: Review Metadata

Instructions: Ethical reviews are never anonymous. Record the review type (Pre-deployment / Pre-publication / Periodic / Incident-triggered), the reviewer's identity, the date, and the Innovation / Ecosystem ID.

  • Review ID: [FILL — e.g. ETH-2026-001]
  • Reviewer: [FILL]
  • Date: [FILL]
  • Review type: [Pre-deployment / Pre-publication / Periodic / Incident-triggered]
  • Innovation / Ecosystem ID: [FILL]
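
Purely as an illustration (the template does not prescribe a storage format, and every value below is hypothetical), a completed metadata block rendered as YAML might look like this:

```yaml
# Section 1: Review Metadata (illustrative values only)
review_id: ETH-2026-001
reviewer: "J. Doe (Forensic Auditor)"
date: 2026-03-14
review_type: Pre-deployment          # Pre-deployment / Pre-publication / Periodic / Incident-triggered
innovation_ecosystem_id: INNOV-042   # hypothetical ID
```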

Section 2: Artifact & Intended Use

Instructions: A well-scoped intended use and explicit out-of-scope uses are the anchor for every subsequent ethical judgement. Vague intended-use statements invalidate the review.

  • Artifact name / version: [FILL]
  • Artifact type: [Model / Dataset / Experiment / System / Agent / Other]
  • Authors: [FILL]
  • Intended use: [FILL]
  • Intended users: [FILL]
  • Out-of-scope uses: [FILL]
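
Continuing the hypothetical example, Section 2 for an imaginary ticket-triage model could read:

```yaml
# Section 2: Artifact & Intended Use (illustrative values only)
artifact_name: triage-classifier
artifact_version: 1.3.0
artifact_type: Model
authors: ["A. Researcher", "B. Engineer"]
intended_use: "Rank incoming support tickets by urgency for human dispatchers."
intended_users: "Internal support-operations staff."
out_of_scope_uses:
  - "Automated decisions about individuals without human review"
  - "Any use outside the support-ticket domain"
```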

Section 3: Risk Classification

Instructions: Select the applicable EU AI Act risk tier and complete the NIST AI RMF Govern / Map / Measure / Manage rows. The Unacceptable tier is a hard-stop: the artifact cannot proceed.

  • EU AI Act tier: [Unacceptable (hard-stop) / High / Limited / Minimal]
  • NIST AI RMF — Govern: [FILL]
  • NIST AI RMF — Map: [FILL]
  • NIST AI RMF — Measure: [FILL]
  • NIST AI RMF — Manage: [FILL]
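
A filled-in risk classification for the same hypothetical model might look like the sketch below; the tier and the RMF summaries are invented for illustration:

```yaml
# Section 3: Risk Classification (illustrative values only)
eu_ai_act_tier: High                 # any tier except Unacceptable may proceed
nist_ai_rmf:
  govern: "Accountability owner named; escalation path documented."
  map: "Context, intended use, and affected groups identified."
  measure: "Fairness and robustness metrics selected and baselined."
  manage: "Mitigations prioritised; residual risk accepted by the owner."
```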

Section 4: Fairness, Bias, and Privacy

Instructions: Tick each item only when the corresponding evidence exists. Sensitive data requires an affirmative DPIA reference.

  • Training / evaluation demographics documented
  • Known biases identified and mitigated
  • Fairness metrics defined and measured
  • Disparate-impact analysis conducted (if applicable)
  • Personal-data inventory completed
  • Data minimisation applied
  • Consent / legal basis documented
  • Anonymisation / pseudonymisation where required
  • DPIA completed or explicitly waived
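
Ticked items should point at evidence. A hypothetical, YAML-rendered completion of this checklist, including the affirmative DPIA reference, could look like:

```yaml
# Section 4: Fairness, Bias, and Privacy (illustrative evidence references)
demographics_documented: true        # see the dataset's datasheet
known_biases_mitigated: true         # reweighting applied; see bias report
fairness_metrics: ["demographic parity difference", "equalised odds gap"]
disparate_impact_analysis: not_applicable
personal_data_inventory: true
data_minimisation: true
consent_legal_basis: "Art. 6(1)(b) GDPR (contract)"   # hypothetical basis
anonymisation: pseudonymised
dpia: DPIA-2026-007                  # affirmative DPIA reference, as required
```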

Section 5: Safety, Robustness, Transparency

Instructions: Safety covers failure modes, human oversight, and rollback. Transparency is linked to the Model Card (OPEN-SCI-004a) and — for agents — the Agent Transparency Card (OPEN-SCI-009).

  • Failure modes identified and documented
  • Adversarial robustness tested where applicable
  • Fallback / kill-switch mechanisms available
  • Human-in-the-loop oversight defined
  • Model Card completed (link to OPEN-SCI-004a)
  • LLM usage declared (models, tasks, limitations)
  • Agent Transparency Card completed (link to OPEN-SCI-009)
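
For the same imaginary model (not an agent), Section 5 might be recorded as follows; the card identifiers are those named above, everything else is invented:

```yaml
# Section 5: Safety, Robustness, Transparency (illustrative values only)
failure_modes_documented: true
adversarial_robustness_tested: true
fallback_kill_switch: "Manual routing fallback; model can be disabled per tenant."
human_in_the_loop: "Dispatcher confirms every high-urgency classification."
model_card: OPEN-SCI-004a            # link to the completed Model Card
llm_usage_declared: true             # models, tasks, limitations
agent_transparency_card: not_applicable   # OPEN-SCI-009, agents only
```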

Section 6: Societal Impact, Dual Use, and Decision

Instructions: Enumerate positive and negative impacts (with likelihood and severity) and declare dual-use safeguards. The four decision labels are mutually exclusive; Approved with conditions requires explicitly enumerated conditions, as in the sketch after this list.

  • Positive impacts: [FILL]
  • Negative impacts (likelihood / severity / mitigation): [FILL]
  • Dual-use considerations + safeguards: [FILL]
  • Decision: [Approved / Approved with conditions / Requires revision / Rejected]
  • Conditions (if applicable): [FILL]
  • Next review date: [FILL]
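
A hypothetical decision block with explicitly enumerated conditions might read:

```yaml
# Section 6: Societal Impact, Dual Use, and Decision (illustrative values only)
positive_impacts: "Faster response to urgent tickets; reduced triage workload."
negative_impacts:
  - impact: "Systematic under-prioritisation of non-native speakers"
    likelihood: medium
    severity: high
    mitigation: "Language-stratified fairness monitoring"
dual_use: "Low; access restricted to the support domain."
decision: Approved with conditions
conditions:
  - "Quarterly fairness re-measurement"
  - "DPIA refresh before any new data source is added"
next_review_date: 2026-09-14
```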

Adoption Checklist

  • All required sections completed
  • Artifact peer-reviewed by at least one R.I.S.C.E.A.R. peer
  • Stored in the project's designated docs location
  • Linked from README or equivalent index
  • Versioned + date-stamped with a scheduled re-review

References

  • PHOENIX v4.0.0 — docs/resources/templates/open-science/ethics-review.md
  • IEEE 7000-2021 — Standard Model Process for Addressing Ethical Concerns During System Design
  • NIST AI RMF 1.0 (2023) — AI Risk Management Framework
  • EU AI Act (Regulation 2024/1689) — Risk Classification
  • Montreal Declaration for Responsible AI (2018)

FCC integration

This template is referenced from the Forensic Auditor persona (src/fcc/data/personas/forensic_auditor.yaml) as part of the Critique-phase evidence set. Ethical reviews are cross-linked by the compliance auditor under src/fcc/compliance/auditor.py to their EU AI Act and NIST AI RMF requirements in src/fcc/data/compliance/eu_ai_act_requirements.yaml and src/fcc/data/compliance/nist_ai_rmf_mapping.yaml. See also src/fcc/data/governance/ethics_framework.yaml and src/fcc/data/governance/ethics_assessment.yaml.
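
The schemas of those compliance files are defined in the repository itself; purely to illustrate the kind of cross-link the auditor consumes, a mapping entry might look like the sketch below, where every requirement ID is invented:

```yaml
# Illustrative only; the authoritative schemas live in
# src/fcc/data/compliance/eu_ai_act_requirements.yaml and nist_ai_rmf_mapping.yaml
ethics_review_links:
  ETH-2026-001:
    eu_ai_act: [ART-9, ART-13]            # hypothetical requirement IDs
    nist_ai_rmf: [MAP-1.1, MEASURE-2.11]  # hypothetical RMF subcategory IDs
```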