# FAIR Self-Assessment
A FAIR self-assessment scores a digital artifact — dataset, model, service, or code — against the FAIR principles (Findable, Accessible, Interoperable, Reusable) and the FAIR² extensions for AI-readiness (context-rich metadata, AI-ready design, responsible & verifiable governance, transparency & trust). The assessment produces an ordinal maturity level, an explicit gap list, and an improvement roadmap that downstream Critique personas can track. Produce this artifact during the Critique phase whenever an artifact is published outside its owning team or undergoes a material change to its metadata or distribution surface.
## Template
### Section 1: Assessment Metadata
Instructions: Give each assessment a stable ID so scores can be compared over time. Link to related Dataset Cards, Model Cards, and DMPs — these are the evidence the scores lean on.
| Field | Value |
|---|---|
| Assessment ID | [FILL — e.g. FAIR-2026-001] |
| Artifact assessed | [FILL] |
| Artifact type | [Dataset / Model / Service / Code / Other] |
| Assessor | [FILL] |
| Date | [FILL] |
| Related Dataset Card / Model Card / DMP | [FILL] |
### Section 2: Scoring Guide
Instructions: Each criterion below is scored 0-3. Use the definitions consistently — score creep across assessments makes the maturity trend worthless.
| Score | Meaning |
|---|---|
| 0 | Not addressed |
| 1 | Partially addressed (informal or incomplete) |
| 2 | Largely addressed (documented but not machine-actionable) |
| 3 | Fully addressed (documented, machine-actionable, verifiable) |
### Section 3: Findable (F1-F4, max 12)
Instructions: Does the artifact have a persistent identifier, rich metadata, self-describing metadata that references its own ID, and registration in a searchable resource?
- F1 Persistent identifier: [0-3] — [evidence]
- F2 Rich metadata: [0-3] — [evidence]
- F3 Metadata includes identifier: [0-3] — [evidence]
- F4 Registered / indexed: [0-3] — [evidence]
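A score of 3 requires machine-actionable metadata, and F3 specifically asks that the metadata record reference the artifact's own persistent identifier. As a minimal sketch (hypothetical field names, assuming a schema.org-style JSON-LD record; not the template's prescribed check), this is easy to verify programmatically:

```python
def check_f3(metadata: dict) -> bool:
    """F3 sketch: the metadata record must embed the artifact's own PID.

    Assumes a schema.org-style record where the persistent identifier
    appears under "@id" or "identifier" (illustrative convention only).
    """
    pid = metadata.get("@id") or metadata.get("identifier")
    return bool(pid) and str(pid).startswith(("https://doi.org/", "doi:"))


record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "@id": "https://doi.org/10.9999/example",  # placeholder DOI
    "name": "Example dataset",
}
print(check_f3(record))  # True: the record carries its own DOI
```

A record that omits the identifier entirely would score 0 or 1 on F3 under the guide in Section 2.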
### Section 4: Accessible (A1-A2, max 12)
Instructions: Retrievable by identifier over an open protocol, with authentication hooks where needed, and with metadata that survives the artifact itself.
- A1 Retrievable by identifier: [0-3] — [evidence]
- A1.1 Open, free, universally implementable protocol: [0-3] — [evidence]
- A1.2 Auth/authz where needed: [0-3] — [evidence]
- A2 Metadata persists beyond artifact: [0-3] — [evidence]
### Section 5: Interoperable & Reusable (I1-I3, R1.*, max 21)
Instructions: Interoperability covers formal representations, FAIR vocabularies, and qualified cross-references. Reusability covers licensing, detailed provenance, community standards compliance, and a current DMP.
- I1 Formal knowledge representation: [0-3] — [evidence]
- I2 FAIR vocabularies: [0-3] — [evidence]
- I3 Qualified references to other artifacts: [0-3] — [evidence]
- R1 Clear usage license: [0-3] — [evidence]
- R1.1 Detailed provenance: [0-3] — [evidence]
- R1.2 Community standards met: [0-3] — [evidence]
- R1.3 DMP in place (link to OPEN-SCI-010): [0-3] — [evidence]
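R1 rewards a license that is both declared and machine-actionable. One common way to operationalise that is to match the declared license against SPDX identifiers. A minimal sketch (illustrative SPDX subset and scoring cutoffs, not the template's official rubric):

```python
# Small illustrative subset of SPDX license identifiers, not the full list.
SPDX_SUBSET = {"CC-BY-4.0", "CC0-1.0", "MIT", "Apache-2.0", "ODbL-1.0"}


def score_r1(license_id):
    """Map a declared license to the 0-3 scale from Section 2 (sketch)."""
    if not license_id:
        return 0  # not addressed: no license declared at all
    if license_id in SPDX_SUBSET:
        return 3  # documented, machine-actionable, verifiable
    return 1  # declared informally (e.g. free text), not machine-actionable


print(score_r1("CC-BY-4.0"))   # 3
print(score_r1("see README"))  # 1
print(score_r1(None))          # 0
```

A score of 2 would apply where the license is clearly documented in prose but not exposed as a recognised identifier in the metadata; that judgment stays with the assessor.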
### Section 6: FAIR² Extensions & Maturity Level (F²1-F²4, max 12)
Instructions: The FAIR² dimensions encode AI-readiness and responsible-AI posture. Map total score to the 5-level maturity scale and publish a roadmap for the next level.
- F²1 Context-rich metadata: [0-3] — [evidence]
- F²2 AI-ready design: [0-3] — [evidence]
- F²3 Responsible & verifiable: [0-3] — [evidence]
- F²4 Transparency & trust: [0-3] — [evidence]
| Level | Range | Label |
|---|---|---|
| 1 | 0-14 | Initial |
| 2 | 15-28 | Managed |
| 3 | 29-42 | Defined |
| 4 | 43-51 | Measured |
| 5 | 52-57 | Optimised |
Current maturity level: [FILL]. Improvement roadmap (priority / gap / action / owner / target date): [FILL].
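The maturity bands in the table above can be encoded directly, which keeps level assignment consistent across assessments. A minimal Python sketch (function name is illustrative):

```python
def maturity_level(total: int):
    """Map a total score (0-57 across Sections 3-6) to the 5-level scale."""
    if not 0 <= total <= 57:
        raise ValueError("total score must be between 0 and 57")
    # Upper bound of each band, per the maturity table.
    bands = [
        (14, 1, "Initial"),
        (28, 2, "Managed"),
        (42, 3, "Defined"),
        (51, 4, "Measured"),
        (57, 5, "Optimised"),
    ]
    for upper, level, label in bands:
        if total <= upper:
            return level, label


print(maturity_level(30))  # (3, 'Defined')
```

Because the bands are inclusive upper bounds, a score sitting exactly on a boundary (e.g. 42) stays in the lower level, matching the table.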
## Adoption Checklist
- All required sections completed
- Artifact peer-reviewed by at least one R.I.S.C.E.A.R. peer
- Stored in the project's designated docs location
- Linked from README or equivalent index
- Versioned + date-stamped with a scheduled reassessment
## References
- PHOENIX v4.0.0 — docs/resources/templates/open-science/fair-assessment.md
- FAIR² Open Specification (October 2025)
- Wilkinson, M. D. et al. (2016) — FAIR Guiding Principles, Scientific Data 3
- Huerta, E. A. et al. (2023) — FAIR for AI, Nature Scientific Data 10
- GO FAIR — FAIR Data Self-Assessment Tool (https://www.go-fair.org/)
## FCC integration
This template is referenced from the Forensic Auditor persona
(src/fcc/data/personas/forensic_auditor.yaml) as part of the
Critique-phase evidence set — every dataset or model under audit needs
a current FAIR self-assessment. The assessment feeds the compliance
auditor's evidence graph under src/fcc/compliance/evidence_graph.py.
See also src/fcc/data/governance/open_science_gates.yaml.