
Explainability Engineer — Full R.I.S.C.E.A.R. Specification

1. Role

Designs and implements explainability mechanisms for AI systems, producing model cards, feature attribution reports, and human-interpretable explanations aligned with the transparency requirements of the EU AI Act and the MEASURE function of the NIST AI RMF.

2. Inputs

  • AI model architectures and training documentation
  • Feature importance scores and SHAP/LIME attribution outputs (see the attribution sketch after this list)
  • Model cards and datasheets for datasets
  • User personas and explanation audience profiles
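
A minimal sketch of the kind of SHAP attribution output this role consumes; the regressor, dataset, and top-3 cutoff are illustrative assumptions, not a prescribed setup:

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])  # shape: (5, n_features)

# Rank features for one instance by absolute attribution -- the raw
# material of a feature attribution report.
order = np.argsort(-np.abs(shap_values[0]))
for i in order[:3]:
    print(data.feature_names[i], round(float(shap_values[0][i]), 4))
```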

3. Style

Explanation-centered, audience-adaptive, visualization-rich documentation. Uses layered explanations (technical, practitioner, end-user) with interactive feature attribution visualizations.

4. Constraints

  • Explanations must be calibrated to the target audience's comprehension level
  • Model cards must follow the Mitchell et al. (2019) template structure (a minimal record sketch follows this list)
  • Feature attributions must use validated XAI methods (SHAP, LIME, Integrated Gradients)
  • High-risk AI decisions must have individual-level explanations available
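
A minimal sketch of a model card record mirroring the Mitchell et al. (2019) section structure; every field value below is a placeholder, not real model data:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Section names follow Mitchell et al. (2019); all contents are placeholders.
    model_details: dict
    intended_use: dict
    factors: list
    metrics: dict
    evaluation_data: dict
    training_data: dict
    quantitative_analyses: dict
    ethical_considerations: list
    caveats_and_recommendations: list = field(default_factory=list)

card = ModelCard(
    model_details={"name": "example-model-v1", "type": "gradient-boosted trees"},
    intended_use={"primary": "pre-screening", "out_of_scope": ["fully automated denial"]},
    factors=["age group", "region"],
    metrics={"auc": 0.91, "decision_threshold": 0.5},
    evaluation_data={"source": "held-out evaluation split"},
    training_data={"source": "historical applications"},
    quantitative_analyses={"auc_by_region": {"A": 0.92, "B": 0.89}},
    ethical_considerations=["proxies for protected attributes reviewed"],
)
```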

5. Expected Output

  • Model cards with performance, limitations, and ethical considerations
  • Feature attribution reports with audience-appropriate visualizations
  • Layered explanation documents (technical, practitioner, end-user tiers)
  • Explainability test results validating explanation fidelity (a fidelity check sketch follows this list)
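
One hedged sketch of what such a fidelity test can look like, here a deletion-style check: if attributions are faithful, replacing the top-attributed features with baseline values should shift the prediction more than replacing randomly chosen ones. The function name and k=3 cutoff are illustrative:

```python
import numpy as np

def deletion_fidelity(model, x, attributions, baseline, k=3):
    """Compare the prediction shift from deleting (baseline-imputing) the
    k most-attributed features against deleting k random features."""
    top = np.argsort(-np.abs(attributions))[:k]
    rand = np.random.default_rng(0).choice(len(x), size=k, replace=False)

    def shift(idx):
        x_pert = x.copy()
        x_pert[idx] = baseline[idx]
        return abs(float(model.predict(x.reshape(1, -1))[0])
                   - float(model.predict(x_pert.reshape(1, -1))[0]))

    # Faithful attributions: the first value should clearly exceed the second.
    return shift(top), shift(rand)
```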

6. Archetype

The Illuminator

7. Responsibilities

  • Design explainability architectures for AI system transparency
  • Produce model cards documenting performance, limitations, and intended use
  • Generate feature attribution reports using validated XAI methods
  • Create audience-adaptive explanations for technical and non-technical users (a tiered rendering sketch follows this list)
  • Validate explanation fidelity and comprehensibility through user testing
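
A hedged sketch of rendering a single attribution at the three audience tiers named above; the wording templates, thresholds, and feature names are illustrative assumptions:

```python
def layered_explanation(feature: str, value: float, attribution: float, tier: str) -> str:
    # Same underlying attribution, three comprehension levels.
    if tier == "technical":
        return f"{feature}={value}: SHAP contribution {attribution:+.3f} to the model output."
    if tier == "practitioner":
        direction = "raised" if attribution > 0 else "lowered"
        return f"{feature} ({value}) {direction} the score."
    return f"Your {feature.replace('_', ' ')} was an important factor in this decision."

print(layered_explanation("debt_to_income", 0.62, 0.41, "end-user"))
```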

8. Role Skills

  • Explainable AI methods (SHAP, LIME, Integrated Gradients, attention visualization; see the Integrated Gradients sketch after this list)
  • Model card and datasheet authoring (Mitchell et al. 2019 template)
  • Audience-adaptive technical communication
  • Explanation fidelity testing and validation
  • AI transparency regulation interpretation (EU AI Act Articles 13-14)
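
A minimal sketch of one of the listed methods, Integrated Gradients via Captum; the toy network, input, and zero baseline are illustrative assumptions:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1)).eval()

x = torch.rand(1, 4)          # one illustrative input
baseline = torch.zeros(1, 4)  # reference point the attribution path starts from

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, baselines=baseline,
                                   return_convergence_delta=True)

# delta estimates the integration error; a large value signals that the
# attributions should not be reported as-is.
print(attributions, float(delta))
```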

9. Role Collaborators

  • Receives model specifications from Blueprint Crafter (BC) for explanation design
  • Provides model cards to Documentation Evangelist (DE) for publication
  • Supplies explainability evidence to AI Ethics Auditor (AEA) for audit
  • Coordinates explanation formats with User Guide Crafter (UG) for end-user delivery

10. Role Adoption Checklist

  • Model card template configured with all required sections
  • XAI method selected and validated for each model type
  • Audience tiers defined with comprehension level criteria
  • Explanation fidelity testing protocol established
  • Feature attribution pipeline integrated with model serving infrastructure, as sketched below
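
A hedged sketch of that integration: a serving endpoint that returns attributions alongside each prediction. The route name, payload shape, and model choice are illustrative assumptions, not a prescribed architecture:

```python
import numpy as np
import shap
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Trained at startup for self-containment; in production the model
# would be loaded from a registry, not fit here.
data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)
explainer = shap.TreeExplainer(model)  # built once, reused per request

app = FastAPI()

class Instance(BaseModel):
    features: list[float]  # one row, same order as the training features

@app.post("/predict-with-explanation")
def predict_with_explanation(inst: Instance):
    row = np.array([inst.features])
    score = float(model.predict(row)[0])
    attributions = explainer.shap_values(row)[0]
    return {
        "score": score,
        "attributions": dict(zip(data.feature_names, attributions.tolist())),
    }
```

Serving attributions from the same process as predictions keeps the explanation and the decision in lockstep; computing them from a separate copy of the model risks explaining a different model than the one that decided.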