Explainability Engineer — Refactor Workflow

Description: Improve existing artifact structure and quality

When to Use

Use the refactor workflow when existing explainability artifacts (model cards, attribution reports, layered explanations) need their structure or quality improved, rather than being created from scratch.

Input Requirements

  • AI model architectures and training documentation
  • Feature importance scores and SHAP/LIME attribution outputs (see the sketch after this list)
  • Model cards and datasheets for datasets
  • User personas and explanation audience profiles
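
Below is a minimal sketch of producing one of these inputs, SHAP attribution outputs, for a hypothetical tree model. The dataset, model, and sample size are illustrative assumptions, not part of this workflow's contract.

```python
# Hypothetical example: generate the SHAP attribution outputs this
# workflow consumes. Model and data are stand-ins.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global importance: mean absolute attribution per feature.
importance = abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```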

Process

  1. Initialize — Load the inputs above and set up the refactor context for the Explainability Engineer
  2. Execute — Perform the refactor, following the Explainability Engineer's style
  3. Validate — Check the output against the quality gates below (one such check is sketched after this list)
  4. Handoff — Deliver the validated results to downstream personas
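
As one concrete instance of the Validate step, the sketch below checks a drafted model card against the section list from Mitchell et al. (2019). The `card` dictionary layout is an assumption; adapt it to however your artifacts are stored.

```python
# Hypothetical quality-gate check for the Validate step: every
# Mitchell et al. (2019) model card section must be present and non-empty.
REQUIRED_SECTIONS = [
    "Model Details", "Intended Use", "Factors", "Metrics",
    "Evaluation Data", "Training Data", "Quantitative Analyses",
    "Ethical Considerations", "Caveats and Recommendations",
]

def validate_model_card(card):
    """Return missing or empty sections; an empty list means the gate passes."""
    return [s for s in REQUIRED_SECTIONS if not card.get(s, "").strip()]

missing = validate_model_card({"Model Details": "Gradient-boosted trees, v1.2"})
if missing:
    print("Quality gate failed; missing sections:", missing)
```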

Output

  • Model cards with performance, limitations, and ethical considerations
  • Feature attribution reports with audience-appropriate visualizations
  • Layered explanation documents (technical, practitioner, end-user tiers)
  • Explainability test results validating explanation fidelity (an example test follows this list)
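
One way to produce those fidelity results is a deletion test: masking the highest-attributed features should shift the prediction more than masking random ones. The sketch below assumes a scikit-learn-style `model.predict` and a single input row; all names are illustrative.

```python
# Hypothetical fidelity (deletion) test for a single explanation.
import numpy as np

def deletion_fidelity(model, x, attributions, baseline, k=3, trials=20, seed=0):
    """Compare the prediction shift from masking the top-k attributed
    features against masking k random features."""
    rng = np.random.default_rng(seed)
    top_k = np.argsort(-np.abs(attributions))[:k]

    def shift(idx):
        masked = x.copy()
        masked[idx] = baseline[idx]  # replace selected features with baseline values
        return abs(model.predict(masked[None, :])[0]
                   - model.predict(x[None, :])[0])

    random_shift = np.mean([shift(rng.choice(len(x), k, replace=False))
                            for _ in range(trials)])
    return shift(top_k), float(random_shift)

# Gate: the explanation is considered faithful when the top-k shift
# exceeds the average random-mask shift.
# top, rand = deletion_fidelity(model, x_row, shap_row, X.mean(axis=0))
```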

Quality Gates

  • Explanations must be calibrated to the target audience's comprehension level
  • Model cards must follow the Mitchell et al. (2019) template structure
  • Feature attributions must use validated XAI methods such as SHAP, LIME, or Integrated Gradients (a sketch of the latter follows this list)
  • High-risk AI decisions must have individual-level explanations available
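
For teams without a SHAP or LIME pipeline, Integrated Gradients is straightforward to implement for any differentiable model. The sketch below uses plain numpy on a toy sigmoid model purely for illustration; in practice the gradient would come from your framework's autograd.

```python
# Illustrative Integrated Gradients: attribute f(x) - f(baseline) to features
# by averaging gradients along the straight-line path from baseline to x.
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    alphas = np.linspace(0.0, 1.0, steps)
    path = baseline + alphas[:, None] * (x - baseline)  # interpolated inputs
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy model f(x) = sigmoid(w . x); its gradient is sigmoid'(w . x) * w.
w = np.array([0.5, -1.2, 2.0])
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grad_fn = lambda x: sigmoid(w @ x) * (1.0 - sigmoid(w @ x)) * w

print(integrated_gradients(grad_fn, np.array([1.0, 0.5, -0.3]), np.zeros(3)))
```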