Explainability Engineer — Refactor Workflow¶
Description: Improve existing artifact structure and quality
When to Use¶
Use the refactor workflow to improve the structure and quality of existing explainability artifacts such as model cards, attribution reports, and layered explanation documents.
Input Requirements¶
- AI model architectures and training documentation
- Feature importance scores and SHAP/LIME attribution outputs (a sketch of producing these follows this list)
- Model cards and datasheets for datasets
- User personas and explanation audience profiles
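The attribution inputs are typically raw outputs from an XAI library. Below is a minimal sketch of how SHAP attributions might be produced, assuming the `shap` and `scikit-learn` packages are available; the RandomForestRegressor and diabetes dataset are stand-ins for the actual model and data under review:

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model and data; in practice, use the model whose explanations are being refactored.
data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Global importance ranking: mean absolute attribution per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```

The per-sample rows of `shap_values` also serve as the individual-level explanations called for in the quality gates below.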
Process¶
- Initialize — Set up the refactor context for the Explainability Engineer
- Execute — Perform the refactor operation following the Explainability Engineer's style
- Validate — Check output against quality gates
- Handoff — Deliver results to downstream personas
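A skeletal sketch of how these four steps might be wired together; the `RefactorContext` type and all field and function names are hypothetical, not part of any existing tooling:

```python
from dataclasses import dataclass, field

@dataclass
class RefactorContext:
    """Hypothetical container for one refactor run (all names are illustrative)."""
    artifacts: dict = field(default_factory=dict)          # model cards, attribution reports, explanation docs
    audience_profiles: list = field(default_factory=list)  # explanation audience profiles from the inputs above
    gate_failures: list = field(default_factory=list)      # populated during validation

def run_refactor(context: RefactorContext) -> RefactorContext:
    # Initialize: confirm the required input artifacts are present.
    missing = [k for k in ("model_card", "attributions") if k not in context.artifacts]
    if missing:
        raise ValueError(f"Refactor context is missing inputs: {missing}")
    # Execute: restructure each artifact (placeholder pass-through here).
    refactored = {name: artifact for name, artifact in context.artifacts.items()}
    # Validate: record quality-gate failures (see Quality Gates below).
    context.gate_failures = []
    # Handoff: replace the working set with the validated artifacts for downstream personas.
    context.artifacts = refactored
    return context
```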
Output¶
- Model cards with performance, limitations, and ethical considerations
- Feature attribution reports with audience-appropriate visualizations
- Layered explanation documents (technical, practitioner, and end-user tiers; sketched below)
- Explainability test results validating explanation fidelity
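As one concrete output shape, a layered explanation document could be represented as a three-tier record; the tier names follow the list above, while the field names and contents are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    """Illustrative three-tier explanation document (field names are assumptions)."""
    technical: str     # attribution method, parameters, and fidelity evidence for reviewers
    practitioner: str  # which features drove the decision, in domain terms
    end_user: str      # plain-language rationale a non-expert can act on

doc = LayeredExplanation(
    technical="Attributions from TreeExplainer SHAP; feature ranking and fidelity checks attached.",
    practitioner="The risk score for this case was driven mainly by the top-ranked input feature.",
    end_user="Your result was mostly influenced by your most recent measurements.",
)
print(doc.end_user)
```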
Quality Gates¶
- Explanations must be calibrated to the target audience's comprehension level
- Model cards must follow the Mitchell et al. (2019) template structure
- Feature attributions must use validated XAI methods (SHAP, LIME, Integrated Gradients)
- High-risk AI decisions must have individual-level explanations available
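A minimal sketch of how two of these gates could be checked automatically; the section list follows the Mitchell et al. (2019) model card template, while the function and constant names are assumptions:

```python
# Sections from the Mitchell et al. (2019) "Model Cards for Model Reporting" template.
MITCHELL_2019_SECTIONS = [
    "Model Details", "Intended Use", "Factors", "Metrics", "Evaluation Data",
    "Training Data", "Quantitative Analyses", "Ethical Considerations",
    "Caveats and Recommendations",
]

# XAI methods accepted by the feature-attribution gate.
VALIDATED_XAI_METHODS = {"SHAP", "LIME", "Integrated Gradients"}

def check_quality_gates(model_card_sections: set, attribution_method: str) -> list:
    """Return a list of gate failures; an empty list means both checks pass."""
    failures = []
    missing = [s for s in MITCHELL_2019_SECTIONS if s not in model_card_sections]
    if missing:
        failures.append(f"Model card is missing sections: {missing}")
    if attribution_method not in VALIDATED_XAI_METHODS:
        failures.append(f"'{attribution_method}' is not a validated XAI method")
    return failures
```

Calling `check_quality_gates({"Model Details", "Metrics"}, "SHAP")` would report the missing template sections while passing the attribution-method check.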