# Explainability Engineer — Compare Workflow
Description: Evaluate multiple approaches or versions
## When to Use
Use the compare workflow when you need to evaluate multiple candidate models, explanation methods, or versions of the same model against each other before committing to one.
## Input Requirements
- AI model architectures and training documentation
- Feature importance scores and SHAP/LIME attribution outputs
- Model cards and datasheets for datasets
- User personas and explanation audience profiles
## Process
- Initialize — Gather the input artifacts (model documentation, attribution outputs, audience profiles) and set up the comparison context
- Execute — Run the comparison, applying the Explainability Engineer's attribution methods and audience calibration to each candidate
- Validate — Check each candidate's outputs against the quality gates listed below
- Handoff — Package the comparison results and deliver them to downstream personas
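The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch, not a prescribed implementation: the candidate models, the use of permutation importance as the attribution method, and the chance-level accuracy gate are all assumptions chosen to keep the example self-contained.

```python
# Hypothetical sketch of the compare workflow: Initialize -> Execute ->
# Validate -> Handoff. Models, attribution method (permutation importance),
# and the quality-gate threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Initialize: shared data and the candidate models to compare.
X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# Execute: fit each candidate and compute feature attributions.
results = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, n_repeats=5,
                                 random_state=0)
    results[name] = {
        "accuracy": model.score(X_te, y_te),
        "importances": imp.importances_mean,
    }

# Validate: a simple quality gate -- every candidate must beat chance.
assert all(r["accuracy"] > 0.5 for r in results.values())

# Handoff: a comparison summary for downstream personas.
for name, r in results.items():
    top = int(r["importances"].argmax())
    print(f"{name}: accuracy={r['accuracy']:.2f}, top feature=x{top}")
```

In practice the Execute step would swap in whichever validated XAI method the quality gates require (SHAP, LIME, Integrated Gradients), but the pipeline shape stays the same.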
## Output
- Model cards with performance, limitations, and ethical considerations
- Feature attribution reports with audience-appropriate visualizations
- Layered explanation documents (technical, practitioner, end-user tiers)
- Explainability test results validating explanation fidelity
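A layered explanation document renders the same attribution data at different comprehension levels. The helper below is a hypothetical sketch of the technical and end-user tiers; the field names, wording, and top-k cutoff are illustrative.

```python
# Hypothetical renderer for two tiers of a layered explanation document.
# Field names ("technical", "end_user") and phrasing are illustrative.
def layered_explanation(attributions, feature_names, top_k=3):
    # Rank features by the magnitude of their attribution.
    ranked = sorted(zip(feature_names, attributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    # Technical tier: every feature with its signed attribution score.
    technical = "; ".join(f"{name}={value:+.3f}" for name, value in ranked)
    # End-user tier: the top drivers in plain language, no numbers.
    drivers = ", ".join(name for name, _ in ranked[:top_k])
    end_user = f"The decision was driven mainly by: {drivers}."
    return {"technical": technical, "end_user": end_user}

tiers = layered_explanation([0.02, -0.41, 0.17],
                            ["age", "income", "tenure"])
print(tiers["technical"])
print(tiers["end_user"])
# income ranks first because it has the largest absolute attribution.
```

The practitioner tier would sit between these two, typically pairing the ranked scores with the audience-appropriate visualizations named above.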
## Quality Gates
- Explanations must be calibrated for target audience comprehension level
- Model cards must follow the Mitchell et al. (2019) template structure
- Feature attributions must use validated XAI methods (SHAP, LIME, Integrated Gradients)
- High-risk AI decisions must have individual-level explanations available
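One way to validate explanation fidelity (the last output above) is to fit a simple surrogate to the black-box model in a local neighbourhood, LIME-style, and score how well the surrogate reproduces the model's predictions. The sketch below assumes a regression model, a Gaussian neighbourhood, and a linear surrogate; the neighbourhood scale and any pass/fail threshold would be project-specific choices, not standard values.

```python
# A minimal fidelity check: fit a linear surrogate to a black-box model
# around one instance and report R^2 against the model's own predictions.
# The data, neighbourhood scale (0.3), and model choice are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=300)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sample a local neighbourhood around the instance being explained.
x0 = X[0]
neighbourhood = x0 + rng.normal(scale=0.3, size=(200, 4))
preds = model.predict(neighbourhood)

# Fidelity = how well the interpretable surrogate mimics the model locally.
surrogate = LinearRegression().fit(neighbourhood, preds)
fidelity = surrogate.score(neighbourhood, preds)
print(f"surrogate fidelity (R^2): {fidelity:.2f}")
```

A low fidelity score means the explanation does not faithfully describe the model's local behaviour, which would fail the explainability test gate regardless of how readable the explanation is.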