# Interpretability Analyst — Refactor Workflow
Description: Improve existing artifact structure and quality
## When to Use
Use the refactor workflow to improve the structure and quality of existing interpretability artifacts, such as the attribution reports, fairness assessments, and bias matrices listed under Output, rather than to produce new deliverables from scratch.
## Input Requirements
- Trained model artifacts and model cards from the Model Architect
- Experiment results from the Experiment Scientist
- Fairness evaluation criteria and protected attribute definitions (a hypothetical shape is sketched after this list)
- Regulatory transparency requirements and explainability standards
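
As a rough illustration, the fairness criteria and protected attribute definitions often arrive as a small configuration object. The shape below is hypothetical: every attribute name, metric, and threshold in it is an assumption, not a value this workflow prescribes.

```python
# Hypothetical shape of the fairness-criteria input. Every attribute name,
# metric, and threshold here is an illustrative assumption.
fairness_criteria = {
    "protected_attributes": ["gender", "age_band", "ethnicity"],
    "metrics": {
        "demographic_parity_ratio": {"min": 0.8},      # four-fifths rule
        "equalized_odds_difference": {"max": 0.1},
    },
    "reference_policy": "most_favoured_group",
}
```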
## Process
1. Initialize — Set up the refactor context for the Interpretability Analyst
2. Execute — Perform the refactor operation in line with the Interpretability Analyst's conventions
3. Validate — Check the output against the quality gates
4. Handoff — Deliver results to downstream personas
## Output
- SHAP/LIME feature attribution reports with visualizations (sketched below)
- Fairness assessment reports across protected attributes
- Bias detection matrices with severity classification (sketched below)
- Explainability artifact packages for compliance and audit
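
The attribution report and its visualization can be produced with standard tooling. The sketch below assumes a scikit-learn tree ensemble and the shap package; the diabetes dataset, the model choice, and the shap_attribution_report.csv path are illustrative placeholders rather than parts of this workflow.

```python
# A minimal sketch of a SHAP feature attribution report for a tabular model.
# Dataset, model, and output path are illustrative placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Stand-in tabular dataset and trained model.
data = load_diabetes(as_frame=True)
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global attribution table: mean absolute SHAP value per feature.
report = (
    pd.DataFrame(
        {"feature": X.columns, "mean_abs_shap": np.abs(shap_values).mean(axis=0)}
    )
    .sort_values("mean_abs_shap", ascending=False)
)
report.to_csv("shap_attribution_report.csv", index=False)

# Companion visualization for the report package.
shap.summary_plot(shap_values, X_test, show=False)
```

For models without a tree structure, shap.KernelExplainer or the generic shap.Explainer interface would be the analogous entry point, at a higher compute cost.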
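The bias detection matrix can be computed directly from scored data with plain pandas. In the sketch below, the protected attribute names, the most-favoured-group reference, the four-fifths (0.8) disparate-impact threshold, and the severity cut-offs are all assumptions chosen for illustration, not values this workflow mandates.

```python
# A minimal sketch of a bias detection matrix across protected attributes.
# Attribute names, the 0.8 disparate-impact threshold, and the severity
# cut-offs below are illustrative assumptions.
import pandas as pd

def bias_matrix(scored: pd.DataFrame, pred_col: str, protected_attrs: list) -> pd.DataFrame:
    """Selection rate, disparate-impact ratio, and severity for each protected group."""
    rows = []
    for attr in protected_attrs:
        rates = scored.groupby(attr)[pred_col].mean()  # positive-prediction rate per group
        reference = rates.max()                        # most-favoured group as reference
        for group, rate in rates.items():
            ratio = rate / reference if reference > 0 else float("nan")
            severity = (
                "high" if ratio < 0.8          # below the four-fifths rule
                else "medium" if ratio < 0.9
                else "low"
            )
            rows.append(
                {"attribute": attr, "group": group, "selection_rate": rate,
                 "impact_ratio": ratio, "severity": severity}
            )
    return pd.DataFrame(rows)

# Illustrative usage with predictions already joined to the protected attributes.
scored = pd.DataFrame(
    {
        "gender": ["F", "F", "M", "M", "M", "F"],
        "age_band": ["<40", ">=40", "<40", ">=40", "<40", ">=40"],
        "prediction": [1, 0, 1, 1, 1, 0],
    }
)
print(bias_matrix(scored, "prediction", ["gender", "age_band"]))
```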
## Quality Gates
- Fairness evaluation is mandatory for all models before deployment
- Explainability artifacts must be produced for every production model
- Bias detection must cover all defined protected attributes
- Explainability methods must be validated for fidelity (a deletion-style check is sketched below)
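
One way to meet the fidelity gate is a deletion-style check: occluding the features an attribution method ranks highest should shift predictions more than occluding the lowest-ranked ones. The sketch below assumes a tabular model with a predict method and per-sample attributions (for example, the SHAP values from the attribution sketch above); the fidelity_gap helper and the mean-imputation perturbation are illustrative choices, not a prescribed method.

```python
# A minimal sketch of a deletion-style fidelity check: perturbing the features an
# attribution method ranks highest should change predictions more than perturbing
# the lowest-ranked ones. Helper name and mean-imputation are illustrative choices.
import numpy as np
import pandas as pd

def fidelity_gap(model, X: pd.DataFrame, attributions: np.ndarray, k: int = 3) -> float:
    """Mean |prediction change| from occluding top-k minus bottom-k attributed features."""
    ranked = np.abs(attributions).mean(axis=0).argsort()[::-1]  # most important first
    top_k, bottom_k = X.columns[ranked[:k]], X.columns[ranked[-k:]]
    baseline = model.predict(X)

    def occlude(cols):
        X_pert = X.copy()
        for col in cols:
            X_pert[col] = X[col].mean()  # mean-impute the occluded feature
        return np.abs(model.predict(X_pert) - baseline).mean()

    # A clearly positive gap indicates the explanations track what the model uses.
    return occlude(top_k) - occlude(bottom_k)

# Illustrative gate using the model and SHAP values from the attribution sketch above:
# assert fidelity_gap(model, X_test, shap_values) > 0, "explanations failed the fidelity check"
```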