Interpretability Analyst — Refactor Workflow

Description: Improve existing artifact structure and quality

When to Use

Use the refactor workflow when existing interpretability artifacts need structural or quality improvements, for example restructuring attribution reports or tightening fairness assessments, rather than regenerating them from scratch.

Input Requirements

  • Trained model artifacts and model cards from Model Architect
  • Experiment results from Experiment Scientist
  • Fairness evaluation criteria and protected attribute definitions
  • Regulatory transparency requirements and explainability standards
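The fairness criteria and protected attribute definitions above are typically passed in as structured configuration. A minimal sketch of what such an input might look like; every attribute name, metric, and threshold here is an illustrative assumption, not a fixed schema:

```python
# Hypothetical fairness-evaluation input for this workflow.
# Attribute names, metric names, and thresholds are all assumptions.
fairness_config = {
    "protected_attributes": {
        "sex": ["female", "male"],
        "age_group": ["under_40", "40_and_over"],
    },
    "metrics": {
        # Max allowed gap in positive-outcome rates between groups.
        "demographic_parity_difference": 0.10,
        # Max allowed gap in true-positive rates between groups.
        "equal_opportunity_difference": 0.10,
    },
    "reference_group": {"sex": "male", "age_group": "under_40"},
}


def required_metrics(config):
    """Return the metric names a fairness report must cover."""
    return sorted(config["metrics"])
```

Keeping criteria declarative like this lets the Validate step check reports mechanically against the same configuration the assessment was run with.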

Process

  1. Initialize — Set up the refactor context for the Interpretability Analyst
  2. Execute — Perform the refactor operation following the Interpretability Analyst's style
  3. Validate — Check output against quality gates
  4. Handoff — Deliver results to downstream personas

Output

  • SHAP/LIME feature attribution reports with visualizations
  • Fairness assessment reports across protected attributes
  • Bias detection matrices with severity classification
  • Explainability artifact packages for compliance and audit

Quality Gates

  • Fairness evaluation is mandatory for all models before deployment
  • Explainability artifacts must be produced for every production model
  • Bias detection must cover all defined protected attributes
  • Explainability methods must be validated for fidelity
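The last gate, validating explainability methods for fidelity, is commonly checked by measuring how often an interpretable surrogate agrees with the model it explains. A minimal sketch of that agreement check; the 0.9 gate threshold and the toy model/surrogate pair are assumptions for illustration:

```python
def fidelity(model_predict, surrogate_predict, inputs):
    """Fraction of inputs on which an explainability surrogate
    reproduces the model's prediction."""
    matches = sum(
        1 for x in inputs if model_predict(x) == surrogate_predict(x)
    )
    return matches / len(inputs)


# Toy check against an assumed 0.9 fidelity gate.
model = lambda x: int(x > 0)
surrogate = lambda x: int(x >= 0)  # disagrees with the model only at x == 0
xs = [-2, -1, 0, 1, 2]
score = fidelity(model, surrogate, xs)
passes_gate = score >= 0.9
```

A surrogate that fails this check should block the explainability artifact package from handoff, since its attributions would describe a model other than the one being deployed.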