Explainability
Explainability is the practice of making model decisions interpretable and understandable to humans. Common methods include SHAP (game-theoretic feature attribution), LIME (Local Interpretable Model-agnostic Explanations), attention visualization, feature importance, and model cards.
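As a concrete illustration of one of the methods above, here is a minimal sketch of permutation feature importance: shuffle one feature column at a time and measure how much the model's error grows. The toy model, data, and weights are hypothetical, chosen so the recovered importances can be sanity-checked by eye.

```python
import random

# Toy "model": a linear scorer with known (hypothetical) weights,
# so we can check the recovered importances against the truth.
def model(x):
    # feature 0 matters most; feature 2 is ignored entirely
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average rise in squared error
    after shuffling column j, over n_repeats shuffles."""
    rng = random.Random(seed)

    def mse(X, y):
        return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

    baseline = mse(X, y)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the feature-target link for column j
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            deltas.append(mse(X_perm, y) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

data_rng = random.Random(42)
X = [[data_rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # labels generated by the model itself
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 should dominate; feature 2 should be ~0
```

Unlike SHAP or LIME, this gives a single global importance per feature rather than a per-prediction explanation, but the underlying idea (perturb inputs, watch the output) is the same.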
Related
- Feature Importance (related technique)
- Bias and Fairness (explain to detect bias)