Robustness
Ensuring that models perform reliably under adversarial conditions and distribution shift. Covers adversarial examples (inputs crafted to fool a model), distribution shift (test data drawn from a different distribution than the training data), and out-of-distribution (OOD) detection.
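A minimal sketch of two of these ideas, assuming a PyTorch classifier with inputs scaled to [0, 1]: the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) to craft an adversarial example, and the maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017) as a simple OOD score. The function names and the toy model below are illustrative, not from any specific library.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM: perturb x in the direction that maximizes the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step by epsilon along the sign of the input gradient,
    # then clamp back to the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def msp_ood_score(model, x):
    """Maximum softmax probability: a low score suggests an OOD input."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values  # lower => more likely OOD

# Toy usage with an untrained classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
scores = msp_ood_score(model, x_adv)
```

The same pattern extends to robustness evaluation: measure accuracy on `x_adv` versus `x`, or threshold `msp_ood_score` to flag inputs the model should abstain on.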
Related
- Data Drift (distribution shift)
- AI Safety (robustness as a safety requirement)