Failure Modes in Machine Learning
YouTube videos curated by Class Central.
Classroom Contents
AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities
- 1 Intro
- 2 Customer Compromise via Adversarial ML-Case Study
- 3 Higher Order Bias/Fairness, Physical Safety & Reliability concerns stem from unmitigated Security and Privacy Threats
- 4 Adversarial Audio Examples
- 5 Failure Modes in Machine Learning
- 6 Adversarial Attack Classification
- 7 Data Poisoning: Attacking Model Availability
- 8 Data Poisoning: Attacking Model Integrity
- 9 Poisoning Model Integrity: Attack Example
- 10 Proactive Defenses
- 11 Threat Taxonomy
- 12 Adversarial Goals
- 13 A Race Between Attacks and Defenses
- 14 Ideal Provable Defense
- 15 Build upon the Details: Security Best Practices
- 16 Define lower/upper bounds of data input and output
- 17 Threat Modeling AI/ML Systems and Dependencies
- 18 Wrapping Up
- 19 AI/ML Pivots to the SDL Bug Bar