AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities

RSA Conference via YouTube

Data Poisoning: Attacking Model Availability (7 of 19)

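Availability-focused data poisoning, the topic of this segment, corrupts a portion of the training data so that the resulting model performs poorly across the board rather than on a specific target. The Python sketch below is a hypothetical illustration, not material from the talk: it assumes scikit-learn and NumPy are available and simulates an attacker who flips the labels of a random fraction of the training set, showing how test accuracy degrades as the poisoned fraction grows.

```python
# Hypothetical illustration of availability-style data poisoning via label flipping.
# Not taken from the RSA Conference talk; assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for poison_fraction in (0.0, 0.1, 0.3, 0.5):
    y_poisoned = y_train.copy()
    # The attacker flips the labels of a random subset of the training set.
    n_flip = int(poison_fraction * len(y_poisoned))
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction {poison_fraction:.0%}: test accuracy {acc:.3f}")
```

As the poisoned fraction rises, overall accuracy falls for every class, which is what distinguishes an availability attack from the integrity attacks covered in the next two segments.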

Classroom Contents

  1. Intro
  2. Customer Compromise via Adversarial ML - Case Study
  3. Higher Order Bias/Fairness, Physical Safety & Reliability Concerns Stem from Unmitigated Security and Privacy Threats
  4. Adversarial Audio Examples
  5. Failure Modes in Machine Learning
  6. Adversarial Attack Classification
  7. Data Poisoning: Attacking Model Availability
  8. Data Poisoning: Attacking Model Integrity
  9. Poisoning Model Integrity: Attack Example
  10. Proactive Defenses
  11. Threat Taxonomy
  12. Adversarial Goals
  13. A Race Between Attacks and Defenses
  14. Ideal Provable Defense
  15. Build upon the Details: Security Best Practices
  16. Define lower/upper bounds of data input and output (see the sketch after this list)
  17. Threat Modeling AI/ML Systems and Dependencies
  18. Wrapping Up
  19. AI/ML Pivots to the SDL Bug Bar

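Item 16 above names defining lower/upper bounds on data inputs and outputs as a security best practice. As a hedged illustration only, with feature names and bounds that are assumptions rather than anything from the talk, such a validation layer might look like this:

```python
# Hypothetical sketch of bounding model inputs and outputs (item 16 above).
# The feature names and ranges are illustrative assumptions, not from the talk.
import numpy as np

FEATURE_BOUNDS = {
    "age": (0.0, 120.0),
    "transaction_amount": (0.0, 10_000.0),
}

def validate_input(features: dict) -> dict:
    """Reject feature values that fall outside the declared bounds."""
    checked = {}
    for name, value in features.items():
        low, high = FEATURE_BOUNDS[name]
        if not (low <= value <= high):
            raise ValueError(f"{name}={value} outside allowed range [{low}, {high}]")
        checked[name] = value
    return checked

def validate_output(probability: float) -> float:
    """Model scores are expected to be probabilities; clamp anything else."""
    return float(np.clip(probability, 0.0, 1.0))

print(validate_input({"age": 42.0, "transaction_amount": 250.0}))
print(validate_output(1.7))  # clamped to 1.0
```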