Hacking AI - Security & Privacy of Machine Learning Models

Stanford Online via YouTube

Classroom Contents

  1. Introduction
  2. Machine Learning Pipeline
  3. Adversarial Examples
  4. Defenses
  5. Adversarial Examples
  6. Perceptual Ad Blocking
  7. Adversarial Noise
  8. Data Protection
  9. Differential Privacy
  10. Accuracy
  11. Privacy
  12. Differential Privacy Level
  13. Transfer Learning
  14. CTML
  15. Why Can't We Identify What the Data Said
  16. Measuring Resistance to Adversarial Attacks
  17. Quantum Computing
