Overview
Learn how to identify and understand incubated ML exploits in machine learning pipelines through this DEF CON 32 security talk. Explore how ML model backdoors arise from input-handling bugs in ML tools, examined through a language-theoretic security framework. Discover techniques for exploiting model serialization bugs in popular tools to construct backdoors, including the creation of malicious artifacts such as polyglot and ambiguous files. Gain insight into the development of Fickling, a pickle security tool designed specifically for ML use cases. Learn how system security issues can be chained with model vulnerabilities, and review guidelines for both security researchers and ML practitioners. Topics include parser differentials, safetensors, shotgun parsing, PyTorch polyglots, and hybrid ML exploits, along with the importance of taking a holistic approach to ML security.
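As background for the serialization attacks the talk covers, the following is a minimal sketch (not taken from the talk itself) of why pickle-based model formats are risky: unpickling invokes the callable returned by an object's `__reduce__` method, so loading an untrusted artifact can execute attacker-chosen code. The payload here is a harmless `eval` of an arithmetic expression, standing in for what could be any system call.

```python
import pickle

class MaliciousPayload:
    """Illustrative stand-in for a booby-trapped model artifact."""

    def __reduce__(self):
        # pickle serializes this as "call eval('1 + 1') on load".
        # An attacker could return (os.system, ("...",)) instead.
        return (eval, ("1 + 1",))

# Serializing the object embeds the callable + arguments in the blob.
blob = pickle.dumps(MaliciousPayload())

# Deserializing executes the embedded call -- no method of the original
# class is needed, only the bytes themselves.
result = pickle.loads(blob)
print(result)  # → 2
```

This is the core reason tools like Fickling inspect pickle bytecode before loading, and why non-code-executing formats such as safetensors exist, though, as the talk argues, swapping the format alone does not close every input-handling gap.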
Syllabus
Intro
Who am I
What is an ML Backdoor
Exploit Framework
Input Handling Bugs
ML Ecosystem Characters
Non-Minimalist Input Handling Code
Parser Differential
Safetensors
Shotgun Parsing
PyTorch Polyglots
Hybrid ML Exploit
Recommendations
Conclusion
Taught by
DEFCONConference