Explore how machine learning pipelines can be compromised with model backdoors in this 35-minute conference talk from BSidesLV. Delve into incubated ML exploits, in which attackers inject backdoors by abusing input-handling bugs in ML tools. Learn how ML model serialization bugs in popular tools can be systematically exploited to construct backdoors, including the development of malicious artifacts like polyglot and ambiguous files. Discover the contributions made to Fickling, a pickle security tool designed for ML use cases, and the guidelines formulated for security researchers and ML practitioners. Understand why incubated ML exploits represent a new class of threats: defending against them requires a comprehensive approach to ML security that combines system security issues with model vulnerabilities.
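The core mechanism behind pickle-based model serialization exploits can be sketched in a few lines. This is not code from the talk, just a minimal illustration of the well-known `__reduce__` behavior that tools like Fickling scan for; the class name is hypothetical:

```python
import pickle

# Minimal sketch of why pickle-based model files are dangerous:
# unpickling invokes arbitrary callables chosen by whoever wrote the file.
class MaliciousModel:
    def __reduce__(self):
        # pickle will call eval("40 + 2") at load time; a real attack would
        # instead run attacker code or patch the deserialized model's weights.
        return (eval, ("40 + 2",))

blob = pickle.dumps(MaliciousModel())  # looks like an ordinary model file
result = pickle.loads(blob)            # "loading" executes the payload
print(result)
```

Because the payload runs during deserialization itself, simply loading an untrusted model file is enough to be compromised, which is why the talk frames these as pipeline-level (system security) vulnerabilities rather than flaws in the model.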