Overview
Syllabus
Intro
Customer Compromise via Adversarial ML: Case Study
Higher-Order Bias/Fairness, Physical Safety & Reliability Concerns Stem from Unmitigated Security and Privacy Threats
Adversarial Audio Examples
Failure Modes in Machine Learning
Adversarial Attack Classification
Data Poisoning: Attacking Model Availability
Data Poisoning: Attacking Model Integrity
Poisoning Model Integrity: Attack Example
Proactive Defenses
Threat Taxonomy
Adversarial Goals
A Race Between Attacks and Defenses
Ideal Provable Defense
Build upon the Details: Security Best Practices
Define Lower/Upper Bounds of Data Input and Output
Threat Modeling AI/ML Systems and Dependencies
Wrapping Up
AI/ML Pivots to the SDL Bug Bar
Taught by: RSA Conference