
Stanford University

Hacking AI - Security & Privacy of Machine Learning Models

Stanford University via YouTube

Overview

Explore the intersection of cybersecurity and machine learning in this Stanford webinar featuring Professor Dan Boneh. Delve into "adversarial machine learning" and examine the stability of machine learning models when faced with adversarial behavior. Learn about machine learning pipelines, adversarial examples, and various defense mechanisms, including perceptual ad blocking and adversarial noise. Discover the importance of data protection through differential privacy and its impact on accuracy. Investigate transfer learning, CTML, and methods for measuring resistance to adversarial attacks. Gain insights into the challenges of identifying data sources and the potential implications of quantum computing on machine learning security.
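
To give a concrete sense of the "adversarial example" idea the webinar centers on, here is a minimal sketch (not taken from the talk) of the fast gradient sign method in PyTorch. The classifier `model`, input tensor `x`, and `label` are assumed placeholders; a real attack would be tuned to the specific model and data.

```python
# Minimal FGSM sketch: nudge an input x by a small amount epsilon in the
# direction that most increases the model's loss, so the perturbed input
# looks nearly identical to a human but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return x plus a worst-case perturbation of size epsilon per pixel."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the input gradient to maximize the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Defenses discussed in the webinar, such as adding adversarial noise during training, work by exposing the model to perturbations like this one so its decision boundaries become harder to exploit.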

Syllabus

Introduction
Machine Learning Pipeline
Adversarial Examples
Defenses
Adversarial Examples
Perceptual Ad Blocking
Adversarial Noise
Data Protection
Differential Privacy
Accuracy
Privacy
Differential Privacy Level
Transfer Learning
CTML
Why Can't We Identify What the Data Said
Measuring Resistance to Adversarial Attacks
Quantum Computing

Taught by

Stanford Online
