Are All Features Created Equal? - Aleksander Madry

Institute for Advanced Study via YouTube

Overview

Explore the intricacies of machine learning features in this thought-provoking lecture by Aleksander Madry from MIT. Delve into the success of deep learning and examine the key phenomenon of adversarial perturbations. Analyze machine learning through the lens of adversarial robustness, comparing human and ML model perspectives. Investigate the robust features model and its implications for data efficiency, perception alignment, and image synthesis. Discover how robustness can lead to better representations and explore the potential of counterfactual analysis with robust models. Gain insights into why adversarial examples arise from non-robust features in data and consider the broader implications for the field of machine learning.
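The adversarial perturbations discussed in the lecture can be illustrated with a minimal sketch. The toy linear classifier and its weights below are hypothetical (not from the talk): a perturbation that is tiny in every coordinate, aimed along the gradient of the model's score, is enough to flip the prediction.

```python
import numpy as np

# Toy linear classifier: predict sign(w @ x + b).
# Hypothetical weights, for illustration only.
rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=d)
b = 0.0

# An input the model classifies as positive:
x = 0.01 * np.sign(w) + rng.normal(scale=0.001, size=d)

# FGSM-style perturbation: step each coordinate against the
# gradient of the score (which, for a linear model, is just w),
# bounded by a small eps in the L-infinity norm.
eps = 0.02
x_adv = x - eps * np.sign(w)

# Each coordinate moves by at most eps, yet the prediction flips:
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))
```

This is the "ML perspective" in miniature: the model's decision depends on many small, individually meaningless correlations, so a coordinated nudge across all of them changes the output while leaving the input visually unchanged.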

Syllabus

Intro
Machine Learning: A Success Story
Why Do We Love Deep Learning?
Key Phenomenon: Adversarial Perturbations
ML via Adversarial Robustness Lens
But: "How"/"what" does not tell us "why"
Why Are Adv. Perturbations Bad?
Human Perspective
ML Perspective
A Simple Experiment
The Robust Features Model
The Simple Experiment: A Second Look
Human vs ML Model Priors
New capability: Robustification
Some Direct Consequences
Robustness and Data Efficiency
Robustness + Perception Alignment
Robustness → Better Representations
Robustness + Image Synthesis
Problem: Correlations can be weird
Useful tool(?): Counterfactual Analysis with Robust Models
Adversarial examples arise from non-robust features in the data

Taught by

Institute for Advanced Study
