Overview
Explore a 40-minute video lecture on adversarial examples in machine learning, presented by Yannic Kilcher. Delve into research that challenges the conventional view of these examples as mere bugs, instead proposing that they arise from inherent features of the data. Examine the theoretical framework of non-robust features, their prevalence in standard datasets, and the misalignment between human-specified notions of robustness and the geometry of the data. Gain insights into how adversarial examples are created, their implications for AI systems, and potential criticisms of this perspective.
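As background for the lecture, the classic recipe for creating an adversarial example is to nudge each input dimension in the direction that most increases the model's loss. Below is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression model; the model, weights, and inputs are illustrative assumptions, not taken from the paper discussed in the video.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic-regression model.

    Shifts input x by eps per dimension in the direction that
    increases the cross-entropy loss for the true label y (0 or 1).
    """
    # Model prediction: sigmoid(w . x + b)
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
    grad_x = (p - y) * w
    # Step along the sign of the gradient: a small, bounded change per
    # dimension that can still flip the model's decision
    return x + eps * np.sign(grad_x)

# Illustrative weights and a "clean" input classified as class 1
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, 0.2, 0.3])

def predict(x):
    return int((w @ x + b) > 0)

x_adv = fgsm_perturb(x, w, b, y=1, eps=0.6)
# The perturbation is at most 0.6 per dimension, yet the predicted
# class flips from 1 to 0.
```

The lecture's central claim reframes why such small perturbations work: the model is exploiting genuinely predictive but non-robust features of the data, not merely a quirk of training.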
Syllabus
Intro
What is an adversarial example
The fundamental idea
Feature definition
Experimental evidence
How do they create adversarial examples
Criticisms
Taught by
Yannic Kilcher