Overview
Explore a conference talk examining the robustness of Deep k-Nearest Neighbors (DkNN) as a defense against adversarial examples in machine learning. Delve into the challenges of evaluating DkNN's effectiveness and learn about a proposed heuristic attack that uses gradient descent to find adversarial examples for k-Nearest Neighbor (kNN) classifiers. Discover how this attack performs against both kNN and DkNN, with results suggesting it outperforms naive attacks on kNN and other attacks on DkNN. Gain insights into ongoing research in adversarial machine learning and the complexities of developing robust defense mechanisms.
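The attack described above hinges on making the kNN decision amenable to gradient descent. As a rough, hypothetical sketch of that general idea (not the speakers' exact formulation), the snippet below replaces the hard k-nearest-neighbor vote with a soft, distance-weighted surrogate and runs projected gradient descent on a perturbation that suppresses the query's true class; the function names, the temperature, and the L2 budget are all illustrative choices.

```python
import torch

def soft_knn_scores(x, train_x, train_y, num_classes, temp=1.0):
    """Differentiable surrogate for kNN class scores (illustrative only)."""
    # Squared L2 distances from the (perturbed) query to every training point.
    dists = ((train_x - x) ** 2).sum(dim=1)
    # Softmax over negative distances: nearer points get larger weights,
    # giving a smooth stand-in for the hard k-nearest-neighbor vote.
    weights = torch.softmax(-dists / temp, dim=0)
    # Accumulate the weight assigned to each class.
    return torch.zeros(num_classes).scatter_add(0, train_y, weights)

def attack_knn(x0, true_label, train_x, train_y, num_classes,
               steps=200, lr=0.05, eps=1.5):
    """Gradient-descent search for an adversarial perturbation of x0."""
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        scores = soft_knn_scores(x0 + delta, train_x, train_y, num_classes)
        loss = scores[true_label]        # drive the true class's weight down
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():            # keep the perturbation in an L2 ball
            norm = delta.norm()
            if norm > eps:
                delta.mul_(eps / norm)
    return (x0 + delta).detach()

# Toy usage: 200 random 2-D training points with 3 classes.
train_x = torch.randn(200, 2)
train_y = torch.randint(0, 3, (200,))
x_adv = attack_knn(torch.randn(2), true_label=0,
                   train_x=train_x, train_y=train_y, num_classes=3)
```

In a surrogate like this, the temperature controls how sharply the soft weights concentrate on the closest training points, trading off fidelity to the hard kNN vote against how useful its gradients are to the attacker.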
Syllabus
Introduction
Why Are We Interested
What is k-Nearest Neighbor
Mean Attack
Optimization Problem
Results
Attacking Deep k-Nearest Neighbor
DPS
Adversarial Input
Samples
Summary
Improvement
Questions
Taught by
IEEE Symposium on Security and Privacy