Adaptive Attacks to Adversarial Example Defenses - CAP6412 Spring 2021
University of Central Florida via YouTube
Overview
Explore a comprehensive lecture on adaptive attacks against adversarial example defenses, focusing on the techniques and strategies used to evaluate machine learning security. Delve into key concepts such as expectation over transformation (EOT), the k-winners-take-all (kWTA) activation, chaos partitioning, and local gradient estimation. Examine the role of noise, the Mixup Inference defense, and methods for attacking adversarial example defenses. Gain valuable insight into the latest developments in this critical area of artificial intelligence and cybersecurity.
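
As a rough illustration of the expectation over transformation idea mentioned above, the following minimal Python/PyTorch sketch averages the loss gradient over random, differentiable input transformations so that a randomized preprocessing step cannot mask the true attack direction. The model, random_transform, and parameter values here are illustrative placeholders under stated assumptions, not code from the lecture or the paper.

import torch

def eot_gradient(model, x, y, random_transform, n_samples=30):
    # Expectation over Transformation (EOT): average gradients over many
    # randomly sampled, differentiable transformations of the input, so the
    # estimate reflects the expected loss of a randomized defense.
    x = x.clone().detach().requires_grad_(True)
    loss_fn = torch.nn.CrossEntropyLoss()
    total_grad = torch.zeros_like(x)
    for _ in range(n_samples):
        loss = loss_fn(model(random_transform(x)), y)
        total_grad += torch.autograd.grad(loss, x)[0]
    return total_grad / n_samples

def pgd_step(model, x, y, random_transform, x_orig, step_size=0.01, eps=0.03):
    # One PGD-style ascent step that uses the EOT gradient estimate and
    # projects back into the eps-ball around the original input.
    grad = eot_gradient(model, x, y, random_transform)
    x_adv = x + step_size * grad.sign()
    x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)
    return torch.clamp(x_adv, 0.0, 1.0)
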
Syllabus
Paper details
Outline
Abstract
Introduction
Expectation over transformation
Notation
k-Winners-Take-All
Chaos partitioning
Attack on k-Winners-Take-All
Local gradient estimation (see the sketch after this outline)
The Odds are Odd
Noise
Other function
Mixup Inference
Attacking adversarial examples
Points
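
To accompany the local gradient estimation topic in the outline above, here is a minimal, hedged NumPy sketch of zeroth-order gradient estimation via random finite differences, the kind of technique used when a defense makes the model non-differentiable or obfuscates its gradients. The function names, step size, and sample count are illustrative assumptions rather than the lecture's own code.

import numpy as np

def estimate_gradient(loss_fn, x, sigma=1e-3, n_samples=50, rng=None):
    # Zeroth-order (finite-difference) gradient estimate of loss_fn at x.
    # loss_fn maps a numpy array to a scalar loss, e.g. the loss of a
    # defended model whose internals cannot be differentiated directly.
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x, dtype=np.float64)
    base = loss_fn(x)
    for _ in range(n_samples):
        # Probe a random Gaussian direction and measure the change in loss;
        # averaging these probes approximates the gradient of a smoothed loss.
        u = rng.standard_normal(x.shape)
        grad += (loss_fn(x + sigma * u) - base) / sigma * u
    return grad / n_samples

The resulting estimate can then be plugged into a standard iterative attack loop in place of an analytic gradient.
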
Taught by
UCF CRCV