Improving Transferability of Adversarial Examples with Input Diversity - CAP6412 Spring 2021
University of Central Florida via YouTube
Overview
Explore a lecture on improving the transferability of adversarial examples using input diversity in machine learning. Delve into the objectives, transformations, and related work in this field. Examine the methodology behind the family of Fast Gradient Sign Method (FGSM) attacks and the diverse input patterns method. Understand the relationships between the different approaches and learn how ensembles of networks are attacked. Review the experimental setup, including attacks on single networks, attacks on an ensemble of networks, and ablation studies. Gain insights from the NIPS 2017 Adversarial Competition and draw conclusions on the effectiveness of input diversity in enhancing adversarial example transferability.
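For readers who want a concrete picture of the method covered in the lecture, below is a minimal PyTorch sketch of a diverse-input transformation combined with one iterative FGSM step. The function names (diverse_input, di_fgsm_step) and the 299-to-330 resize-and-pad sizes are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, orig_size=299, resize_to=330, p=0.5):
    """With probability p, randomly resize x to a size in [orig_size, resize_to)
    and zero-pad it back to resize_to x resize_to; otherwise return x unchanged.
    Sizes follow the paper's ImageNet setup but are illustrative here."""
    if torch.rand(1).item() >= p:
        return x
    rnd = torch.randint(orig_size, resize_to, (1,)).item()
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = resize_to - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm_step(model, x, y, x_adv, eps, alpha):
    """One iterative FGSM step computed on the randomly transformed input,
    then projected back into the eps-ball around the clean image x."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(diverse_input(x_adv)), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + alpha * grad.sign()
    return torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0).detach()
```

Because a fresh random transformation is drawn at every gradient step, the perturbation is discouraged from overfitting the source model, which is the intuition behind the improved transferability discussed in the lecture.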
Syllabus
Introduction
Objectives
Transformations
Related Work
Methodology
Family of FGSM
Diverse Input Patterns Method
Relationships
Attacking Ensemble Networks
Experiment - Setup
Attacking Single Networks
Attacking an Ensemble of Networks
Ablation Studies
NIPS 2017 Adversarial Competition
Conclusion
Taught by
UCF CRCV