
University of Central Florida

Improving Transferability of Adversarial Examples with Input Diversity - CAP6412 Spring 2021

University of Central Florida via YouTube

Overview

Explore a lecture on improving the transferability of adversarial examples using input diversity in machine learning. Delve into the objectives, transformations, and related work in this field. Examine the methodology behind the Fast Gradient Sign Method (FGSM) family of attacks and the diverse input patterns technique. Understand the relationships between the different approaches and learn how ensembles of networks are attacked. Review the experimental setup, including attacks on single and ensemble networks, as well as ablation studies. Gain insights from the NIPS 2017 adversarial competition and draw conclusions on the effectiveness of input diversity in enhancing adversarial example transferability.
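The core technique covered in the lecture (diverse input patterns, DI-FGSM) applies a random resize-and-pad transformation to the input at each attack iteration before computing the gradient, and otherwise follows iterative FGSM. The sketch below illustrates that idea assuming a PyTorch image classifier; the function names, default sizes (299 resized up to 330), and hyperparameters are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, resize_max=330, p=0.5):
    """With probability p, randomly resize the batch and pad it back to
    resize_max x resize_max with zeros; otherwise return x unchanged."""
    if torch.rand(1).item() > p:
        return x
    orig = x.shape[-1]                      # assumes square inputs smaller than resize_max
    rnd = torch.randint(orig, resize_max, (1,)).item()
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad_total = resize_max - rnd
    pad_left = torch.randint(0, pad_total + 1, (1,)).item()
    pad_top = torch.randint(0, pad_total + 1, (1,)).item()
    return F.pad(resized,
                 (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top),
                 value=0)

def di_fgsm(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10, p=0.5):
    """Iterative FGSM where each gradient is taken on a randomly
    transformed copy of the current adversarial example.
    The model is assumed to accept variable spatial input sizes."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(input_diversity(x_adv, p=p))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

Setting p = 0 disables the transformation and recovers plain iterative FGSM, which is the relationship between the methods discussed in the lecture's "Relationships" section.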

Syllabus

Introduction
Objectives
Transformations
Related Work
Methodology
Family of FGSM
Diverse Input Patterns Method
Relationships
Attacking Ensemble Networks
Experiment - Setup
Attacking Single Networks
Attacking an Ensemble of Networks
Ablation Studies
NIPS 2017 Adversarial Competition
Conclusion

Taught by

UCF CRCV

