Explaining and Harnessing Adversarial Examples in Machine Learning - Spring 2021


UCF CRCV via YouTube



Classroom Contents



  1. Intro
  2. Overview
  3. Paper History and Authors
  4. Motivation
  5. Adversarial Examples for Linear Models
  6. Adversarial Examples for Non-Linear Models • Is it applicable to nonlinear models?
  7. Summarizing FGSM
  8. Experimental Results • FGSM attack on neural networks with different activation functions
  9. Adversarial Training (AT)
  10. FGSM Attack on a Logistic Regression Model
  11. Adversarial Training for a Logistic Regression Model
  12. L1 Regularization for a Logistic Regression Model • To prevent overfitting
  13. Adversarial Training vs. L1 Weight Decay • Training maxout networks on MNIST; good results using adversarial training with ε = 0.25
  14. Adversarial Training of DNNs
  15. Adversarially Trained Model
  16. Other Considerations
  17. Why Do Adversarial Examples Generalize?
  18. Generalization of Adversarial Examples
  19. Alternative Hypothesis
  20. Strengths
  21. Weaknesses
  22. Summary
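As a rough illustration of the material in chapters 7, 10, and 11 above (FGSM and its effect on a logistic regression model), here is a minimal NumPy sketch. This is not code from the lecture; the variable names and the toy weights are my own, and the loss is the standard logistic loss with labels in {-1, +1}:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method applied to logistic regression.

    Loss: log(1 + exp(-y * (w.x + b))), y in {-1, +1}.
    Its gradient w.r.t. the input x is -y * sigmoid(-y * (w.x + b)) * w,
    so the FGSM perturbation is eps * sign of that gradient.
    """
    grad_x = -y * sigmoid(-y * (w @ x + b)) * w
    return x + eps * np.sign(grad_x)

# Toy example (weights chosen by hand, not from the video).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])   # w.x + b = 1.0, so x is classified as +1
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.25)
# Each coordinate moves by eps against the sign of y*w,
# so the classification margin shrinks from 1.0 to 0.25.
print(w @ x + b, w @ x_adv + b)
```

Adversarial training (chapters 9, 11, 14) then mixes such perturbed inputs into the training objective, e.g. training on the loss at both `x` and `fgsm_attack(x, ...)`, which is the idea the ε = 0.25 MNIST result in chapter 13 refers to.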
