

Getting Robust - Securing Neural Networks Against Adversarial Attacks

University of Melbourne via YouTube

Overview

Explore the critical topic of securing neural networks against adversarial attacks in this 49-minute seminar presented by Dr. Andrew Cullen, Research Fellow in Adversarial Machine Learning at the University of Melbourne. Delve into the vulnerabilities of machine learning systems and learn how adversarial attacks can manipulate model outputs with perturbations that would not mislead a human. Gain insight into attack and defense strategies across different domains, and understand how to account for adversarial behavior in research and development work. Cover key concepts such as deep learning applications, deanonymization, the accuracy-robustness trade-off, certified robustness, differential privacy, and training-time attacks. Discover practical examples and methods, including polytope bounding and the use of test-time samples, that strengthen the security of neural networks.
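To make the central idea concrete: an adversarial attack adds a small, carefully chosen perturbation to an input so that the model's prediction changes even though a human would still read the input the same way. The sketch below shows the fast gradient sign method (FGSM), one of the simplest such attacks, in PyTorch; the untrained model and random "image" are placeholders for illustration only, not material from the seminar.

    # A minimal FGSM sketch. The model and input are random placeholders;
    # against a trained classifier, even a small epsilon often flips the label.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in classifier: 28x28 grayscale "images" -> 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    x = torch.rand(1, 1, 28, 28)   # placeholder input with pixels in [0, 1]
    y = torch.tensor([3])          # placeholder true label
    epsilon = 0.1                  # L-infinity perturbation budget

    # Take the loss gradient with respect to the INPUT, not the weights.
    x.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()

    # FGSM step: move each pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())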

Syllabus

Introduction
Meet Andrew
Deep Learning Applications
Adversarial Learning
Deanonymization
Tay
Simon Weckert
What is an adversarial attack?
Examples of adversarial attacks
Why adversarial attacks exist
Accuracy
Accuracy vs. Robustness
Adversarial Attacks
Adversarial Defense
Certified Robustness
Differential Privacy
Differential Privacy Equation (see the sketch after this list)
Other Methods
Example
Polytope Bounding
Test Time Samples
Training Time Attacks
Conclusion
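For orientation ahead of the talk, the "Differential Privacy Equation" chapter presumably refers to the standard (ε, δ)-differential privacy definition; the speaker may present a variant, so take this as a hedged reminder rather than a transcript of the slide. A randomized mechanism M is (ε, δ)-differentially private if, for all neighbouring datasets D and D' differing in a single record and all measurable output sets S:

    \[
      \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
    \]

Setting δ = 0 recovers pure ε-differential privacy; in certified robustness, the same style of bound can be applied over randomized inputs rather than datasets.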

Taught by

The University of Melbourne
