

Provable Robustness Beyond Bound Propagation

Simons Institute via YouTube

Overview

Explore the frontiers of deep learning in this 48-minute lecture by Zico Kolter from Carnegie Mellon University. Delve into the critical topic of provable robustness in deep learning systems, moving beyond traditional bound propagation techniques. Gain insights into adversarial attacks, their significance, and the concept of adversarial robustness. Examine the causes of adversarial examples and evaluate randomization as a potential defense mechanism. Discover the visual intuition behind randomized smoothing and understand its guarantees. Follow the proof of certified robustness, while considering important caveats. Compare the presented approach with previous state-of-the-art methods on CIFAR10 and assess its performance on ImageNet. Enhance your understanding of advanced deep learning concepts and their practical implications in this comprehensive talk from the Simons Institute's "Frontiers of Deep Learning" series.
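For readers who want a concrete feel for the randomized smoothing guarantee the lecture builds toward, the sketch below shows the standard Monte Carlo certification recipe. It is not the speaker's code: `base_classifier` is a hypothetical callable mapping an input array to a class label, and the empirical top-class frequency stands in for the high-confidence lower bound used in practice.

```python
# Minimal sketch of randomized smoothing certification (assumed setup, not the talk's code).
import numpy as np
from scipy.stats import norm


def certify(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Predict with the smoothed classifier and return a certified L2 radius.

    The smoothed classifier is g(x) = argmax_c P[f(x + noise) = c] with Gaussian
    noise N(0, sigma^2 I). If the top-class probability p_A is lower-bounded,
    g is provably constant within L2 radius sigma * Phi^{-1}(p_A).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Estimate class frequencies of the base classifier under Gaussian noise.
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1

    top_label, top_count = max(counts.items(), key=lambda kv: kv[1])
    p_a = top_count / n_samples  # in practice: a high-confidence lower bound on p_A

    if p_a <= 0.5:
        return top_label, 0.0  # no certificate; a real implementation would abstain
    radius = sigma * norm.ppf(p_a)
    return top_label, radius
```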

Syllabus

Intro
Adversarial attacks on deep learning
Why should we care?
Adversarial robustness
How do we strictly upper bound the maximization?
This talk
What causes adversarial examples?
Randomization as a defense?
Visual intuition of randomized smoothing
The randomized smoothing guarantee
Proof of certified robustness (cont)
Caveats (a.k.a. the fine print)
Comparison to previous SOTA on CIFAR10
Performance on ImageNet

Taught by

Simons Institute
