
YouTube

Deep Learning Robustness Verification for Few-Pixel Attacks

ACM SIGPLAN via YouTube

Overview

Explore a groundbreaking approach to verifying the robustness of neural networks against few-pixel attacks in this 18-minute video presentation from OOPSLA 2023. Delve into Calzone, developed by researchers from Technion, Israel, which offers the first sound and complete analysis of network robustness to L0 adversarial attacks. Learn how the technique leverages dynamic programming, sampling, and covering designs to verify robustness efficiently, typically completing within minutes. Discover how Calzone scales to challenging instances where existing methods fail, and gain insights into the importance of robustness verification in deep learning and its implications for building more secure and reliable neural networks.
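To make the covering-design idea behind few-pixel verification concrete, here is a minimal Python sketch; it is not the authors' implementation, only an illustration under simplifying assumptions. The `verify_block` callback is a hypothetical stand-in for an off-the-shelf verifier that checks the network with the chosen pixels left unconstrained (e.g., free in [0, 1]) and all other pixels fixed to the input image. The cover below is a simple partition-based construction; the idea is that if every block verifies, then no perturbation of at most t pixels can change the prediction.

```python
# Minimal sketch of covering-design-based L0 (few-pixel) robustness checking.
# NOT Calzone itself: `verify_block` is a hypothetical helper standing in for
# an existing L-infinity-style verifier invoked on one block of free pixels.

from itertools import combinations

def partition_cover(num_pixels: int, t: int, num_groups: int):
    """Simple covering design: split pixel indices into `num_groups` groups and
    take the union of every choice of t groups. Any set of t pixels touches at
    most t groups, so it is contained in at least one block of the cover."""
    assert num_groups >= t
    groups = [list(range(i, num_pixels, num_groups)) for i in range(num_groups)]
    for chosen in combinations(groups, t):
        yield sorted(p for g in chosen for p in g)

def verify_l0_robustness(image, label, t, num_groups, verify_block):
    """Check robustness against any attack that changes up to t pixels: if every
    block verifies (its pixels unconstrained, the rest fixed to `image`), then
    no t-pixel perturbation can change the predicted label."""
    for block in partition_cover(len(image), t, num_groups):
        if not verify_block(image, label, block):  # hypothetical verifier call
            return False  # this block could not be proven robust
    return True
```

In practice, the covering designs used for this kind of analysis are far tighter than this partition-based one, and the block size is chosen carefully (the talk describes using dynamic programming and sampling of the verifier's behavior) to balance the number of blocks against the difficulty of verifying each block.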

Syllabus

[OOPSLA23] Deep Learning Robustness Verification for Few-Pixel Attacks

Taught by

ACM SIGPLAN

