Overview
Explore a keynote presentation on adversarial attacks against deep optical flow networks and their implications for automated driving safety. Delve into the vulnerability of state-of-the-art optical flow estimation methods to adversarial attacks, focusing in particular on patch attacks that can significantly degrade performance. Examine the differing susceptibility of encoder-decoder and spatial pyramid network architectures. Learn about white-box, black-box, and real-world attack scenarios, as well as what they mean for self-driving technology. Gain insights into the importance of robust artificial intelligence in safety-critical applications and into challenges such as situational driving and data aggregation.
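To make the patch attacks mentioned above concrete, here is a minimal, hypothetical PyTorch sketch of optimizing a universal adversarial patch against an optical flow network. The names flow_net (any differentiable model mapping two frames to a flow field) and frame_pairs (an iterable of consecutive video frames) are assumed stand-ins, and the cosine-similarity objective, which pushes the attacked prediction away from the clean one, is one plausible choice rather than necessarily the exact loss used in the keynote.

    import torch
    import torch.nn.functional as F

    def patch_attack(flow_net, frame_pairs, patch_size=100, epochs=10, lr=1e-2):
        # Learnable patch, kept in the [0, 1] image range via clamping below.
        patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(epochs):
            for img1, img2 in frame_pairs:  # (B, 3, H, W) tensors in [0, 1]
                with torch.no_grad():
                    clean = flow_net(img1, img2)  # unattacked (B, 2, H, W) flow
                p = patch.clamp(0, 1)
                x1, x2 = img1.clone(), img2.clone()
                # Paste the patch into a fixed corner of both frames; random
                # placement and rotation would make the patch more robust.
                x1[..., :patch_size, :patch_size] = p
                x2[..., :patch_size, :patch_size] = p
                adv = flow_net(x1, x2)
                # Minimizing cosine similarity drives the attacked flow away
                # from (ideally opposite to) the clean prediction.
                loss = F.cosine_similarity(adv, clean, dim=1).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
        return patch.detach().clamp(0, 1)

In practice, such an optimized patch could then be printed and placed in the scene, which corresponds to the real-world attack scenario listed in the syllabus below.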
Syllabus
Intro
Collaborators
Self-Driving Must Be Robust
Situational Driving
Data Aggregation
Adversarial Attacks on Image Classification
Adversarial Attacks on Semantic Segmentation
Physical Adversarial Attacks
Robust Adversarial Attacks
Adversarial Patch Attacks
Low-Level Perception
Motion Estimation
Variational Optical Flow
Encoder-Decoder Networks
Spatial Pyramid Networks
Motivation
Attacking Optical Flow
White-Box Attacks
Black-Box Attacks
Real-World Attack
Zero-Flow Test
Summary
Taught by
Andreas Geiger