
YouTube

REDEEM MYSELF: Purifying Backdoors in Deep Learning Models Using Self Attention Distillation

IEEE via YouTube

Overview

Explore a 15-minute conference talk that delves into a novel approach for purifying backdoors in deep learning models. Learn about the Self Attention Distillation technique proposed by researchers from Wuhan University, Zhejiang University, and Xi'an Jiaotong University. Discover how this method, named REDEEM MYSELF, addresses the critical issue of backdoor attacks in deep learning systems. Gain insights into the potential applications and implications of this purification technique for enhancing the security and reliability of AI models.
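To make the general idea concrete, the sketch below shows what attention-distillation fine-tuning on a small clean dataset can look like in PyTorch. This is not the authors' REDEEM MYSELF implementation: the torchvision ResNet-18, the squared-activation attention maps, the use of the deepest layer as the "teacher", and the loss weight `beta` are all illustrative assumptions made for this example.

```python
# Illustrative sketch of attention-distillation fine-tuning on clean data.
# NOT the REDEEM MYSELF implementation: the model, layer choice, losses and
# hyperparameters below are assumptions chosen for a runnable example.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse features (N, C, H, W) into a normalized spatial attention
    map (N, H*W) by summing squared activations over channels."""
    a = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(a, p=2, dim=1)


def distill_step(model, feats, images, labels, optimizer, beta=1.0):
    """One fine-tuning step: cross-entropy on clean labels plus a term that
    pulls shallow-layer attention toward the deepest layer's attention
    (a self-distillation-style regularizer)."""
    optimizer.zero_grad()
    feats.clear()
    logits = model(images)                      # forward hooks fill `feats`
    ce = F.cross_entropy(logits, labels)
    target = attention_map(feats[-1]).detach()  # deepest layer acts as teacher
    size = feats[-1].shape[-2:]
    ad = sum(F.mse_loss(attention_map(F.adaptive_avg_pool2d(f, size)), target)
             for f in feats[:-1])
    loss = ce + beta * ad
    loss.backward()
    optimizer.step()
    return loss.item()


# Wiring (assumed): a possibly backdoored ResNet-18 and a few clean samples.
model = resnet18(num_classes=10)
feats = []
for layer in (model.layer2, model.layer3, model.layer4):
    layer.register_forward_hook(lambda m, i, o: feats.append(o))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
print(distill_step(model, feats, images, labels, optimizer))
```

In defenses of this style, fine-tuning on a small amount of clean data while forcing shallow attention to agree with deep attention tends to suppress trigger-sensitive features while preserving clean accuracy; consult the talk and paper for the concrete distillation targets and training schedule used by REDEEM MYSELF.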

Syllabus

REDEEM MYSELF: Purifying Backdoors in Deep Learning Models using Self Attention Distillation

Taught by

IEEE Symposium on Security and Privacy
