Overview
Explore a 15-minute conference talk on a novel approach to purifying backdoors in deep learning models. Learn about the Self Attention Distillation technique, named REDEEM MYSELF, proposed by researchers from Wuhan University, Zhejiang University, and Xi'an Jiaotong University. Discover how this method addresses the critical threat of backdoor attacks on deep learning systems, and gain insight into the potential applications and implications of this purification technique for enhancing the security and reliability of AI models.
Syllabus
REDEEM MYSELF: Purifying Backdoors in Deep Learning Models using Self Attention Distillation
Taught by
IEEE Symposium on Security and Privacy