PICCOLO - Exposing Complex Backdoors in NLP Transformer Models

IEEE Symposium on Security and Privacy, via YouTube

Classroom Contents

  1. Intro
  2. Backdoor Attacks on NLP Models
  3. Trigger Inversion is Highly Effective at Detecting Backdoors in Computer Vision
  4. Structure of NLP Transformers
  5. Challenge I: Input Space is Discrete, and NLP Models are not Differentiable to Input
  6. Proposal to Challenge I: Differentiable Model Transformation
  7. Challenge II: Token-Level Optimization cannot Reverse Engineer Complex Words with Multiple Tokens
  8. Proposal to Challenge II: Word-Level Inversion
  9. Overview
  10. Evaluation Setup
  11. Effectiveness
  12. Code Repo
