Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335

IEEE via YouTube

Overview

Explore a research presentation on prompt-specific poisoning attacks targeting text-to-image generative models. Delve into the "Nightshade" technique, which poisons a small number of training samples to corrupt a model's output for specific prompts. Gain insights into the vulnerabilities of popular image generation models and understand the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures discussed by researcher Shawn Shan in this IEEE conference talk.

Syllabus

Session 335: Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan

Taught by

IEEE Symposium on Security and Privacy
