Explore a presentation from USENIX Security '19 that examines a critical security issue: the unintended memorization of sensitive data in neural networks. Learn about a novel testing methodology for assessing the risk that rare or unique training-data sequences are memorized by generative sequence models. Discover the persistent nature of unintended memorization and its potentially serious consequences, including the extraction of secret sequences such as credit card numbers. Gain insights into practical defense strategies, such as those applied to Google's Smart Compose, that quantitatively limit data exposure in a commercial text-completion neural network trained on millions of users' email messages.
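To make the testing methodology concrete, here is a minimal sketch of an exposure-style measurement, assuming a hypothetical model.score(sequence) interface that returns a log-perplexity (lower means the model considers the sequence more likely). The idea: insert a random "canary" phrase into the training data, then after training, rank the canary's log-perplexity against every other candidate fill-in of the same format; a canary ranked far above chance has been memorized.

```python
import math

def exposure(model, canary, candidates):
    """Exposure-style score: log2(|candidate space|) - log2(rank of the canary),
    where rank is the canary's position when all candidates (the canary's
    format with every possible random fill-in, canary included) are sorted
    by log-perplexity, most likely first. Assumes a hypothetical
    model.score(seq) interface returning log-perplexity (lower = more likely)."""
    canary_lp = model.score(canary)
    # Rank 1 means the canary is the single most likely candidate.
    rank = 1 + sum(1 for c in candidates
                   if c != canary and model.score(c) < canary_lp)
    return math.log2(len(candidates)) - math.log2(rank)

# Example: a canary of the form "my pin is ####" has 10,000 possible fill-ins.
# An exposure near log2(10000) ~ 13.3 means the model ranks the true canary
# first among all candidates -- strong evidence of unintended memorization.
```

An exposure near zero means the inserted canary is no more likely under the model than chance, while an exposure approaching the log2 size of the candidate space means the secret can be extracted outright.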