Explore the critical gaps in differential privacy's effectiveness against sophisticated threats in interest-based advertising in this 10-minute conference talk from PEPR '24. Delve into privacy attacks and their role in assessing privacy guarantees, and examine the potential of large language models (LLMs) as formidable attackers. Analyze Google's Topics API, a pioneering effort to balance user privacy with advertising needs, to identify and quantify its vulnerability to re-identification and membership inference attacks. Discover how practical simulations expose edge cases and niche topics within the API that amplify re-identification risk. Learn how LLMs can be used to simulate attacks, revealing re-identification accuracy high enough to challenge the API's privacy claims. Gain insight into the pressing need for the privacy-enhancing technologies (PETs) community to evaluate the resilience of privacy mechanisms against LLM-driven threats, ensuring that technologies like the Topics API can withstand evolving digital privacy risks.
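To make the re-identification threat concrete, the toy simulation below links a user's topic observations across two sites by set similarity. It is a minimal sketch under simplified assumptions: the taxonomy size, topics-per-epoch count, noise rate, and user model are all illustrative choices, not the actual Topics API mechanism, and a real attacker (LLM-driven or otherwise) would exploit far richer signals.

```python
import random

random.seed(0)

# Illustrative parameters only -- NOT the real Topics API configuration.
NUM_TOPICS = 469      # size of a hypothetical topic taxonomy
TOPICS_PER_EPOCH = 3  # topics a site observes each epoch (assumption)
NOISE_PROB = 0.05     # chance a returned topic is uniformly random

def user_profile(num_interests=10):
    """A user's stable set of interest topics (hypothetical model)."""
    return random.sample(range(NUM_TOPICS), num_interests)

def observe(profile, epochs=8):
    """The set of topics a single site accumulates over several epochs."""
    seen = set()
    for _ in range(epochs):
        for _ in range(TOPICS_PER_EPOCH):
            if random.random() < NOISE_PROB:
                seen.add(random.randrange(NUM_TOPICS))  # noise topic
            else:
                seen.add(random.choice(profile))        # genuine interest
    return seen

def reidentify(target_obs, candidate_obs):
    """Guess which candidate matches the target, by Jaccard similarity."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(range(len(candidate_obs)),
               key=lambda i: jaccard(target_obs, candidate_obs[i]))

# Two sites independently observe the same 50 users.
profiles = [user_profile() for _ in range(50)]
site_a = [observe(p) for p in profiles]
site_b = [observe(p) for p in profiles]

correct = sum(reidentify(site_a[u], site_b) == u for u in range(50))
print(f"re-identified {correct}/50 users")
```

Even this crude matcher links most users across sites, because small interest sets drawn from a large taxonomy act as near-unique fingerprints; the talk's point is that niche topics sharpen this effect and that LLM attackers do substantially better than simple set overlap.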