Overview
Explore the privacy implications of using large language models (LLMs) in interactive settings through this 42-minute Google TechTalk presented by Niloofar Mireshghallah and Hyunwoo Kim from the University of Washington. Delve into a new set of inference-time privacy risks that arise when LLMs are fed information from multiple sources and expected to reason about what to share in their outputs. Examine the limitations of existing evaluation frameworks in capturing the nuances of these privacy challenges. Gain insights into future research directions for improved auditing of models for privacy risks and for developing more effective mitigation strategies.
Syllabus
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models
Taught by
Google TechTalks