Can LLMs Keep a Secret? Testing Privacy Implications of Language Models

Google TechTalks via YouTube

Overview

Explore the privacy implications of using large language models (LLMs) in interactive settings through this 42-minute Google TechTalk presented by Niloofar Mireshghallah and Hyunwoo Kim of the University of Washington. The talk introduces a new set of inference-time privacy risks that arise when LLMs receive information from multiple sources and must reason about what to share in their outputs. It examines the limitations of existing evaluation frameworks in capturing the nuances of these privacy challenges, and closes with future research directions for better auditing of models for privacy risks and for developing more effective mitigation strategies.

Syllabus

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models

Taught by

Google TechTalks

