
When Calibration Goes Awry: Hallucination in Language Models

Simons Institute via YouTube

Overview

Explore the phenomenon of hallucinations in language models in this lecture by Adam Kalai of OpenAI. Learn how calibration, a statistical property naturally encouraged during pre-training, can itself give rise to hallucinations. Examine how hallucination rates vary across domains and how they can be estimated with the Good-Turing estimator, with particular attention to notorious sources of hallucination such as paper titles. Gain insight into potential methods for mitigating hallucinations in AI language models. This hour-long talk, part of the Emerging Generalization Settings series at the Simons Institute, presents joint research with Santosh Vempala conducted while Kalai was at Microsoft Research New England.
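To make the Good-Turing connection concrete, here is a minimal illustrative sketch (not the talk's exact method): the Good-Turing estimate of the probability mass on unseen items is the fraction of observations whose item appears exactly once, which the lecture relates to the rate at which a calibrated model must hallucinate on arbitrary facts such as paper titles. The corpus below is hypothetical.

```python
from collections import Counter

def good_turing_missing_mass(observations):
    """Good-Turing estimate of the probability mass of unseen items:
    the fraction of observations whose item occurs exactly once."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

# Hypothetical corpus of arbitrary "facts" (e.g., paper titles in training data).
corpus = ["title_a", "title_b", "title_a", "title_c",
          "title_d", "title_b", "title_e"]

# Per the talk's argument, a calibrated model's hallucination rate on such
# facts is roughly lower-bounded by this missing-mass estimate.
print(f"Good-Turing missing mass: {good_turing_missing_mass(corpus):.2f}")
```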

Syllabus

When calibration goes awry: hallucination in language models

Taught by

Simons Institute
