Language Models as World Models? - Understanding Representations and Semantic Control

Simons Institute via YouTube

Overview

Explore whether language models can truly represent the world described in text in this talk by Jacob Andreas of MIT. Delve into recent research examining how transformer language models encode interpretable and controllable representations of facts and situations. Discover evidence from probing experiments that language model representations carry rudimentary information about entity properties and dynamic states, and that these representations influence downstream language generation. Examine the limitations of even the largest language models, including their tendency to hallucinate facts and contradict input text. Learn about REMEDI, a "representation editing" model designed to correct semantic errors by intervening directly in language model activations. Consider recent experiments revealing how difficult it is to access and manipulate language models' "knowledge" through simple probes. Gain insight into the ongoing challenges of building transparent and controllable world models for language generation systems.
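To make the probing and editing ideas in the talk concrete, here is a minimal, purely illustrative sketch, not the actual REMEDI method or any real model's activations. It trains a linear probe on synthetic "hidden states" that encode a binary entity property (e.g. whether a door is open), then edits a state along the probe direction so the decoded property flips. All dimensions, data, and scales are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hidden states": 64-dim vectors whose first coordinate loosely
# encodes a binary entity property (a stand-in for real LM activations).
d, n = 64, 500
labels = rng.integers(0, 2, size=n)
hidden = rng.normal(size=(n, d))
hidden[:, 0] += 2.0 * (labels - 0.5)  # inject the property signal

# --- Linear probe: logistic regression via plain gradient descent ---
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(hidden @ w + b)))
    w -= lr * (hidden.T @ (p - labels)) / n
    b -= lr * np.mean(p - labels)

acc = np.mean((hidden @ w + b > 0) == labels)
print(f"probe accuracy: {acc:.2f}")  # well above the 0.5 chance level

# --- "Representation edit": move one state along the probe direction ---
direction = w / np.linalg.norm(w)
state = hidden[np.argmin(hidden @ w + b)]  # most confidently property=0
target = 2.0                               # desired logit after the edit
alpha = (target - (state @ w + b)) / (direction @ w)
edited = state + alpha * direction

print("before edit:", bool(state @ w + b > 0))   # False
print("after edit: ", bool(edited @ w + b > 0))  # True
```

The edit here simply shifts the activation along the probe's decision direction until the probe reads the opposite property; REMEDI's actual intervention in transformer activations is learned and more involved, but this captures the basic idea of steering a representation to control what the model "believes."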

Syllabus

Language Models as World Models?

Taught by

Simons Institute
