How Can Large Language Models Learn From Humans? - Understanding Psycholinguistic Predictions and Sheaf Theory

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Watch a 49-minute lecture from the IPAM Naturalistic Approaches to Artificial Intelligence Workshop, in which University College London's Mehrnoosh Sadrzadeh explores how large language models can learn from human language processing. Delve into research that combines lexical predictions from LLMs with syntactic structures from dependency parsers using sheaf theory. Discover how this framework outperforms traditional surprisal measures when tested on garden path sentences, showing a stronger correlation with human reading patterns. Learn about the framework's ability to distinguish between easy and hard garden path sentences, and understand its potential implications for developing more human-like language understanding in AI systems. Explore how psycholinguistic insights about human language prediction at both syntactic and lexical levels can be leveraged to create more intuitive and precise language models.
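
The lecture uses traditional surprisal as its baseline: the negative log probability a language model assigns to a word given its preceding context. As background only (this sketch is not the lecture's sheaf-theoretic framework), here is a minimal illustration of computing per-token surprisal with a small causal language model via Hugging Face Transformers; the choice of `gpt2` and the example garden path sentence are illustrative assumptions.

```python
# Minimal sketch: per-token surprisal from a causal LM (illustrative; not the
# lecture's framework). Assumes the `transformers` and `torch` packages are
# installed; the model name "gpt2" and the sentence are arbitrary choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A classic garden path sentence: readers initially parse "raced" as the main verb.
sentence = "The horse raced past the barn fell."

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Surprisal of token t is -log2 P(token_t | tokens_<t); the first token has no prediction.
log_probs = torch.log_softmax(logits, dim=-1)
ids = inputs["input_ids"][0]
for i in range(1, ids.size(0)):
    surprisal = -log_probs[0, i - 1, ids[i]] / torch.log(torch.tensor(2.0))
    print(f"{tokenizer.decode([ids[i].item()])!r:>12}  {surprisal.item():.2f} bits")
```

Note that GPT-2 tokenizes into subword pieces, so word-level surprisal (as typically compared against human reading measures) would sum the surprisals of a word's subword tokens.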

Syllabus

Mehrnoosh Sadrzadeh - How can large language models learn from humans? - IPAM at UCLA

Taught by

Institute for Pure & Applied Mathematics (IPAM)

