How Can Large Language Models Learn From Humans? - Understanding Psycholinguistic Predictions and Sheaf Theory
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Watch a 49-minute lecture from the IPAM Naturalistic Approaches to Artificial Intelligence Workshop, in which University College London's Mehrnoosh Sadrzadeh explores how large language models can learn from human language processing. Delve into research that combines lexical predictions from LLMs with syntactic structures from dependency parsers using sheaf theory. Discover how this novel framework outperforms traditional surprisal measures when tested on garden path sentences, correlating more strongly with human reading patterns. Learn about the framework's ability to distinguish between easy and hard garden path sentences, and understand its potential implications for developing more human-like language understanding in AI systems. Explore how psycholinguistic insights about human prediction at both the syntactic and lexical levels can be leveraged to create more intuitive and precise language models.
Syllabus
Mehrnoosh Sadrzadeh - How can large language models learn from humans? - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)