Mission: Impossible - Testing Language Models on Unnatural Grammar Rules
USC Information Sciences Institute via YouTube
Overview
Watch a 58-minute research seminar presented by Stanford University PhD student Julie Kallini at the USC Information Sciences Institute, exploring whether large language models can truly learn both possible and impossible human languages. Examine experimental evidence challenging Chomsky's claims about LLMs, based on systematic testing of synthetic impossible languages created by altering English with unnatural word orders and grammar rules. Learn about the development of an impossibility continuum, ranging from inherently impossible languages such as random word shuffles to linguistically debatable cases involving positional counting rules. Discover how GPT-2 small models perform when learning these impossible languages compared to natural English, with detailed evaluation across different training stages. Gain insights into using LLMs as tools for cognitive and typological investigations in computational linguistics, along with the implications for model architecture and interpretability research.
Syllabus
Mission: Impossible Language Models
Taught by
USC Information Sciences Institute