Probabilistic Thinking in Language and Code
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Watch a 50-minute research lecture exploring the intersection of Bayesian cognitive models and Large Language Models (LLMs), delivered at UCLA's Institute for Pure & Applied Mathematics. Discover novel approaches to bridging probabilistic reasoning with both natural and programming languages as potential languages-of-thought for human-like representations. Examine a specialized class of Bayesian models integrated with LLMs, and see how these hybrid systems behave more like humans than either standalone LLMs or traditional Bayesian cognitive models. Learn about wake-sleep learning techniques for fine-tuning language models, which improve inductive reasoning by amortizing probabilistic inference. Presented by Cornell University researcher Kevin Ellis at the Naturalistic Approaches to Artificial Intelligence Workshop.
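To give a rough sense of the wake-sleep idea mentioned above, the toy Python sketch below illustrates amortized inference on a deliberately simple problem: the latent hypothesis is an integer threshold, the "recognition model" is just a count table keyed on a cheap data summary (standing in for a fine-tuned language model), and the function names (sleep_phase, wake_phase) are illustrative assumptions, not the speaker's actual system.

```python
import random
from collections import defaultdict, Counter

# Toy wake-sleep loop for amortized Bayesian inference (illustrative only).
# Latent hypothesis: an integer threshold t in 0..9.
# Data: numbers drawn uniformly from {t, ..., 9} ("examples of the concept >= t").

def prior():
    return random.randint(0, 9)

def sample_data(t, n=5):
    return tuple(random.randint(t, 9) for _ in range(n))

def likelihood(data, t):
    # Uniform over {t..9}: zero if any example falls below the threshold.
    if min(data) < t:
        return 0.0
    return (1.0 / (10 - t)) ** len(data)

# "Recognition model": a count table mapping a data summary (the minimum
# example) to a distribution over hypotheses; stands in for a fine-tuned LM.
q_counts = defaultdict(Counter)

def sleep_phase(steps=5000):
    # Dream from the generative model; train the recognition model on (data -> latent).
    for _ in range(steps):
        t = prior()
        data = sample_data(t)
        q_counts[min(data)][t] += 1

def wake_phase(data, n_proposals=20):
    # Propose hypotheses with the recognition model, then rescore by likelihood
    # (the prior is uniform here, so it drops out of the comparison).
    counts = q_counts[min(data)]
    if counts:
        proposals = random.choices(list(counts), weights=list(counts.values()), k=n_proposals)
    else:
        proposals = [prior() for _ in range(n_proposals)]
    scored = {t: likelihood(data, t) for t in set(proposals)}
    return max(scored, key=scored.get)

sleep_phase()
observed = (3, 5, 7, 9, 4)
print("inferred threshold:", wake_phase(observed))
```

The sleep phase plays the role of fine-tuning: the recognition model learns to map observations back to latent hypotheses using fantasy data from the generative model, so that at wake time inference reduces to cheap proposal-and-rescore rather than search from scratch.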
Syllabus
Kevin Ellis - Probabilistic Thinking in Language and Code - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)