Explore the groundedness of ungrounded Large Language Models (LLMs) in this 49-minute lecture by Trevor Darrell of UC Berkeley. Delve into the intersection of AI, psychology, and neuroscience to gain insight into higher-level intelligence. Examine the capabilities and limitations of current LLMs, and discover how these models compare to human cognition and understanding. Gain a deeper understanding of the challenges and potential advances in creating more grounded AI systems.
Overview
Syllabus
How grounded are ungrounded LLMs?
Taught by
Simons Institute