Overview
Explore the limitations of large language models (LLMs) in this DS4DM Coffee Talk presented by Sarath Chandar of Polytechnique Montréal, Canada. Examine what it means to use LLMs as task solvers: what kinds of knowledge they can encode and how effectively they use that knowledge on downstream tasks. Investigate how susceptible LLMs are to catastrophic forgetting when learning multiple tasks, and learn about methods for identifying and removing the biases encoded in these models. Gain an overview of the research projects addressing these questions, shedding light on the current limitations of LLMs and offering insights into building more intelligent systems.
Syllabus
Limitations of Large Language Models, Sarath Chandar
Taught by
GERAD Research Center