Explore the intricacies of knowledge representation in pretrained language models in this 57-minute conference talk by Antoine Bosselut at the Center for Language & Speech Processing (CLSP), JHU. Delve into methods for simulating machine reasoning by localizing and modifying parametric knowledge representations. Discover techniques for uncovering knowledge-critical subnetworks within pretrained language models, and learn about RECKONING, a bi-level optimization procedure for dynamically encoding and reasoning over knowledge. Gain insight into the challenges and future directions of using internal model mechanisms for reasoning. Bosselut, an assistant professor at EPFL with prior experience at Stanford University and the Allen Institute for AI, brings expertise in commonsense representation and reasoning to this exploration of advanced NLP concepts.
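To give a flavor of the bi-level setup the talk describes, here is a minimal, heavily simplified sketch: an inner loop takes a gradient step that "encodes" a piece of knowledge into a parameter, and an outer loop differentiates a downstream reasoning loss through that inner step. This is a toy illustration only — RECKONING operates on transformer weights, and the scalar parameter, losses, and learning rates below are all invented for the example.

```python
# Toy sketch of bi-level optimization in the spirit of RECKONING.
# Assumption: the real method adapts language-model weights; here we use a
# single scalar parameter so the inner/outer structure is easy to follow.

def inner_step(theta, knowledge, alpha=0.1):
    """Inner loop: one gradient step that 'encodes' knowledge into theta.
    Inner loss (phi - knowledge)^2 has gradient 2*(phi - knowledge)."""
    return theta - alpha * 2.0 * (theta - knowledge)

def outer_grad(theta, knowledge, target, alpha=0.1):
    """Outer loop: gradient of the reasoning loss (phi - target)^2,
    differentiated *through* the inner update phi(theta)."""
    phi = inner_step(theta, knowledge, alpha)
    dphi_dtheta = 1.0 - 2.0 * alpha  # derivative of the inner step w.r.t. theta
    return 2.0 * (phi - target) * dphi_dtheta

def train(theta=0.0, knowledge=1.0, target=3.0, lr=0.05, steps=500):
    """Outer optimization: update theta so that, after knowledge encoding,
    the adapted parameter solves the (toy) reasoning objective."""
    for _ in range(steps):
        theta -= lr * outer_grad(theta, knowledge, target)
    return inner_step(theta, knowledge)  # parameter after encoding knowledge

adapted = train()
```

The key point the sketch preserves is that the outer update accounts for how the inner knowledge-encoding step transforms the parameters, rather than optimizing the base parameters for the reasoning loss directly.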
From Mechanistic Interpretability to Mechanistic Reasoning
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Syllabus
From Mechanistic Interpretability to Mechanistic Reasoning - Antoine Bosselut
Taught by
Center for Language & Speech Processing (CLSP), JHU