Explore a mathematics seminar presentation by Georgia Tech's Leyan Pan examining the logical reasoning capabilities of Transformer-based Large Language Models (LLMs) through the lens of SAT solving. Delve into the investigation of Boolean reasoning in decoder-only Transformers using Chain-of-Thought methodology, specifically their capacity to decide 3-SAT instances within bounded parameters. Learn about the formal expressiveness of Transformer models, the correspondence between Chain-of-Thought reasoning and the DPLL SAT-solving algorithm, and how 3-SAT formulas and partial assignments can be encoded as vectors and manipulated through attention mechanisms. Examine experimental results supporting the theoretical predictions, along with the limitations of standard Transformers on unbounded-length reasoning problems and potential ways to overcome these constraints.
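For orientation, the DPLL algorithm the talk relates to Chain-of-Thought reasoning can be sketched as follows. This is a minimal textbook rendition (unit propagation plus branching with backtracking), not the talk's Transformer encoding; the clause representation (lists of signed integers) is an assumption for illustration.

```python
def dpll(clauses, assignment=None):
    """Decide satisfiability of a CNF formula via DPLL.
    Clauses are lists of nonzero ints; a negative int is a negated variable.
    Returns a satisfying (possibly partial) assignment dict, or None if UNSAT."""
    if assignment is None:
        assignment = {}

    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied, drop it
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None  # clause falsified: conflict, backtrack
        simplified.append(remaining)

    if not simplified:
        return assignment  # every clause satisfied

    # Unit propagation: a one-literal clause forces its variable's value.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})

    # Branch on the first unassigned variable, trying True then False.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

The trace of simplifications, propagations, and backtracks this recursion produces is exactly the kind of step-by-step derivation that a Chain-of-Thought transcript can mirror, which is the equivalence the talk develops.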
Syllabus
Leyan Pan | Can Transformers Reason Logically? A Study in SAT-Solving
Taught by
Harvard CMSA