Scaling Laws of Formal Reasoning in Large Language Models - Lecture 7
MICDE University of Michigan via YouTube
Overview
Explore critical advances in improving the formal reasoning abilities of Large Language Models (LLMs) for scientific applications in this 28-minute conference talk. Delve into two key research directions: Llemma, a foundation model designed specifically for mathematics, and "easy-to-hard" generalization. Learn how Llemma leverages the extensive Proof-Pile-2 corpus to improve the relationship between training compute and reasoning ability, yielding significant accuracy gains. Discover how training strong evaluator models on easier problems can enable generalization to more complex ones. Gain insight into the importance of scaling high-quality data collection and of further algorithmic development for enhancing formal reasoning in LLMs.
Syllabus
07. SciFM24 Sean Welleck: Scaling Laws of Formal Reasoning
Taught by
MICDE University of Michigan