How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad

Harvard CMSA via YouTube

Overview

Explore a 52-minute seminar from Harvard CMSA's New Technologies in Mathematics series in which EPFL researcher Aryo Lotfi examines the fundamental limitations and capabilities of Transformer models on reasoning tasks. Learn about the concept of 'globality degree' and its role in determining what neural networks can efficiently learn from scratch. Discover why Transformers struggle to compose long chains of syllogisms despite being Turing-complete in expressivity, and understand three key findings on scratchpad approaches: the limitations of agnostic scratchpads, the potential and drawbacks of educated scratchpads, and the ability of inductive scratchpads to break the globality barrier while improving out-of-distribution generalization. Gain insight into how specific inductive scratchpad implementations, paired with appropriate input formatting, achieve up to 6x length generalization on arithmetic tasks.
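
This listing doesn't show the scratchpad format used in the talk, but the core idea of an inductive scratchpad is that every reasoning step is written in the same fixed shape, so each step can be produced from the state recorded at the previous step alone, and the learned local rule can be iterated beyond the lengths seen in training. Below is a minimal Python sketch of what such a format might look like for multi-digit addition; the `i=`/`c=`/`r=` fields and the `#`/`;` separators are illustrative assumptions, not the notation from the paper or talk.

```python
def inductive_addition_scratchpad(a: int, b: int) -> str:
    """Build a training string for a + b whose scratchpad is a sequence
    of identically shaped steps, one per digit (least significant first).
    Each step rewrites the full state (position, carry, partial result),
    so the next step depends only on the previous one -- the inductive
    structure associated with length generalization."""
    # Digits in least-significant-first order, zero-padded to equal length.
    xs = [int(d) for d in str(a)[::-1]]
    ys = [int(d) for d in str(b)[::-1]]
    n = max(len(xs), len(ys))
    xs += [0] * (n - len(xs))
    ys += [0] * (n - len(ys))

    steps, carry, out_digits = [], 0, []
    for i in range(n):
        carry, digit = divmod(xs[i] + ys[i] + carry, 10)
        out_digits.append(digit)
        # Every step has the same shape: index, current carry, partial result.
        partial = "".join(map(str, out_digits[::-1]))
        steps.append(f"i={i} c={carry} r={partial}")
    if carry:
        out_digits.append(carry)

    answer = "".join(map(str, out_digits[::-1]))
    return f"{a}+{b} # " + " ; ".join(steps) + f" # {answer}"

print(inductive_addition_scratchpad(457, 68))
# 457+68 # i=0 c=1 r=5 ; i=1 c=1 r=25 ; i=2 c=0 r=525 # 525
```

Because each step rewrites the complete carry-and-partial-result state in the same format, a model only needs to learn the local single-digit rule and repeat it, which is consistent with the length generalization described above.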

Syllabus

Aryo Lotfi | How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad

Taught by

Harvard CMSA

