LLM Reasoning Limitations: Understanding Linear Order and Logical Hallucinations
Discover AI via YouTube
Overview
Explore a 16-minute research presentation examining findings from Google DeepMind and Stanford University about the limitations of current Large Language Models in causal reasoning and logic. Delve into how human reasoning processes, and their limitations, become embedded in LLMs through training on human conversations across online platforms. Learn why AGI remains distant as the presentation breaks down the challenges of rule hallucinations, factual inaccuracies, and limitations in linear, sequential reasoning. Examine detailed findings from the February 2024 paper "Premise Order Matters in Reasoning with Large Language Models," which demonstrates how LLMs inherit human mathematical and logical constraints. Progress through key topics including linear-order reasoning, premise order sensitivity, mathematical reasoning capabilities, key insights, and the phenomenon of logical hallucinations in modern AI systems such as Gemini Pro and GPT-4 Turbo.
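The premise-order sensitivity discussed in the presentation can be illustrated with a small test harness that permutes the premises of a fixed deduction problem and checks whether the model's answer changes. The sketch below is a minimal illustration only, not the paper's evaluation code; query_llm is a hypothetical placeholder to be replaced with whatever chat-completion client you use (e.g. for Gemini Pro or GPT-4 Turbo).

```python
import itertools

# Hypothetical stand-in for a chat-completion call; replace with your
# provider's SDK. This is NOT from the paper or the video.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

# Premises of a simple deduction task; the conclusion follows regardless
# of the order in which the premises are listed.
premises = [
    "If it rains, the ground gets wet.",
    "If the ground gets wet, the match is cancelled.",
    "It is raining.",
]
question = "Is the match cancelled? Answer yes or no."

# Query the model once per premise ordering and record each answer.
answers = {}
for order in itertools.permutations(range(len(premises))):
    prompt = "\n".join(premises[i] for i in order) + "\n" + question
    answers[order] = query_llm(prompt).strip().lower()

# A premise-order-robust reasoner would return the same answer for every
# permutation; the paper reports sizable accuracy drops when premises are
# shuffled away from the natural proof order.
consistent = len(set(answers.values())) == 1
print("Consistent across premise orders:", consistent)
```

Running such a harness over many problems and orderings is, in spirit, how one can measure the order sensitivity the paper describes, though the authors' actual benchmarks and prompts differ.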
Syllabus
Intro
Linear order of reasoning
Sensitive to premise order
Maths reasoning
Insights
Logical hallucinations
Taught by
Discover AI