

LLM Reasoning Limitations: Understanding Linear Order and Logical Hallucinations

Discover AI via YouTube

Overview

Explore a 16-minute research presentation examining findings from Google DeepMind and Stanford University on the inherent limitations of current Large Language Models in causal reasoning and logic. Delve into how human reasoning processes, and their limitations, become embedded in LLMs through training on human conversations across online platforms. Learn why AGI remains distant as the presentation breaks down the challenges of rule hallucinations, factual inaccuracies, and limitations in linear, sequential understanding. Examine detailed findings from the February 2024 paper "Premise Order Matters in Reasoning with Large Language Models," which demonstrates how LLMs inherit human mathematical and logical constraints. Progress through key topics including linear-order reasoning, premise order sensitivity, mathematical reasoning capabilities, key insights, and the phenomenon of logical hallucinations in modern AI systems such as Gemini Pro and GPT-4 Turbo.
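
For readers who want to probe premise order sensitivity themselves, the following is a minimal Python sketch that permutes the premises of a toy logic puzzle and asks the same question under each ordering. The puzzle and the query_model stub are illustrative placeholders (they are not taken from the paper or the video); swap in whichever model interface you actually use.

from itertools import permutations

# Toy modus ponens chain; an order-insensitive reasoner should answer the
# question identically no matter how these premises are shuffled.
premises = [
    "If Alice is at the park, then Bob is at the library.",
    "If Bob is at the library, then Carol is at home.",
    "Alice is at the park.",
]
question = "Is Carol at home? Answer yes or no."

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; replace with your own client.
    return "yes"

for order in permutations(premises):
    prompt = " ".join(order) + " " + question
    answer = query_model(prompt)
    print(f"forward order: {order == tuple(premises)} -> answer: {answer}")

Comparing answers across permutations is the simplest way to see the effect the paper reports: the logical content never changes, only its linear order does.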

Syllabus

Intro
Linear order of reasoning
Sensitive to premise order
Maths reasoning
Insights
Logical hallucinations

Taught by

Discover AI

