Inside the Black Box of AI Reasoning - Understanding LLM Knowledge and Complex Problem-Solving
Discover AI via YouTube
Overview
Explore a 32-minute video presentation that examines the reasoning mechanisms of Large Language Models (LLMs) through a novel graph-based framework. Learn about DEPTHQA, a dataset that decomposes complex questions into hierarchical sub-questions across three knowledge levels: factual/conceptual, procedural, and strategic.

Discover how researchers measure LLM performance using forward and backward discrepancies to identify gaps in knowledge integration and reasoning capability. Examine the relationship between model size and reasoning ability, with findings showing that larger models integrate knowledge more effectively across complexity levels, and see how a "predict solution" strategy enhances self-referential consistency and depth in AI reasoning.

Based on the paper "Investigating How Large Language Models Leverage Internal Knowledge to Perform Complex Reasoning," the presentation offers insights into structured reasoning paths and the iterative, context-aware processing that improves LLM effectiveness in complex problem-solving scenarios.
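The discrepancy metrics mentioned above can be illustrated with a minimal sketch. Assuming (as the video describes) that each complex question is linked to simpler sub-questions, forward discrepancy captures cases where the sub-questions are answered correctly but the complex question is missed, while backward discrepancy captures the reverse. The function below is an illustrative approximation, not the paper's actual implementation; its names and exact scoring are assumptions.

```python
def discrepancies(sub_correct, target_correct):
    """Sketch of forward/backward discrepancy for one target question.

    sub_correct: list of booleans, correctness on the simpler sub-questions
    target_correct: boolean, correctness on the complex target question
    """
    solved_ratio = sum(sub_correct) / len(sub_correct)

    # Forward discrepancy: sub-questions solved, but the complex question
    # missed -- suggests the model failed to integrate its knowledge upward.
    forward = solved_ratio if not target_correct else 0.0

    # Backward discrepancy: complex question solved despite missed
    # sub-questions -- may indicate shortcut answers rather than reasoning.
    backward = (1 - solved_ratio) if target_correct else 0.0

    return forward, backward
```

For example, a model that answers both sub-questions but fails the target question would score a forward discrepancy of 1.0 and a backward discrepancy of 0.0 under this sketch.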
Syllabus
Inside the Black Box of AI Reasoning
Taught by
Discover AI