

Unlocking Reasoning in Large Language Models - Conf42 ML 2023

Conf42 via YouTube

Overview

Explore the intricacies of reasoning in large language models in this conference talk. Delve into techniques for eliciting and measuring reasoning abilities, including chain-of-thought prompting, program-aided language models, and plan-and-solve prompting. Discover approaches such as self-taught reasoners (STaR), specializing smaller models for multi-step reasoning, and recursive and iterative prompting methods. Learn about tool usage, the ReAct (Reason and Act) framework, and the Chameleon model. Gain insight into the current state and future potential of reasoning in AI language models, with practical examples and further reading recommendations provided.
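
As a taste of the chain-of-thought prompting technique covered in the talk, here is a minimal illustrative sketch. It assumes a hypothetical generate(prompt) helper standing in for whatever language-model API you use; the worked example and the "Let's think step by step" cue follow the general pattern described in the chain-of-thought literature, not code from the talk itself.

```python
# Minimal chain-of-thought prompting sketch (illustrative only).
# `generate` is a hypothetical placeholder, not a real library call.

def generate(prompt: str) -> str:
    """Placeholder: swap in a call to your language model of choice."""
    raise NotImplementedError("Wire this up to an LLM completion API.")

# One worked example that shows intermediate reasoning steps (the "chain of
# thought"), followed by the new question we actually want answered.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A: Let's think step by step."""

def answer_with_cot(question: str) -> str:
    # Prompting the model to write out its reasoning before the final answer
    # tends to help on multi-step arithmetic and logic problems.
    prompt = FEW_SHOT_COT.format(question=question)
    return generate(prompt)

if __name__ == "__main__":
    print(answer_with_cot(
        "A cafeteria had 23 apples. It used 20 and bought 6 more. How many apples are left?"
    ))
```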

Syllabus

Intro
Preface
About Logesh
Agenda
What is Reasoning?
How is Reasoning Measured in the Literature?
Eliciting Reasoning
Chain-of-Thought Prompting and Self-Consistency
Program-Aided Language Models
Plan-and-Solve Prompting
STaR: Self-Taught Reasoner Bootstrapping Reasoning with Reasoning
Specializing Smaller Language Models Towards Multi-Step Reasoning
Distilling Step-by-Step
Recursive and Iterative Prompting
Least-to-Most Prompting
Plan, Eliminate, and Track
Describe, Explain, Plan and Select
Tool Usage
ReAct: Reason and Act
Chameleon
Acknowledgement & Further Reading

Taught by

Conf42
