
YouTube

Language Models as Zero-Shot Planners - Extracting Actionable Knowledge for Embodied Agents

Yannic Kilcher via YouTube

Overview

Explore a comprehensive video lecture and interview on using large language models as zero-shot planners for embodied agents. Delve into the VirtualHome environment and learn how to translate unstructured language model outputs into structured grammar for interactive environments. Discover techniques for decomposing high-level tasks into actionable steps without additional training. Examine the challenges of plan evaluation and execution, and understand the contributions of this research. Gain insights from the interview with first author Wenlong Huang, covering topics such as model size impact, output refinement, and the effectiveness of Codex. Analyze experimental results and consider future implications for extracting actionable knowledge from language models in embodied AI applications.
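
To make the translation idea concrete, here is a minimal sketch of mapping a free-form language-model plan step to the closest admissible environment action via sentence-embedding similarity. This is an illustrative assumption, not the authors' exact pipeline: the `sentence-transformers` library, the `all-MiniLM-L6-v2` model, and the hand-written action list below are all stand-ins chosen for the example.

```python
# Sketch: translate an unstructured plan step into an admissible action
# by picking the action whose embedding is most similar (illustrative only).
from sentence_transformers import SentenceTransformer, util

# Hypothetical set of actions an environment such as VirtualHome could execute.
ADMISSIBLE_ACTIONS = [
    "walk to kitchen",
    "open fridge",
    "grab milk",
    "close fridge",
    "put milk on table",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
action_embeddings = embedder.encode(ADMISSIBLE_ACTIONS, convert_to_tensor=True)

def translate_step(free_form_step: str) -> str:
    """Map one generated plan step to the closest admissible action."""
    step_embedding = embedder.encode(free_form_step, convert_to_tensor=True)
    scores = util.cos_sim(step_embedding, action_embeddings)[0]
    return ADMISSIBLE_ACTIONS[int(scores.argmax())]

# Example: a raw step produced by a large language model.
print(translate_step("Go over to the refrigerator and open it"))  # -> "open fridge"
```

In the talk, this kind of similarity-based translation is what turns free-form model output into steps the interactive environment can actually execute.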

Syllabus

- Intro & Overview
- The VirtualHome environment
- The problem of plan evaluation
- Contributions of this paper
- Start of interview
- How to use language models with environments?
- How much does model size matter?
- How to fix the large models' outputs?
- Possible improvements to the translation procedure
- Why does Codex perform so well?
- Diving into experimental results
- Future outlook

Taught by

Yannic Kilcher

