
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy

Launchpad via YouTube

Overview

Discover the groundbreaking LLaRA framework in this 16-minute video presentation by the Fellowship.ai team. Delve into the innovative approach of enhancing robotic action policy through Large Language Models (LLMs) and Vision-Language Models (VLMs). Learn how LLaRA formulates robot actions as conversation-style instruction-response pairs and improves decision-making by incorporating auxiliary data. Explore the process of training VLMs with visual-textual prompts and the automated pipeline for generating high-quality robotics instruction data from existing behavior cloning datasets. Gain insights into how this framework enables optimal policy decisions for robotic tasks, showcasing state-of-the-art performance in both simulated and real-world environments. Access the code, datasets, and pretrained models on GitHub to further your understanding of this cutting-edge AI innovation in robot learning.
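To make the core idea concrete, here is a minimal sketch of what "formulating robot actions as conversation-style instruction-response pairs" could look like. This is a hypothetical illustration, not code from the LLaRA repository: the `<image>` placeholder token, the prompt wording, and the `to_instruction_pair` helper are all assumptions for demonstration.

```python
def to_instruction_pair(task, object_xy, target_xy):
    """Turn one behavior-cloning step into an instruction/response pair.

    Coordinates are assumed to be normalized image coordinates in [0, 1].
    The action is rendered as text so a VLM can be instruction-tuned on it
    like an ordinary visual question-answering example.
    """
    instruction = (
        f"<image>\nThe task is: {task}. "
        "What action should the robot take next?"
    )
    response = (
        f"Move the gripper from ({object_xy[0]:.2f}, {object_xy[1]:.2f}) "
        f"to ({target_xy[0]:.2f}, {target_xy[1]:.2f})."
    )
    return {"instruction": instruction, "response": response}


# Example: one step of a pick-and-place demonstration rewritten as a
# conversation turn for visual-textual instruction tuning.
pair = to_instruction_pair("pick up the red block", (0.31, 0.62), (0.75, 0.40))
print(pair["instruction"])
print(pair["response"])
```

An automated pipeline in this spirit would iterate over every step of an existing behavior cloning dataset, pairing each camera frame with such a textual instruction and action response.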

Syllabus

Fellowship: LLaRA, Supercharging Robot Learning Data for Vision-Language Policy

Taught by

Launchpad
