
How to Steer Foundation Models - Techniques for Optimizing Language and Image Tasks

Harvard CMSA via YouTube

Overview

Learn how to effectively steer and control foundation models and large language models (LLMs) in this 59-minute seminar presentation from the Harvard CMSA New Technologies in Mathematics series. University of Toronto's Jimmy Ba explores techniques for optimizing model performance through better prompting strategies, moving beyond manual trial-and-error approaches. Discover methods for automatic prompt engineering, zero-shot chain-of-thought prompting, and strategies to enhance model truthfulness and informativeness. Examine practical applications in language and image tasks, including antonym identification, image classification, and out-of-distribution data handling. Gain insights into visualization techniques, dataset comparisons, and the latest developments in making foundation models more reliable and effective for downstream applications.
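The zero-shot chain-of-thought idea mentioned above can be sketched in a few lines: instead of hand-crafting few-shot examples, a reasoning trigger phrase is appended to the bare question before querying the model. The helper below is a minimal illustration, not the talk's actual code; `zero_shot_cot_prompt` is a hypothetical name, and the trigger shown is the commonly reported one.

```python
# Minimal sketch of zero-shot chain-of-thought prompting: append a
# reasoning trigger to the question so the model emits intermediate steps.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot_prompt(question: str) -> str:
    """Build a zero-shot CoT prompt from a bare question."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = zero_shot_cot_prompt(
    "A juggler has 16 balls and half of them are golf balls. "
    "How many golf balls are there?"
)
print(prompt)
```

The resulting string would then be sent to an LLM; the point of the technique is that this single fixed suffix often improves multi-step reasoning without any task-specific examples.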

Syllabus

Intro
Internet-scale Generative Models
Instruction Matters
Prompt Matters
Warm-up example
Automatic Prompt Engineer (APE)
Example - Find the antonyms
LLMs Are Human-Level Prompt Engineers
Zero-shot Chain-of-Thought
Can we find better zero-shot CoT prompts?
Steer LLMs to be more Truthful and Informative
Which of the images are generated?
Improve image classification with foundation models
Steering generators with out-of-distribution data
Generate more data by interpolation
Visualization on standard benchmarks
Comparison with real dataset
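The Automatic Prompt Engineer (APE) items in the syllabus refer to a search loop: sample candidate instructions from an LLM, score each one by how well the model performs on held-out demonstration pairs, and keep the best. The snippet below is a toy sketch of that selection step under stubbed components; `score_instruction`, `ape_select`, and `fake_model` are illustrative names, and a real APE run would sample candidates from and evaluate with an actual LLM.

```python
# Toy sketch of the APE selection loop: score each candidate instruction
# by demo accuracy and return the best one. The "model" here is a stub.

def score_instruction(instruction, demos, model):
    """Fraction of (input, target) demo pairs answered correctly."""
    hits = sum(
        model(f"{instruction}\nInput: {inp}\nOutput:") == target
        for inp, target in demos
    )
    return hits / len(demos)

def ape_select(candidates, demos, model):
    """Return the candidate instruction with the highest demo accuracy."""
    return max(candidates, key=lambda c: score_instruction(c, demos, model))

# Hypothetical antonym task (as in the talk's warm-up example), with a
# fake model that only succeeds when the instruction mentions "antonym".
demos = [("hot", "cold"), ("up", "down")]
antonyms = {"hot": "cold", "up": "down"}

def fake_model(prompt):
    inp = prompt.split("Input: ")[1].split("\n")[0]
    return antonyms[inp] if "antonym" in prompt.lower() else inp

candidates = ["Repeat the input word.", "Write the antonym of the word."]
best = ape_select(candidates, demos, fake_model)
print(best)  # the antonym instruction scores higher
```

With a real LLM in place of the stub, the same loop lets the model act as its own prompt engineer, which is the result behind the "LLMs Are Human-Level Prompt Engineers" slide.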

Taught by

Harvard CMSA

