How to Steer Foundation Models - Techniques for Optimizing Language and Image Tasks

Harvard CMSA via YouTube

Class Central Classrooms (beta): YouTube videos curated by Class Central.

Classroom Contents

  1. Intro
  2. Internet-scale Generative Models
  3. Instruction Matters
  4. Prompt Matters
  5. Warm-up example
  6. Automatic Prompt Engineer (APE)
  7. Example - Find the antonyms
  8. LLMs Are Human-Level Prompt Engineers
  9. Zero-shot Chain-of-Thought
  10. Can we find better zero-shot CoT?
  11. Steer LLMs to be more Truthful and Informative
  12. Which of the images are generated?
  13. Improve image classification with foundation models
  14. Steering generators with out-of-distribution data
  15. Generate more data by interpolation
  16. Visualization on standard benchmarks
  17. Comparison with real dataset
