Typical Decoding for Natural Language Generation - Get More Human-Like Outputs From Language Models

Yannic Kilcher via YouTube

Classroom Contents

  1. Intro
  2. Sponsor: Fully Connected by Weights & Biases
  3. Paper Overview
  4. What's the problem with sampling?
  5. Beam Search: The good and the bad
  6. Top-k and Nucleus Sampling
  7. Why the most likely things might not be the best
  8. The expected information content of the next word
  9. How to trade off information and likelihood
  10. Connections to information theory and psycholinguistics
  11. Introducing Typical Sampling (see the sketch after this list)
  12. Experimental Evaluation
  13. My thoughts on this paper
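
The core of the video (chapters 8 through 11) is the typical sampling procedure itself: instead of keeping the highest-probability tokens, it keeps the tokens whose information content (surprisal) is closest to the expected information content (entropy) of the next-token distribution, and samples from that set. Below is a minimal sketch of that idea, assuming a toy next-token distribution; the `typical_sampling` function name and the `tau` cutoff value are illustrative, not the paper authors' code.

```python
import numpy as np

def typical_sampling(probs: np.ndarray, tau: float = 0.95, rng=None) -> int:
    """Sample a token index from `probs` using typical sampling.

    Keeps the tokens whose surprisal (-log p) is closest to the
    distribution's entropy, until their cumulative mass reaches `tau`,
    then samples from that renormalized set.
    """
    if rng is None:
        rng = np.random.default_rng()
    surprisal = -np.log(probs + 1e-12)        # information content of each token
    entropy = float(np.sum(probs * surprisal))  # expected information content
    deviation = np.abs(surprisal - entropy)   # distance from "typical" information
    order = np.argsort(deviation)             # most typical tokens first
    cum_mass = np.cumsum(probs[order])
    # Smallest typicality-ranked set whose total mass reaches tau.
    cutoff = int(np.searchsorted(cum_mass, tau)) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()  # renormalize over the typical set
    return int(rng.choice(keep, p=kept_probs))

# Toy next-token distribution over a 5-word vocabulary (illustrative only).
probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
print(typical_sampling(probs, tau=0.9))
```

Note that with this toy distribution the most probable token (p = 0.5) is not the most typical one: its surprisal sits furthest below the entropy, which is exactly the effect the video discusses in chapter 7. In practice, decoding libraries expose this method directly; Hugging Face Transformers, for example, offers it through the `typical_p` parameter of `generate()`.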
