Instruction Tuning of Large Language Models - Lecture

Center for Language & Speech Processing (CLSP), JHU, via YouTube

Classroom Contents

  1. Intro
  2. ChatGPT/GPT-4 are real generalists
  3. How did models acquire the vast capabilities?
  4. NLP before 2018: building task-specific models
  5. Classical multi-task learning
  6. Generalization to unseen tasks via instructions
  7. Expert-written instructions for all tasks
  8. Strict train/test split for cross-task generalization
  9. Instruction tuning significantly improves LLMs
  10. What are the most important factors?
  11. Other models trained on existing NLP datasets
  12. Data is OpenAI's secret weapon
  13. Can we construct a similar instruction dataset by crowdsourcing?
  14. LLMs can be prompted to generate instructions
  15. LMs can be prompted to generate instances
  16. Instruction data generation pipeline (a minimal sketch follows this list)
  17. Generating 52K instructions with GPT-3
  18. Tasks generated by GPT-3
  19. Data quality review
  20. Performance on SuperNI
  21. Expert evaluation on 252 user-oriented instructions
  22. Effect of data size and data quality (using human eval)
  23. Takeaways
  24. Licensing concerns about using OpenAI output?
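
Items 14-17 outline what appears to be a Self-Instruct-style bootstrapping loop: the model is prompted few-shot to propose new task instructions, then prompted again to fill in input/output instances, with near-duplicate instructions filtered out before joining the pool. The sketch below only illustrates that loop under stated assumptions: `call_llm` is a hypothetical placeholder for any completion API, and the prompt templates and the duplicate filter are simplified stand-ins, not the exact code or templates from the lecture.

```python
import random

# A few hand-written seed tasks to bootstrap from (illustrative examples).
SEED_TASKS = [
    "Classify the sentiment of the given movie review as positive or negative.",
    "Translate the following English sentence into French.",
    "Summarize the given news article in one sentence.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion API call."""
    raise NotImplementedError("plug in a real LLM client here")

def propose_instruction(pool, k=3):
    # Few-shot prompt: show k existing instructions and ask the model for a new one.
    examples = random.sample(pool, k)
    lines = [f"Task {i}: {task}" for i, task in enumerate(examples, start=1)]
    prompt = ("Come up with a new task instruction.\n"
              + "\n".join(lines)
              + f"\nTask {len(examples) + 1}:")
    return call_llm(prompt).strip()

def fill_instance(instruction):
    # Ask the model to produce one (input, output) pair for the new instruction.
    prompt = (f"Instruction: {instruction}\n"
              "Provide one example.\nInput:")
    completion = call_llm(prompt)
    input_text, _, output_text = completion.partition("Output:")
    return {"instruction": instruction,
            "input": input_text.strip(),
            "output": output_text.strip()}

def is_novel(candidate, pool):
    # Crude exact-match filter; a real pipeline would prune near-duplicates,
    # e.g. by ROUGE-L overlap with instructions already in the pool.
    return candidate.lower() not in (existing.lower() for existing in pool)

def grow_dataset(target_size):
    pool, dataset = list(SEED_TASKS), []
    while len(dataset) < target_size:
        instruction = propose_instruction(pool)
        if is_novel(instruction, pool):
            pool.append(instruction)
            dataset.append(fill_instance(instruction))
    return dataset
```

A practical run would replace the exact-match novelty check with a similarity threshold so the generated instructions stay diverse rather than collapsing onto paraphrases of the seeds.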
