
Instruction Tuning of Large Language Models - Lecture

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore the concept of instruction tuning for large language models in this 48-minute lecture by Yizhong Wang from the University of Washington. Delve into the evolution of NLP models, from task-specific approaches to generalist models like ChatGPT and GPT-4. Examine the impact of expert-written instructions and cross-task generalization on model performance. Investigate the factors contributing to LLM improvements, including data quality and quantity. Learn about innovative techniques for generating instruction datasets using GPT-3, and evaluate their effectiveness through performance metrics and expert assessments. Consider the implications of data size and quality on model capabilities, and reflect on potential licensing concerns related to using OpenAI-generated content.
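The overview mentions generating instruction datasets with GPT-3 (the Self-Instruct approach). A minimal sketch of such a pipeline is below; `call_llm` is a hypothetical stand-in for a real GPT-3 API call, stubbed out here so the overall structure (propose instructions, generate instances, filter duplicates) is runnable on its own.

```python
import random

# Hypothetical Self-Instruct-style data generation pipeline sketch.
# `call_llm` is a stub: a real pipeline would query an LLM such as GPT-3.
SEED_TASKS = [
    {"instruction": "Translate the sentence to French.",
     "instance": "Hello -> Bonjour"},
    {"instruction": "Summarize the paragraph in one sentence.",
     "instance": "..."},
]

def call_llm(prompt):
    # Stub standing in for a real model call; returns a deterministic string.
    return f"New task inspired by: {prompt[:40]}"

def generate_instructions(task_pool, num_new=4, demos=2):
    """Step 1: prompt the model with in-context demos to propose new instructions."""
    proposals = []
    for _ in range(num_new):
        sampled = random.sample(task_pool, min(demos, len(task_pool)))
        prompt = "\n".join(t["instruction"] for t in sampled)
        proposals.append(call_llm(prompt))
    return proposals

def generate_instances(instruction):
    """Step 2: prompt the model to write input/output instances for an instruction."""
    return call_llm(f"Write an example for: {instruction}")

def filter_duplicates(instructions, task_pool):
    """Step 3: drop instructions already in the pool.
    (Real pipelines use a similarity measure such as ROUGE overlap.)"""
    existing = {t["instruction"] for t in task_pool}
    return [i for i in instructions if i not in existing]

def pipeline(seed_tasks, rounds=2):
    """Iteratively grow the task pool from a small set of seed tasks."""
    pool = list(seed_tasks)
    for _ in range(rounds):
        candidates = filter_duplicates(generate_instructions(pool), pool)
        pool.extend({"instruction": c, "instance": generate_instances(c)}
                    for c in candidates)
    return pool

dataset = pipeline(SEED_TASKS)
```

Scaling this loop with a real model is how the 52K-instruction dataset discussed in the lecture was produced.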

Syllabus

Intro
ChatGPT/GPT-4 are real generalists
How did models acquire the vast capabilities?
NLP before 2018: building task-specific models
Classical multi-task learning
Generalization to unseen tasks via instructions
Expert-written instructions for all tasks
Strict train/test split for cross-task generalization
Instruction tuning significantly improves LLMs
What are the most important factors?
Other models trained on existing NLP datasets
Data is OpenAI's secret weapon
Can we construct a similar instruction dataset by crowdsourcing?
LLMs can be prompted to generate instructions
LMs can be prompted to generate instances
Instruction data generation pipeline
Generating 52K instructions with GPT-3
Tasks generated by GPT-3
Data quality review
Performance on SuperNI
Expert evaluation on 252 user-oriented instructions
Effect of data size and data quality (using human eval)
Takeaways
Licensing concern about using OpenAI output?

Taught by

Center for Language & Speech Processing (CLSP), JHU
