Instruction Tuning of Large Language Models - Lecture
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Syllabus
Intro
ChatGPT/GPT-4 are real generalists
How did these models acquire such vast capabilities?
NLP before 2018: building task-specific models
Classical multi-task learning
Generalization to unseen tasks via instructions
Expert-written instructions for all tasks
Strict train/test split for cross-task generalization
Instruction tuning significantly improves LLMs
What are the most important factors?
Other models trained on existing NLP datasets
Data is OpenAI's secret weapon
Can we construct a similar instruction dataset by crowdsourcing?
LLMs can be prompted to generate instructions
LMs can be prompted to generate instances
Instruction data generation pipeline (a rough sketch follows this syllabus)
Generating 52K instructions with GPT-3
Tasks generated by GPT-3
Data quality review
Performance on SuperNI
Expert evaluation on 252 user-oriented instructions
Effect of data size and data quality (using human eval)
Takeaways
Licensing concerns about using OpenAI output?
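The data-generation topics above (prompting an LLM to propose new instructions, then to produce instances for them) follow the Self-Instruct-style recipe covered in the lecture. Below is a minimal, hypothetical sketch of that loop, not the lecture's exact setup: the `complete` function is a placeholder for whatever text-completion call you use (it is not a specific library API), and the seed tasks, prompt wording, and loop length are illustrative assumptions.

```python
# Minimal sketch of a Self-Instruct-style generation loop (illustrative only).
import random

def complete(prompt: str) -> str:
    # Placeholder for an LLM completion call; swap in a real client.
    return "Translate the following sentence into French."  # dummy response

SEED_TASKS = [
    "Write a short poem about autumn.",
    "Classify the sentiment of this tweet as positive or negative.",
    "Summarize the following paragraph in one sentence.",
]

def propose_instruction(pool, n_examples=3):
    """Show the model a few existing tasks and ask it to invent a new one."""
    examples = random.sample(pool, min(n_examples, len(pool)))
    lines = ["Come up with a new task instruction."]
    lines += [f"Task {i}: {t}" for i, t in enumerate(examples, 1)]
    lines.append(f"Task {len(examples) + 1}:")
    return complete("\n".join(lines)).strip()

def generate_instance(instruction):
    """Ask the model for one (input, output) pair demonstrating the instruction."""
    prompt = (f"Instruction: {instruction}\n"
              "Give one example input and the correct output.\nInput:")
    return complete(prompt)

if __name__ == "__main__":
    pool = list(SEED_TASKS)
    for _ in range(3):                  # the actual pipeline iterates to ~52K instructions
        new_task = propose_instruction(pool)
        pool.append(new_task)           # the real pipeline filters near-duplicates here
        print(new_task, "->", generate_instance(new_task))
```

The actual pipeline also filters low-quality and near-duplicate generations before adding them to the pool, a step omitted above for brevity.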
Taught by
Center for Language & Speech Processing (CLSP), JHU