Classroom Contents
Fine-Tuning LLMs: Best Practices and When to Go Small - Lecture 124
- 1 [] Introduction to Mark Kim-Huang
- 2 [] Join the LLMs in Production Conference Part 2 on June 15-16!
- 3 [] Fine-Tuning LLMs: Best Practices and When to Go Small
- 4 [] Model approaches
- 5 [] You might think you could just use OpenAI, but only older base models are available
- 6 [] Why custom LLMs over closed-source models?
- 7 [] Small models work well for simple tasks
- 8 [] Types of Fine-Tuning
- 9 [] Strategies for improving fine-tuning performance
- 10 [] Challenges
- 11 [] Define your task
- 12 [] Task framework
- 13 [] Defining tasks
- 14 [] Task clustering diversifies training data and improves out-of-domain performance
- 15 [] Prompt engineering
- 16 [] Constructing a prompt
- 17 [] Synthesize more data
- 18 [] Constructing a prompt
- 19 [] Increase fine-tuning efficiency with LoRA
- 20 [] Naive data parallelism with mixed precision is inefficient
- 21 [] Further reading on mixed precision
- 22 [] Parameter-efficient fine-tuning with LoRA (see the sketch after this list)
- 23 [] LoRA data parallelism with mixed precision
- 24 [] Summary
- 25 [] Q&A
- 26 [] Mark's journey to LLMs
- 27 [] Mixing task clustering with existing datasets
- 28 [] Using the LangChain Auto Evaluator to evaluate LLMs
- 29 [] Cloud platform costs
- 30 [] Vector database used at Preemo
- 31 [] Finding a model's reasoning path through prompting
- 32 [] When to fine-tune versus prompt with a context window
- 33 [] Wrap up
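
For reference, here is a minimal sketch of the parameter-efficient fine-tuning setup discussed in items 19, 22, and 23, using the Hugging Face PEFT library. This is illustrative only and not the configuration from the talk: the base model name, target modules, and hyperparameters are assumptions.

```python
# Minimal LoRA fine-tuning sketch (illustrative, not the speaker's code).
# Model name, target modules, and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "EleutherAI/pythia-1b"  # assumed small base model for illustration

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,  # load weights in half precision for mixed-precision training
)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the LoRA update
    target_modules=["query_key_value"],   # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Only the small LoRA matrices are trainable; the frozen base weights are what
# make data-parallel fine-tuning far cheaper than full-parameter training.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```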