Prompt Optimization and Parameter Efficient Fine Tuning for Large Language Models
Toronto Machine Learning Series (TMLS) via YouTube
Overview
Explore the cutting-edge techniques of prompt optimization and parameter-efficient fine-tuning (PEFT) in this 28-minute conference talk from the Toronto Machine Learning Series. Delve into the growing importance of prompting and prompt design as large language models (LLMs) become increasingly general-purpose. Discover how well-constructed prompts can significantly enhance LLM performance across a range of downstream tasks. Examine the challenges of manual prompt optimization and learn about state-of-the-art optimization techniques, including both discrete approaches (searching over natural-language tokens) and continuous approaches (tuning soft prompt embeddings). Investigate PEFT methods, with a focus on Adapters and LoRA, and understand how these approaches can match or surpass full-model fine-tuning performance on many tasks while updating only a small fraction of the model's parameters. Gain valuable insights from David Emerson, an Applied Machine Learning Scientist at the Vector Institute, as he shares his expertise in this rapidly evolving field of AI research.
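For context on the LoRA technique mentioned above, the following minimal PyTorch sketch (not taken from the talk; the class and parameter names are illustrative) shows the core idea: the pretrained weight matrix is frozen, and only a small low-rank update B·A is trained, so the number of trainable parameters drops from in_features × out_features to r × (in_features + out_features).

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are the only trainable weights."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_a.weight, std=0.02)
        nn.init.zeros_(self.lora_b.weight)  # update starts at zero, so training begins from the pretrained model
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Usage: wrap a projection layer (e.g., an attention projection) with its LoRA version.
layer = nn.Linear(768, 768)
wrapped = LoRALinear(layer, r=8)
x = torch.randn(2, 768)
print(wrapped(x).shape)  # torch.Size([2, 768])

Because the frozen base weights are shared, many task-specific LoRA modules can be stored and swapped cheaply, which is part of why such methods can rival full-model fine-tuning at a fraction of the cost.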
Syllabus
Prompt Optimization and Parameter Efficient Fine Tuning
Taught by
Toronto Machine Learning Series (TMLS)