Uncertainty, Prompting, and Chain-of-Thought in Large Language Models - Part 2
Overview
Learn about advanced concepts in uncertainty quantification and prompting techniques for large language models in this lecture. Explore temperature scaling and Bayesian approaches to calibration before diving into free-text explanations and chain-of-thought prompting. Master in-context learning (ICL), strategies for making it reliable, and prompt-based fine-tuning. Examine practical applications through case studies of the FLAN-T5 and LLaMA Chat models, and see how these techniques improve model calibration, performance, and reliability through detailed explanations and real-world examples.
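Of the calibration methods named above, temperature scaling is the simplest: divide the model's logits by a single scalar T fit on held-out data, which leaves accuracy unchanged while adjusting confidence. Below is a minimal sketch of the idea, assuming PyTorch and a validation set of logits and labels; the function name and optimizer choice are illustrative assumptions, not the lecture's code (the original method of Guo et al., 2017, typically uses L-BFGS).

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Fit a scalar temperature T on held-out validation data by minimizing
    negative log-likelihood, in the spirit of Guo et al. (2017).
    val_logits: (N, C) float tensor of uncalibrated logits; val_labels: (N,) int tensor."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# At test time, calibrated confidences are softmax(test_logits / T):
# T > 1 softens overconfident predictions, T < 1 sharpens underconfident ones.
```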
Syllabus
Reminders
Recap of Part 1 on uncertainty
Temperature scaling
Bayesian approaches to calibration
Free-text explanations / chain-of-thought intro
Prompt-based fine-tuning
In-context learning (ICL)
Reliable ICL
Chain-of-thought prompting (see the prompt sketch after this list)
FLAN-T5
LLaMA Chat
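To make the prompting topics above concrete, here is a minimal sketch of the two prompt styles the syllabus covers: few-shot in-context learning, where worked examples placed in the prompt let the model continue the pattern with no gradient updates, and zero-shot chain-of-thought, where a trigger phrase such as "Let's think step by step" (Kojima et al., 2022) elicits intermediate reasoning before the answer. The questions and exact phrasing are illustrative assumptions, not examples taken from the lecture.

```python
# Few-shot in-context learning (ICL): the worked example demonstrates the
# task format, and the model imitates it without any fine-tuning.
icl_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: 5 + 2 * 3 = 11. The answer is 11.\n\n"
    "Q: A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
    "How many apples does it have now?\n"
    "A:"
)

# Zero-shot chain-of-thought: a trigger phrase prompts step-by-step
# reasoning before the final answer (Kojima et al., 2022).
cot_prompt = (
    "Q: A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
    "How many apples does it have now?\n"
    "A: Let's think step by step."
)
```

Either string can be sent as-is to an instruction-tuned model such as FLAN-T5 or LLaMA Chat, the two case-study models examined in the lecture.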
Taught by
UofU Data Science