

Uncertainty, Prompting, and Chain-of-Thoughts in Large Language Models - Part 2

UofU Data Science via YouTube

Overview

Learn advanced concepts in uncertainty quantification and prompting for large language models in this lecture. Explore temperature scaling and Bayesian approaches to calibration before moving on to free-text explanations and chain-of-thought prompting. Study in-context learning (ICL) and how to apply it reliably, along with prompt-based fine-tuning strategies. Examine practical applications through case studies of the FLAN-T5 and LLaMA Chat models, and see how these techniques improve model performance and reliability.
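Temperature scaling, named in the overview as a calibration method, divides a model's logits by a scalar temperature T (fit on held-out data) before the softmax, softening overconfident predictions without changing the predicted class. A minimal NumPy sketch with illustrative logits (not values from the lecture):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def temperature_scale(logits, T):
    # Divide logits by temperature T before the softmax.
    # T > 1 flattens the distribution (less confident);
    # T < 1 sharpens it; T = 1 leaves it unchanged.
    return softmax(np.asarray(logits, dtype=float) / T)

logits = [3.0, 1.0, 0.2]                  # illustrative, uncalibrated logits
p_raw = temperature_scale(logits, T=1.0)
p_cal = temperature_scale(logits, T=2.0)  # in practice T is fit on a validation set
print(p_raw.max(), p_cal.max())           # confidence drops when T > 1
```

Note that the argmax is unchanged, so accuracy is unaffected; only the confidence estimates move.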

Syllabus

Reminders
Recap of part 1 on uncertainty
Temperature scaling
Bayesian approaches to calibration
Free-text explanations / chain-of-thought intro
Prompt-based fine-tuning
In-context learning (ICL)
Reliable ICL
Chain-of-thought prompting
FLAN-T5
LLaMA Chat
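The chain-of-thought prompting item above refers to prepending worked demonstrations whose answers spell out intermediate reasoning steps, so the model continues in the same step-by-step style. A minimal sketch of how such a prompt is assembled, with illustrative example content (not taken from the lecture):

```python
# A few-shot chain-of-thought prompt: each demonstration shows its
# reasoning before the final answer. Example content is illustrative.
demonstrations = [
    ("Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
     "How many balls does he have now?",
     "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
     "5 + 6 = 11. The answer is 11."),
]
question = ("A cafeteria had 23 apples. They used 20 for lunch and bought "
            "6 more. How many apples do they have?")

prompt = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in demonstrations)
prompt += f"Q: {question}\nA:"  # the model completes the reasoning chain
print(prompt)
```

With zero reasoning in the demonstrations this reduces to plain in-context learning; adding the step-by-step answers is what makes it chain-of-thought.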

Taught by

UofU Data Science

