
YouTube

Do Pretrained Transformers Learn In-Context by Gradient Descent?

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore a 15-minute conference talk presented by Aayush Mishra at ICML 2024, examining the relationship between In-Context Learning (ICL) and Gradient Descent (GD) in pre-trained language models. Delve into the limitations of previous theoretical connections between ICL and GD, highlighting the differences between experimental setups and real-world language model training. Analyze the speaker's findings on the divergent sensitivities of ICL and GD to demonstration order, and examine comprehensive empirical analyses conducted on the LLaMa-7B model. Gain insights into how ICL and GD differently modify output distributions in language models, and understand why the equivalence between these two concepts remains an open hypothesis requiring further investigation.
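The talk's central question is whether in-context learning behaves like an implicit gradient-descent update, and one line of evidence discussed is the two mechanisms' different sensitivity to demonstration order. The sketch below is illustrative only and is not the speaker's code: it assumes a small Hugging Face causal LM ("gpt2") as a stand-in for LLaMa-7B and a toy sentiment task, and simply compares the model's next-token label distribution across permutations of the same demonstrations. If ICL were equivalent to an order-invariant GD update over the demonstrations, these distributions would be nearly identical.

```python
# Minimal sketch (assumptions: "gpt2" stands in for LLaMa-7B, and the toy
# sentiment demonstrations are illustrative, not taken from the paper).
import itertools
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the analyses described in the talk use LLaMa-7B
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

demos = [
    "Review: great movie Sentiment: positive",
    "Review: boring plot Sentiment: negative",
    "Review: loved the acting Sentiment: positive",
]
query = "Review: terrible pacing Sentiment:"

def label_distribution(prompt: str) -> torch.Tensor:
    """Next-token distribution restricted to the ' positive'/' negative' labels."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the next token
    # Use the first subword of each label as a proxy for the label probability.
    label_ids = [tok.encode(" positive")[0], tok.encode(" negative")[0]]
    return torch.softmax(logits[label_ids], dim=-1)

# Permute the demonstrations and observe how much the ICL prediction shifts.
for i, perm in enumerate(itertools.permutations(demos)):
    prompt = "\n".join(perm) + "\n" + query
    probs = label_distribution(prompt)
    print(f"order {i}: P(positive)={probs[0].item():.3f}  P(negative)={probs[1].item():.3f}")
```

Large swings across orderings are the kind of qualitative discrepancy the talk uses to argue that the ICL-as-GD equivalence remains an open hypothesis rather than an established fact.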

Syllabus

Do Pretrained Transformers Learn In-Context by Gradient Descent? Aayush Mishra (ICML 2024)

Taught by

Center for Language & Speech Processing (CLSP), JHU
