
Large Language Models - Will They Keep Getting Bigger?

Massachusetts Institute of Technology via YouTube

Overview

Explore the future of large language models in this seminar by Luke Zettlemoyer at MIT. Delve into the challenges and possibilities of scaling language models, including sparse mixture-of-experts (MoE) models with reduced cross-node communication costs. Learn about innovative prompting techniques that control for surface form variation, improving performance without extensive task-specific fine-tuning. Discover new forms of supervision for language model training, such as learning from hypertext and multi-modal web page structures. Gain insights into the potential next generation of NLP models, covering topics like modern NLP scaling, algorithmic optimization, parallel training, domain structure, and inference procedures. Examine the benefits and challenges of modular approaches, perplexity numbers, and the fundamental challenges of generic language models. Investigate the role of noisy channel models, fine-tuning, and scoring strings in improving model performance. Consider the impact of web crawls, structured data efficiency, and multimodality on the future of language models.
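The "surface form variation" mentioned above refers to a pitfall of scoring answers as raw strings: a model may split probability mass across synonymous phrasings of the same answer, so the single most probable string can belong to the wrong underlying answer. The sketch below is a minimal illustration of that idea, with toy probabilities and hypothetical helper names invented for this example (not code from the talk).

```python
# Illustrative sketch of surface form competition: probability mass spreads
# across synonymous answer strings, so argmax over strings can disagree with
# argmax over underlying answers. All numbers here are made up for illustration.

def best_by_string(probs):
    """Pick the single surface form with the highest probability."""
    return max(probs, key=probs.get)

def best_by_concept(probs, concept_of):
    """Sum probability over all surface forms of each concept, then pick."""
    totals = {}
    for form, p in probs.items():
        concept = concept_of[form]
        totals[concept] = totals.get(concept, 0.0) + p
    return max(totals, key=totals.get)

# Toy distribution: "couch" and "sofa" name the same object and together
# outweigh "table", yet "table" beats either single form on its own.
probs = {"table": 0.40, "couch": 0.35, "sofa": 0.25}
concept_of = {"table": "TABLE", "couch": "SEAT", "sofa": "SEAT"}

print(best_by_string(probs))               # -> table
print(best_by_concept(probs, concept_of))  # -> SEAT
```

Controlling for this variation, for instance by aggregating or re-scoring surface forms, is one way prompting performance can improve without task-specific fine-tuning.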

Syllabus

Introduction
What are language models
Modern NLP
Scaling
Sparse models
GShard
Base Layers
Formal Optimization
Algorithmic Optimization
Experiments
Comparison
Benefits
DEMix layers
Representations
Simple routing
Training time
Parallel training
Data curation
Unrealistic setting
Domain structure
Inference procedure
Perplexity numbers
Modularity
Remove experts
Summary
Generic language models
Hot dog example
Hot pan example
Common sense example
Large language models
The fundamental challenge
Surface form competition
Flip the reasoning
Key intuition
Noisy channel models
Fine-tuning
Scoring Strings
Web Crawls
Example Output
Structured data
Efficiency
Questions
Density estimation
Better training objectives
Optimization
Probability
Induction
Multimodality
Outliers
Compute vs data

Taught by

MIT Embodied Intelligence
