LLM Safety, Alignment, and Generalization

Simons Institute via YouTube

Overview

Explore a comprehensive lecture on Large Language Model (LLM) safety, alignment, and generalization. Delve into the challenge of ruling out catastrophic harms as LLM capabilities rapidly improve across domains. Understand the importance of making affirmative safety cases for LLMs and of understanding their motivational structures, especially as they become capable of carrying out complex autonomous plans. Examine the need to develop a science of LLM generalization that explains how training data shapes a model's beliefs and motivations. Learn from Roger Grosse of the University of Toronto as part of the Simons Institute's Special Year on Large Language Models and Transformers: Part 1 Boot Camp.

Syllabus

LLM Safety, Alignment, and Generalization

Taught by

Roger Grosse (University of Toronto), via the Simons Institute

