Stanford University

Forecasting and Aligning AI - Jacob Steinhardt

Stanford University via YouTube

Overview

Modern ML systems sometimes undergo qualitative shifts in behavior simply by “scaling up” the number of parameters and training examples. Given this, how can we extrapolate the behavior of future ML systems and ensure that they behave safely and are aligned with humans? I’ll argue that we can often study (potential) capabilities of future ML systems through well-controlled experiments run on current systems, and use this as a laboratory for designing alignment techniques. I’ll also discuss some recent work on “medium-term” AI forecasting.

Syllabus

Introduction.
Rest of Talk.
Reward Hacking: Motivation.
Reward Hacking Example.
Reward Hacking: Example.
Summary of Full Results.
Reward Hacking: Summary.
Making NLP Models Truthful.
Contrastive Representation Clustering.
Results on Unified QA.
Caveat: True Answers Work Too.
Forecasting: Motivation.
Forecasting Competition.
Forecasting Questions.
Summary of Benchmark Forecasts.
Results So Far.
Forecasting: Lessons Learned.
Forecasting Class.

Taught by

Stanford Online
