

Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?

Santa Fe Institute via YouTube

Overview

Explore a thought-provoking lecture by Stanford University Professor Chris Potts examining the potential for purely self-supervised foundation models to achieve grounded language understanding. Delve into topics including classical AI approaches, brain-mimicking systems, conceptions of semantics, and the challenges of behavioral testing for foundation models. Analyze the metaphysics and epistemology of understanding, and discover findings on causal abstraction in large networks. Gain insights into cutting-edge AI research and its implications for language comprehension and artificial intelligence development.

Syllabus

Intro
Could a purely self-supervised Foundation Model achieve grounded language understanding?
Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
A quick summary of "Could a machine think?"
Foundation Models (FMs)
Self-supervision
Two paths to world-class AI chess?
Conceptions of semantics
Bender & Koller 2020: Symbol streams lack crucial information
Multi-modal streams
Metaphysics and epistemology of understanding
Behavioral testing: Tricky with Foundation Models
Internalism at work: Causal abstraction analysis
Findings of causal abstraction in large networks

Taught by

Santa Fe Institute
