Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?
Santa Fe Institute via YouTube
Overview
A Santa Fe Institute talk asking whether a foundation model trained purely through self-supervision could achieve grounded language understanding, moving from the classic "Could a Machine Think?" debate through conceptions of semantics, the Bender & Koller (2020) critique, multi-modal training streams, and causal abstraction analysis of large networks.
Syllabus
Intro
Could a purely self-supervised Foundation Model achieve grounded language understanding?
Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
A quick summary of "Could a Machine Think?"
Foundation Models (FMs)
Self-supervision
Two paths to world-class AI chess?
Conceptions of semantics
Bender & Koller 2020: Symbol streams lack crucial information
Multi-modal streams
Metaphysics and epistemology of understanding
Behavioral testing: Tricky with Foundation Models
Internalism at work: Causal abstraction analysis
Findings of causal abstraction in large networks
Taught by
Santa Fe Institute