Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?


Santa Fe Institute via YouTube


Class Central Classrooms

YouTube videos curated by Class Central.

Classroom Contents



  1. Intro
  2. Could a purely self-supervised Foundation Model achieve grounded language understanding?
  3. Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
  4. A quick summary of "Could a machine think?"
  5. Foundation Models (FMs)
  6. Self-supervision
  7. Two paths to world-class AI chess?
  8. Conceptions of semantics
  9. Bender & Koller 2020: Symbol streams lack crucial information
  10. Multi-modal streams
  11. Metaphysics and epistemology of understanding
  12. Behavioral testing: Tricky with Foundation Models
  13. Internalism at work: Causal abstraction analysis
  14. Findings of causal abstraction in large networks
