Cognitive Maps in Large Language Models - Multiscale Predictive Representations

Santa Fe Institute via YouTube

Classroom Contents

  1. Intro
  2. Multiscale predictive cognitive maps in hippocampal & prefrontal hierarchies
  3. Cognitive maps: learned representations of relational structures for goal-directed multistep planning & inference (a minimal successor-representation sketch follows this list)
  4. Conditions
  5. GPT-4 32K vs. GPT-3.5 Turbo at temperature 0, .5, 1
  6. GPT-4 32K is comfortable with deeper trees
  7. GPT-4 fails shortest path in graphs with dense community structure & sometimes hallucinates edges (see the LLM probe sketch after this list)
  8. Can chain-of-thought (CoT) prompts improve LLMs' cognitive-map performance?
  9. In the cognitive & neuro-sciences, errors & response latencies are windows into minds & brains; do they play the same role for AI/LLMs?
  10. LLMs are not comparable to one person: specific latent states in response to a prompt may appear so, but they don't qualify for mental life
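
Chapter 3 frames cognitive maps as multiscale predictive representations, which the successor representation (SR) makes concrete. The sketch below is illustrative only and is not code from the talk: it builds a small graph with community structure (using networkx, an assumption) and computes the SR M_gamma = (I - gamma*T)^(-1) at several discount horizons, so larger gamma yields a coarser, longer-range predictive map.

    import numpy as np
    import networkx as nx

    # Small graph with dense community structure (4 cliques of 5 nodes, ring-linked).
    G = nx.connected_caveman_graph(4, 5)
    A = nx.to_numpy_array(G)
    T = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix

    # Successor representation at horizon gamma: M = sum_t gamma^t T^t = (I - gamma*T)^-1.
    # Small gamma -> fine-grained local map; large gamma -> coarse, long-range map.
    for gamma in (0.1, 0.5, 0.9):
        M = np.linalg.inv(np.eye(len(A)) - gamma * T)
        local = M[0, :5].sum()    # expected discounted occupancy within node 0's clique
        remote = M[0, 5:].sum()   # occupancy in the other communities
        print(f"gamma={gamma}: within-community {local:.2f} vs. between-community {remote:.2f}")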

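Chapters 5, 7, and 8 describe probing LLMs with graph shortest-path queries across models, temperatures, and chain-of-thought prompting. Below is a minimal sketch of that kind of probe, assuming the official OpenAI Python client; the model names, prompt wording, and graph are illustrative stand-ins, not the speaker's actual setup.

    import networkx as nx
    from openai import OpenAI   # assumes the official OpenAI Python client is installed

    client = OpenAI()   # reads OPENAI_API_KEY from the environment

    # The style of graph the talk associates with failures: dense community structure.
    G = nx.connected_caveman_graph(4, 5)
    edges = ", ".join(f"{u}-{v}" for u, v in G.edges())
    source, target = 0, 12
    ground_truth = nx.shortest_path(G, source, target)

    prompt = (
        f"A graph has these undirected edges: {edges}. "
        f"Give the shortest path of node labels from {source} to {target}."
    )

    def ask(model: str, temperature: float, chain_of_thought: bool = False) -> str:
        """Query one model/temperature condition, optionally with a CoT instruction."""
        content = prompt + (" Think step by step, then state the path." if chain_of_thought else "")
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": content}],
            temperature=temperature,
        )
        return reply.choices[0].message.content

    # "gpt-4" and "gpt-3.5-turbo" stand in for the GPT-4 32K / GPT-3.5 Turbo conditions.
    for model in ("gpt-4", "gpt-3.5-turbo"):
        for temperature in (0.0, 0.5, 1.0):
            print(model, temperature, "| true:", ground_truth, "| model:", ask(model, temperature))

Comparing the model's answer against the networkx ground truth also flags the failure mode in chapter 7: any step in the returned path that is not among G.edges() is a hallucinated edge.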