
Cognitive Maps in Large Language Models - Multiscale Predictive Representations

Santa Fe Institute via YouTube

Overview

Explore a conference talk examining cognitive maps in large language models, focusing on multiscale predictive representations in hippocampal and prefrontal hierarchies. Delve into the comparison between GPT-4 32K and GPT-3.5 Turbo under various temperature settings, analyzing their performance in graph navigation and shortest path problems. Investigate the potential of chain of thought prompts to enhance LLMs' cognitive map capabilities. Consider the implications of errors and response latencies in understanding AI systems, while acknowledging the fundamental differences between LLMs and human cognition.
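The evaluation described above checks whether an LLM can report shortest paths in graphs whose community structure is dense. A minimal sketch of such a check, using a breadth-first search as ground truth, is below; the specific graph, node names, and scoring logic are illustrative assumptions, not the talk's actual benchmark.

```python
from collections import deque

def bfs_shortest_path(edges, start, goal):
    """Breadth-first search: returns one shortest path from start to goal."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Two dense communities (A-B-C and D-E-F) joined by one bridge edge --
# the kind of community structure on which the talk reports GPT-4 failing.
edges = [("A", "B"), ("B", "C"), ("A", "C"),   # community 1
         ("D", "E"), ("E", "F"), ("D", "F"),   # community 2
         ("C", "D")]                           # bridge

ground_truth = bfs_shortest_path(edges, "A", "F")
print(ground_truth)  # ['A', 'C', 'D', 'F']

def is_valid_shortest(answer, edges, optimum_len):
    """Score a model's claimed path: every hop must be a real edge,
    and the path must match the BFS-optimal length."""
    edge_set = {frozenset(e) for e in edges}
    hops_ok = all(frozenset(hop) in edge_set for hop in zip(answer, answer[1:]))
    return hops_ok and len(answer) == optimum_len

# A hallucinated edge such as A-F fails the membership check:
print(is_valid_shortest(["A", "F"], edges, len(ground_truth)))  # False
```

Validating each claimed hop against the edge list is what catches the hallucinated-edges failure mode mentioned in the syllabus: a fluent-looking path is rejected unless every edge actually exists.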

Syllabus

Intro
Multiscale Predictive Cognitive Maps in Hippocampal & Prefrontal Hierarchies
Cognitive maps: learned representations of relational structures, for goal-directed multistep planning & inference
Conditions
GPT-4 32K vs. GPT-3.5 Turbo, at temperatures 0, 0.5, and 1
GPT-4 32K is comfortable with deeper trees
GPT-4 fails shortest path in graphs with dense community structure & sometimes hallucinates edges
Can chain-of-thought (CoT) prompts improve LLMs' cognitive map performance?
In the cognitive and neurosciences, errors and response latencies are windows into minds and brains; can they be for AI/LLMs?
LLMs are not comparable to one person: specific latent states in response to a prompt may appear so, but they don't qualify as mental life
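The chain-of-thought question in the syllabus amounts to prepending step-by-step reasoning instructions to the graph task. A minimal sketch of such a prompt builder follows; the wording and edge format are hypothetical, not the prompts used in the talk.

```python
# Hypothetical CoT prompt for the shortest-path task; illustrative only.
def cot_prompt(edges, start, goal):
    """Build a chain-of-thought style prompt for a shortest-path query."""
    edge_list = ", ".join(f"{u}-{v}" for u, v in edges)
    return (
        f"You are navigating a graph with undirected edges: {edge_list}.\n"
        f"Find the shortest path from {start} to {goal}.\n"
        "Think step by step: list the neighbors of each node you visit, "
        "and only use edges that appear in the list above. "
        "Then state the final path."
    )

edges = [("A", "B"), ("B", "C"), ("C", "D")]
print(cot_prompt(edges, "A", "D"))
```

The plain (non-CoT) condition would send only the first two lines; comparing accuracy across the two prompt styles is the manipulation the syllabus asks about.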

Taught by

Santa Fe Institute
