Explore the "data gap" between Large Language Models (LLMs) and human children in this 48-minute talk by Michael Frank of Stanford University. The talk examines why LLMs require 3-5 orders of magnitude more training data than children, weighing candidate explanations such as innate knowledge, active and social learning, multimodal information, and differences in how the two are evaluated. It presents new data on the richness of multimodal input and on the consequences of those evaluation disparities, and explores how the cognitive-science distinction between competence and performance applies to LLMs, offering perspectives on higher-level intelligence from AI, psychology, and neuroscience.
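To make the scale of that gap concrete, here is a minimal back-of-envelope sketch in Python. The input figures are illustrative assumptions chosen to reproduce the 3-5 range quoted above, not numbers taken from the talk itself:

```python
import math

# Illustrative assumptions only (not figures from the talk): a child
# hears very roughly 1e7-1e8 words across early development, while
# LLMs of this era train on very roughly 1e11-1e12 tokens.
CHILD_WORDS = (1e7, 1e8)
LLM_TOKENS = (1e11, 1e12)

# Orders of magnitude separating the two, at both ends of the ranges.
low = math.log10(LLM_TOKENS[0] / CHILD_WORDS[1])   # conservative end: 3
high = math.log10(LLM_TOKENS[1] / CHILD_WORDS[0])  # generous end: 5

print(f"Data gap: roughly {low:.0f}-{high:.0f} orders of magnitude")
```

Running the sketch prints "Data gap: roughly 3-5 orders of magnitude", matching the range the talk sets out to explain.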