Rational Ignorance - Optimal Learning from Complex Mechanistic Models
Santa Fe Institute via YouTube
Overview
Explore the concept of rational ignorance in complex mechanistic models through this lecture by Benjamin Machta of Yale University. Delve into the challenges of overfitting in statistical models with many parameters and examine why commonly used uninformative priors, such as the Jeffreys prior, fail to solve this problem. Discover a novel approach that maximizes expected information to construct an optimal prior, avoiding the bias that irrelevant parameters would otherwise introduce. Investigate how this method adapts to model dimensionality and compare it to traditional approaches. Learn about topics such as Hessian matrices, Fisher information, sloppy models, Bayesian inference, and mutual information. Gain insights into the implications of this research for fields like climate modeling and predictive modeling in general.
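As a rough illustration of the quantities mentioned above (not drawn from the talk itself), the Python sketch below computes a Jeffreys prior, which is proportional to the square root of the Fisher information, for a simple one-parameter Bernoulli model; the choice of model and all function names are assumptions made purely for this example.

import numpy as np

def fisher_information_bernoulli(theta):
    # Fisher information of one Bernoulli observation with success probability theta:
    # I(theta) = 1 / (theta * (1 - theta)).
    return 1.0 / (theta * (1.0 - theta))

# Evaluate on a grid strictly inside (0, 1) to avoid the singular endpoints.
thetas = np.linspace(0.001, 0.999, 999)
jeffreys_unnormalized = np.sqrt(fisher_information_bernoulli(thetas))

# Normalize numerically (simple Riemann sum) so the prior integrates to one on the grid.
dtheta = thetas[1] - thetas[0]
jeffreys_prior = jeffreys_unnormalized / (jeffreys_unnormalized.sum() * dtheta)

# The result matches the known closed form, a Beta(1/2, 1/2) density,
# which places extra weight near theta = 0 and theta = 1.
print(jeffreys_prior[0], jeffreys_prior[len(thetas) // 2], jeffreys_prior[-1])

For multi-parameter models the same recipe uses the square root of the determinant of the Fisher information matrix; a central point of the lecture is that this construction behaves poorly when many of those parameters are irrelevant or "sloppy", motivating the information-maximizing prior discussed in the talk.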
Syllabus
Intro
Water
Why simpler models
Goal of climate modeling
Central claims
Outline
Hessian
Fisher information
Sloppy models
Metric space
Why sloppiness
Predictive models vs curve fitting
Bayesian inference
Mutual information
Adapting dimensionality
Jeffreys prior in dimension 4
Jeffreys prior in dimension 5
Jeffreys prior in dimension 26
Taught by
Santa Fe Institute