Towards Third Wave AI: Interpretable, Robust, and Trustworthy Machine Learning
Inside Livermore Lab via YouTube
Overview
Explore cutting-edge advancements in artificial intelligence in this 49-minute talk on interpretable, robust, and trustworthy machine learning. Delve into new theories and scalable numerical algorithms for complex dynamical systems, aimed at developing more secure and reliable AI technologies for real-time prediction, surveillance, and defense applications. Learn about novel neural networks that can learn functionals and nonlinear operators with simultaneous uncertainty estimates, and discover multi-fidelity, federated, Bayesian neural operator network architectures in scientific machine learning. Examine how integrating physics knowledge with AI yields interpretable models for science and engineering, illustrated through two data-science case studies: predicting the COVID-19 pandemic with uncertainties and data-driven causal model discovery for personalized prediction in Alzheimer's disease. Presented by Professor Guang Lin of Purdue University, the talk offers valuable insights into the future of AI and its applications in enhancing national security and in modeling complex dynamical systems.
Syllabus
DDPS | Towards Third Wave AI: Interpretable, Robust, and Trustworthy Machine Learning
Taught by
Inside Livermore Lab