Towards Third Wave AI: Interpretable, Robust Trustworthy Machine Learning

Inside Livermore Lab via YouTube

Overview

Explore cutting-edge advancements in artificial intelligence through this 49-minute talk on interpretable, robust, and trustworthy machine learning. Delve into new theories and scalable numerical algorithms for complex dynamical systems, aimed at developing more secure and reliable AI technologies for real-time prediction, surveillance, and defense applications. Learn about novel neural networks that can learn functionals and nonlinear operators with simultaneous uncertainty estimates, and discover multi-fidelity, federated, Bayesian neural operator network architectures for scientific machine learning. Examine the integration of physics knowledge with AI to create interpretable models for science and engineering, illustrated through two data-science case studies: predicting the COVID-19 pandemic with uncertainties and data-driven causal model discovery for personalized prediction in Alzheimer's disease. Presented by Professor Guang Lin from Purdue University, this talk offers valuable insights into the future of AI and its applications in enhancing national security and modeling complex dynamical systems.
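The talk itself does not include code. As a hedged illustration of the "neural networks that learn nonlinear operators with uncertainty estimates" mentioned above, the sketch below shows a DeepONet-style branch/trunk network in PyTorch, with Monte Carlo dropout standing in for a full Bayesian treatment. The class name, layer sizes, and the predict_with_uncertainty helper are illustrative assumptions, not material taken from the seminar.

```python
# A minimal sketch (not from the talk) of a DeepONet-style operator network,
# one common architecture for learning nonlinear operators G: u(x) -> s(y).
# Monte Carlo dropout is used here as a stand-in for a Bayesian neural operator;
# all names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DeepONetSketch(nn.Module):
    def __init__(self, n_sensors: int = 100, width: int = 64, p_drop: float = 0.1):
        super().__init__()
        # Branch net: encodes the input function u sampled at n_sensors points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(),
            nn.Dropout(p_drop),
            nn.Linear(width, width),
        )
        # Trunk net: encodes the query location y where the output is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Dropout(p_drop),
            nn.Linear(width, width), nn.Tanh(),
        )

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # G(u)(y) is approximated by the inner product of branch and trunk features.
        b = self.branch(u_sensors)   # shape: (batch, width)
        t = self.trunk(y)            # shape: (batch, width)
        return (b * t).sum(dim=-1, keepdim=True)


def predict_with_uncertainty(model, u_sensors, y, n_samples: int = 50):
    # Keep dropout active at inference time to draw approximate posterior samples.
    model.train()
    with torch.no_grad():
        samples = torch.stack([model(u_sensors, y) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)


if __name__ == "__main__":
    model = DeepONetSketch()
    u = torch.randn(8, 100)   # 8 input functions sampled at 100 sensor points
    y = torch.rand(8, 1)      # one query coordinate per function
    mean, std = predict_with_uncertainty(model, u, y)
    print(mean.shape, std.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
```

The branch network encodes the input function at fixed sensor locations, the trunk network encodes the query coordinate, and their inner product approximates the operator output; repeated stochastic forward passes give a rough predictive mean and spread, loosely analogous to the simultaneous uncertainty estimates described in the talk.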

Syllabus

DDPS | Towards Third Wave AI: Interpretable, Robust Trustworthy Machine Learning

Taught by

Inside Livermore Lab
