Polynomial Approximation of Random PDEs by Discrete Least Squares with Observations
Hausdorff Center for Mathematics via YouTube

Overview
Explore a comprehensive lecture on polynomial approximation of random partial differential equations (PDEs) using discrete least squares with observations in random points. Delve into the general problem F(u,y)=0, where u represents the unknown solution and y denotes a set of uncertain parameters. Examine cases where the parameter-to-solution map u(y) is smooth, but y can be high-dimensional or even infinite-dimensional. Focus on scenarios where F is a partial differential operator, u is a Hilbert space-valued function, and y is a distributed, space- and/or time-varying random field. Learn about reconstructing the parameter-to-solution map u(y) from noise-free or noisy observations in random points using discrete least squares on polynomial spaces associated with downward closed index sets. Understand the relevance of the noise-free case in constructing metamodels for computer experiment outputs, particularly in PDEs with random parameters. Investigate the stability of discrete least squares on random points and explore error bounds in expectation and probability for a priori chosen index sets. Discover theoretical bounds on the minimal error achievable when optimally choosing the index set for a given sample among all possible downward closed index sets of given cardinality. Gain insights into adaptive-type algorithms aiming to discover the optimal polynomial space.

Syllabus
Fabio Nobile: Polynomial Approximation of Random PDEs by discrete least squares with observations in random points
Taught by
Hausdorff Center for Mathematics
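The core technique covered in the lecture — discrete least squares on a downward closed polynomial space, fitted from noise-free observations at random points — can be illustrated with a minimal sketch. This is an assumed toy setup, not material from the lecture itself: a smooth two-dimensional map stands in for the PDE's parameter-to-solution map, a total-degree index set plays the role of the downward closed set, and NumPy's Legendre routines supply the basis.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def total_degree_set(dim, degree):
    # Downward closed index set: all multi-indices alpha with |alpha| <= degree
    return [a for a in product(range(degree + 1), repeat=dim)
            if sum(a) <= degree]

def design_matrix(points, indices):
    # Tensorized Legendre basis evaluated at the sample points
    n, _ = points.shape
    cols = []
    for alpha in indices:
        col = np.ones(n)
        for d, deg in enumerate(alpha):
            c = np.zeros(deg + 1)
            c[deg] = 1.0            # select the degree-`deg` Legendre polynomial
            col *= legval(points[:, d], c)
        cols.append(col)
    return np.column_stack(cols)

# Hypothetical smooth parameter-to-quantity map (stand-in for a PDE output)
rng = np.random.default_rng(0)
u = lambda y: np.exp(-(y[:, 0] ** 2 + 0.5 * y[:, 1]))

dim, degree, n_samples = 2, 6, 200          # oversample: n_samples >> #basis
indices = total_degree_set(dim, degree)
Y = rng.uniform(-1, 1, size=(n_samples, dim))   # random points, uniform on [-1,1]^2
A = design_matrix(Y, indices)
coef, *_ = np.linalg.lstsq(A, u(Y), rcond=None)  # discrete least-squares fit

# Evaluate the resulting polynomial metamodel on an independent test set
Yt = rng.uniform(-1, 1, size=(1000, dim))
err = np.max(np.abs(design_matrix(Yt, indices) @ coef - u(Yt)))
print(f"max test error: {err:.2e}")
```

Because the map is smooth and the sample size comfortably exceeds the dimension of the polynomial space, the random-point least-squares problem is stable with high probability and the fit is close to the best approximation in that space — the behavior the lecture's error bounds quantify.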