This minisymposium (MS) explores ongoing foundational (mathematical and statistical) work alongside recent algorithmic advances in the use of deep learning (DL) algorithms for computational uncertainty quantification (UQ).
Contributions address in particular:
* approximation rate estimates for DL algorithms applied to solution manifolds of high-dimensional, parametric PDEs,
* DL-accelerated sampling algorithms for UQ in inverse problems,
* invertible DL surrogates of high-dimensional probability densities,
* DL architectures suitable for UQ,
* physics-informed DL algorithms for UQ in parametric PDE models,
* expressivity of DL algorithms for PDE-constrained optimization and Bayesian inversion.
10:30
Expressive power of neural networks in Bayesian UQ
Lukas Herrmann | ETH Zürich | Switzerland
Authors:
Lukas Herrmann | ETH Zürich | Switzerland
Christoph Schwab | ETH Zürich | Switzerland
Jakob Zech | Massachusetts Institute of Technology | United States
For Bayesian inverse problems whose data-to-response maps are given by well-posed PDEs subject to uncertain parametric or function-space input data, we establish, under certain assumptions, expression rate bounds by deep ReLU neural networks for the map relating observation data to the Bayesian estimate of the quantity of interest.
11:00
Deep neural network expression rate analysis for forward and Bayesian inverse PDE UQ
Joost Opschoor | ETH Zürich | Switzerland
Authors:
Joost Opschoor | ETH Zürich | Switzerland
Christoph Schwab | ETH Zürich | Switzerland
Jakob Zech | Massachusetts Institute of Technology | United States
For well-posed operator equations subject to distributed uncertain input data from function spaces, we analyze the expression rates by deep neural networks of several maps which arise in optimization and in uncertainty quantification (UQ). Representing the uncertain input data in suitable (Riesz or Schauder) bases converts the problem to a countably-parametric, deterministic optimization problem. Examples comprise elliptic and parabolic PDEs with uncertain coefficients and on uncertain domains. For forward PDE-constrained UQ, by bounding the DNN approximation error we show that deep ReLU networks express parametric solution manifolds without the curse of dimensionality. For Bayesian inverse UQ, we show exponential expressivity of deep ReLU networks for the data-to-QoI map and for the Bayesian posterior density.
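The basis-representation step described in this abstract can be made concrete in the standard affine-parametric model problem (a common textbook example, not necessarily the one treated in the talk):

```latex
% Uncertain diffusion coefficient expanded in a basis (\psi_j),
% e.g. a Riesz or Schauder basis, with parameters y_j in [-1,1]:
a(x, y) = \bar{a}(x) + \sum_{j \ge 1} y_j \, \psi_j(x),
\qquad y = (y_j)_{j \ge 1} \in [-1, 1]^{\mathbb{N}} .
% The elliptic model problem then becomes countably-parametric
% and deterministic in the parameter sequence y:
- \nabla \cdot \big( a(x, y) \nabla u(x, y) \big) = f(x) \quad \text{in } D,
\qquad u(\cdot, y)\big|_{\partial D} = 0 .
```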
11:30
A theoretical analysis of deep neural networks and parametric PDEs
Reinhold Schneider | TU Berlin | Germany
Authors:
Gitta Kutyniok | TU Berlin | Germany
Philipp Petersen | University of Oxford | United Kingdom
Mones Raslan | TU Berlin | Germany
Reinhold Schneider | TU Berlin | Germany
We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, we exploit the inherent low-dimensionality of the solution manifold, without any knowledge of its concrete shape, to obtain approximation rates significantly superior to those provided by classical approximation results. This low dimensionality guarantees the existence of a reduced basis. Then, for a large variety of parametric partial differential equations, we construct neural networks that yield approximations of the parametric maps which do not suffer from the curse of dimensionality and depend essentially only on the size of the reduced basis.
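The low-dimensionality of the solution manifold that underpins this result can be observed numerically: the singular values of a snapshot matrix of parametric solutions decay rapidly, which is exactly what guarantees a small reduced basis. The model problem and discretization below are illustrative choices of the editor, not code from the talk:

```python
import numpy as np

def solve_diffusion(mu, n=100):
    """Finite-difference solve of -(a(x; mu) u')' = 1 on (0,1), u(0)=u(1)=0,
    with illustrative coefficient a(x; mu) = 1 + mu * sin(pi * x)."""
    h = 1.0 / (n + 1)
    x_mid = (np.arange(n + 1) + 0.5) * h      # coefficient at cell midpoints
    a = 1.0 + mu * np.sin(np.pi * x_mid)
    # standard tridiagonal stiffness matrix for the variable-coefficient scheme
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n))

# snapshot matrix over a one-dimensional parameter range
mus = np.linspace(0.1, 2.0, 50)
S = np.column_stack([solve_diffusion(mu) for mu in mus])

# singular values reveal the low dimensionality of the solution manifold
s = np.linalg.svd(S, compute_uv=False)
print(s[:8] / s[0])   # rapid decay: a small reduced basis suffices
```

The parametric solution map here is analytic in `mu`, so the singular values decay exponentially; a handful of reduced-basis modes already capture the snapshot set to high accuracy.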
12:00
A deep surrogate approach to efficient Bayesian inversion in PDE and integral equation models
Teo Deveney | University of Bath | United Kingdom
Author:
Teo Deveney | University of Bath | United Kingdom
This talk discusses a deep learning approach to performing Bayesian inference efficiently in PDE and integral equation models over potentially high-dimensional parameter spaces. We first describe a deep learning approach to approximating the solutions of Fredholm and Volterra integral equations of the first and second kind. We then use this to construct a deep surrogate approach for efficiently sampling from Bayesian posterior distributions whose likelihood functions depend on the solutions of PDEs or integral equations. This approach allows accurate surrogates to be constructed in significantly higher dimensions than is possible with classical techniques, making Bayesian inference over large parameter spaces tractable. Numerical examples based on real-world problems assess the accuracy of the integral equation solver and demonstrate the effectiveness of the approach for Bayesian inverse problems.
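The surrogate-accelerated sampling idea common to this line of work can be sketched in a few lines: fit a cheap surrogate of the expensive forward map offline, then run MCMC against the surrogate likelihood. Here a polynomial fit stands in for the deep surrogate, and the forward map, prior, and sampler settings are illustrative assumptions, not details from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Expensive" forward map (stand-in for a PDE / integral-equation solve).
def forward(theta):
    return np.exp(-theta) + 0.1 * np.sin(3.0 * theta)

# Synthetic observation from a known parameter (noise-free for a clean check).
theta_true, sigma = 0.8, 0.05
y_obs = forward(theta_true)

# Cheap surrogate fitted to offline forward evaluations; in the talk's
# setting a deep network would play this role.
grid = np.linspace(-2.0, 2.0, 40)
coeffs = np.polyfit(grid, forward(grid), deg=8)
surrogate = lambda t: np.polyval(coeffs, t)

def log_post(theta):
    # Gaussian likelihood evaluated via the surrogate + standard normal prior
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2 - 0.5 * theta**2

# Random-walk Metropolis using only cheap surrogate evaluations.
theta, lp, samples = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])   # discard burn-in
print(samples.mean())                # concentrates near theta_true
```

Once the surrogate is trained, each MCMC step costs a surrogate evaluation rather than a full solve, which is what makes long chains over large parameter spaces affordable.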