This MS will explore ongoing foundational (mathematical and statistical) work alongside recent algorithmic advances in the use of Deep Learning (DL) algorithms in computational Uncertainty Quantification (UQ).
Contributions address in particular:
* approximation rate estimates for DL algorithms applied to solution manifolds of high-dimensional, parametric PDEs,
* DL-accelerated sampling algorithms for UQ in inverse problems,
* invertible DL surrogates of high-dimensional probability densities,
* DL architectures suitable for UQ,
* physics-informed DL algorithms for learning UQ in parametric PDE models,
* expressivity of DL algorithms for PDE constrained optimization and Bayesian inversion.
14:30
Learning Deep Neural Networks via Hierarchical Low Rank Tensors
Lars Grasedyck | RWTH Aachen | Germany
Author:
Lars Grasedyck | RWTH Aachen | Germany
In this talk we will introduce hierarchical low rank tensor formats (Hierarchical Tucker, Tensor Trains, Tensor Network States) for the approximation and representation of multivariate functions. A full tensor, i.e. a $d$-variate interpolation of a function on a Cartesian grid with $n$ grid points per axis, has complexity $O(n^d)$ and is thus exponential in the dimension $d$: the so-called curse of dimensionality. The curse can only be avoided by assuming further structural regularity of the underlying function or discrete tensor. We assume low rank and explain why this is a reasonable assumption, e.g. for solutions of high-dimensional elliptic equations.
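For orientation, a tensor train decomposition can be computed from a full tensor by sequential truncated SVDs. The sketch below (our illustration in NumPy, not the speaker's code) implements this TT-SVD and checks it on $\sin(x+y+z)$, whose TT ranks are at most 2:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD).
    Returns 3-way cores G_k of shape (r_{k-1}, n_k, r_k); storage drops
    from O(n^d) for the full tensor to O(d * n * r^2)."""
    dims, d = tensor.shape, tensor.ndim
    cores, r_prev = [], 1
    C = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # adaptive rank truncation
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                 # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back to the full tensor (verification only)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# f(x,y,z) = sin(x+y+z) = sin(x)cos(y+z) + cos(x)sin(y+z): TT ranks <= 2
x = np.linspace(0.0, 1.0, 20)
T = np.sin(x[:, None, None] + x[None, :, None] + x[None, None, :])
cores = tt_svd(T, eps=1e-12)
print([G.shape for G in cores])                   # small ranks despite 20^3 entries
print(np.max(np.abs(tt_reconstruct(cores) - T)))  # ~ machine precision
```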
In the second part of the talk we consider the tensorization of deep neural network parameters, which can be learned via approximation and minimization of hierarchical low rank tensors, for which we have quasi-optimal algorithms. Low rank representability will be considered for a simple model case and analysed numerically for more general cases.
Our aim is to learn initial guesses for a subsequent optimization of the deep neural network, e.g. by stochastic gradient iterations.
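As a rough illustration of this initialization idea (a hypothetical sketch under our own assumptions, not the authors' algorithm), one could reshape a dense weight matrix into a higher-order tensor, compress it with the TT-SVD above, and take the low rank reconstruction as the starting point for stochastic gradient descent:

```python
# Hypothetical initializer reusing tt_svd / tt_reconstruct from the
# sketch above; the 64x64 layer and the (8,8,8,8) reshape are
# illustrative choices, not the speaker's setup.
rng = np.random.default_rng(0)
W_target = sum(np.outer(rng.standard_normal(64), rng.standard_normal(64))
               for _ in range(3))                 # stand-in weights of rank 3
cores = tt_svd(W_target.reshape(8, 8, 8, 8), eps=1e-8)
W_init = tt_reconstruct(cores).reshape(64, 64)    # initial guess for SGD
n_tt = sum(G.size for G in cores)
print(n_tt, "TT parameters vs", W_target.size, "dense; error",
      np.max(np.abs(W_init - W_target)))
```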
15:00
Posterior density approximation for Gaussian priors
Jakob Zech | Massachusetts Institute of Technology | United States
Authors:
Dinh Dung | Vietnam National University | Viet Nam
Van Kien Nguyen | University of Bonn | Germany
Christoph Schwab | ETH Zürich | Switzerland
Jakob Zech | Massachusetts Institute of Technology | United States
We consider the approximation of posterior densities for Bayesian inverse problems with Gaussian priors in Uncertainty Quantification. Sparsity results for the posterior are provided in the case where the forward map is given by the solution of certain well-posed parametric PDEs. These results in turn yield expression rate bounds for the approximation of the posterior by deep neural networks.
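For context, a standard formulation of this setting (notation ours, not necessarily the authors'): with a Gaussian prior $\pi_0 = \mathcal{N}(0, \mathcal{C})$, a parametric-PDE forward map $G$ and data $y = G(u) + \eta$ with noise $\eta \sim \mathcal{N}(0, \Gamma)$, the posterior density is

$$\pi^y(u) \propto \exp\left(-\tfrac{1}{2}\left\|\Gamma^{-1/2}\bigl(y - G(u)\bigr)\right\|^2\right)\,\pi_0(u),$$

and it is the map $u \mapsto \pi^y(u)$ (equivalently, the density of the posterior with respect to the prior) whose deep neural network approximation the expression rate bounds concern.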
15:30
A multi-level procedure for enhancing accuracy of machine learning algorithms
Roberto Molinaro | ETH Zürich | Switzerland
Authors:
Kjetil Lye | ETH Zürich | Switzerland
Siddhartha Mishra | ETH Zürich | Switzerland
Roberto Molinaro | ETH Zürich | Switzerland
We propose a multi-level method to increase the accuracy of machine learning algorithms for approximating observables in scientific computing, particularly those arising in systems modelled by differential equations. The algorithm relies on judiciously combining a large number of computationally cheap training samples on coarse resolutions with a few expensive training samples on fine grid resolutions. Theoretical arguments for lowering the generalization error, based on reducing the variance of the underlying maps, are provided, and numerical evidence indicating significant gains over the underlying single-level machine learning algorithms is presented. Moreover, we apply the multi-level algorithm in the context of forward uncertainty quantification and observe a considerable speed-up over competing algorithms.
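A minimal sketch of the two-level version of this idea (our simplification, with a generic polynomial regressor in place of the neural networks and a synthetic observable in place of a PDE solve): learn the coarse-resolution observable from many cheap samples, learn the small-variance fine-minus-coarse correction from few expensive samples, and sum the two predictions.

```python
import numpy as np

def observable(y, h):
    """Synthetic stand-in for an observable computed at grid size h;
    the h-dependent term mimics an O(h) discretization error."""
    return np.sin(2 * np.pi * y) + h * np.cos(5 * y)

def fit(X, Y, deg):
    """Generic least-squares polynomial regressor standing in for the
    machine learning model trained at each level."""
    c = np.polyfit(X, Y, deg)
    return lambda x: np.polyval(c, x)

rng = np.random.default_rng(1)
h_coarse, h_fine = 0.1, 0.01
X_many = rng.uniform(0.0, 1.0, 2000)   # many cheap coarse-grid samples
X_few = rng.uniform(0.0, 1.0, 50)      # few expensive fine-grid samples

model_coarse = fit(X_many, observable(X_many, h_coarse), deg=9)
# the correction fine - coarse has small variance, so few samples suffice
model_corr = fit(X_few,
                 observable(X_few, h_fine) - observable(X_few, h_coarse),
                 deg=3)

predict = lambda x: model_coarse(x) + model_corr(x)   # two-level estimator
x_test = np.linspace(0.0, 1.0, 200)
print("two-level sup error:",
      np.max(np.abs(predict(x_test) - observable(x_test, h_fine))))
```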