The synthesis of various information sources, including a priori domain knowledge, statistical assumptions, field data, and large-scale numerical models, is one of the key steps in building interpretable and predictive models for supporting critical decisions in science, engineering, medicine, and beyond. Typical examples can be found in oil/gas reservoir modeling, treatment of saltwater intrusion, medical imaging, tumor treatment, and aircraft design. Because of the computationally costly nature of the numerical models and stringent requirements on the accuracy of the statistical learning outcomes, multilevel and multi-fidelity methods provide a viable route for solving these model-based statistical learning tasks. This mini-symposium will bring together researchers working at the forefront of multilevel and multi-fidelity methods (and other relevant methods) intended to accelerate model-based statistical learning tasks.
16:30
Multilevel quasi-Monte Carlo methods for random elliptic eigenvalue problems
Alexander Gilbert | Ruprecht-Karls-Universität Heidelberg | Germany
Authors:
Alexander Gilbert | Ruprecht-Karls-Universität Heidelberg | Germany
Robert Scheichl | Heidelberg University | Germany
We study an elliptic eigenvalue problem with coefficients that depend on infinitely many stochastic parameters. The stochasticity in the coefficients causes the eigenvalues and eigenfunctions to also be stochastic, and so our goal is to compute the expectation of the minimal eigenvalue. In practice, to approximate this expectation one must:
1) truncate the stochastic dimension;
2) discretise the eigenvalue problem in space (e.g., by finite elements); and
3) apply a quadrature rule to estimate the expected value.
We will present a multilevel quasi-Monte Carlo method for approximating the expectation of the minimal eigenvalue, which is based on a hierarchy of finite element meshes and truncation dimensions. To improve the sampling efficiency over Monte Carlo we will use a quasi-Monte Carlo rule to generate the sampling points; quasi-Monte Carlo rules are deterministic (or quasi-random) quadrature rules that are well-suited to high-dimensional integration and can converge at a rate faster than Monte Carlo.
Often the distribution of the stochastic parameters is unknown, and one wants to incorporate knowledge given by data to infer details about these parameters. After detailing how to solve the forward problem, we will use a Bayesian framework to solve the inverse problem, which uses measurements of the eigenvalue to compute a posterior expectation.
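The three steps above, combined with the multilevel telescoping sum over mesh levels, can be sketched on a toy 1-D random eigenvalue problem. This is an illustrative sketch only, not the authors' implementation: the coefficient expansion, the unoptimised lattice generating vector, and all parameter choices are assumptions made for the example.

```python
import numpy as np

def min_eig(y, n):
    # Smallest eigenvalue of -u'' + a(x, y) u = lam * u on (0, 1),
    # discretised by central finite differences with n interior points.
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    # Truncated random coefficient: a(x, y) = 1 + sum_j y_j sin(j*pi*x) / j^2
    a = 1.0 + sum(y[j] * np.sin((j + 1) * np.pi * x) / (j + 1) ** 2
                  for j in range(len(y)))
    A = (np.diag(2.0 / h**2 + a)
         - np.diag(np.ones(n - 1) / h**2, 1)
         - np.diag(np.ones(n - 1) / h**2, -1))
    return np.linalg.eigvalsh(A)[0]

def lattice_points(N, dim, seed=0):
    # Randomly shifted rank-1 lattice rule; the generating vector here is
    # chosen at random, NOT an optimised QMC generating vector.
    rng = np.random.default_rng(seed)
    z = rng.integers(1, N, size=dim)
    shift = rng.random(dim)
    i = np.arange(N)[:, None]
    return ((i * z / N) + shift) % 1.0      # points in [0, 1)^dim

def mlqmc_expectation(levels=3, N=64, dim=4):
    # Telescoping estimator: E[lam_L] ~ E[lam_0] + sum_l E[lam_l - lam_{l-1}],
    # with each level-l difference estimated by the same QMC sample y.
    est = 0.0
    for l in range(levels):
        n_fine = 8 * 2**l                   # mesh size on level l
        pts = 2.0 * lattice_points(N, dim, seed=l) - 1.0  # map to [-1, 1)^dim
        diffs = []
        for y in pts:
            fine = min_eig(y, n_fine)
            coarse = min_eig(y, n_fine // 2) if l > 0 else 0.0
            diffs.append(fine - coarse)
        est += np.mean(diffs)
    return est
```

In a full MLQMC method the number of QMC points would shrink as the mesh is refined, since the level differences have decaying variance; here N is kept fixed for brevity.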
17:00
Multifidelity machine learning by the sparse grid combination technique
Peter Zaspel | Jacobs University | Germany
Authors:
Peter Zaspel | Jacobs University | Germany
Michael Griebel | University of Bonn | Germany
Helmut Harbrecht | University of Basel | Switzerland
Bing Huang | University of Basel | Switzerland
O Anatole von Lilienfeld | University of Basel | Switzerland
The solution of parametric partial differential equations or other parametric problems is a core component of, e.g., uncertainty quantification and inverse problems. Snapshot-based, non-intrusive techniques for solving parametric problems avoid the re-implementation of the corresponding solvers.
We report on ongoing work to solve parametric problems with a higher-dimensional parameter space by means of approximation in reproducing kernel Hilbert spaces. In the presence of regularization, this approach is equivalent to "kernel ridge regression" (KRR), a machine learning approach. Hence, results on the use of machine learning for the approximation of parametric problems will be discussed.
When high regularity is missing, or the regularity is unknown, the curse of dimensionality forces one to compute a large number of often very expensive simulation snapshots in order to reach a low approximation error with respect to the parameter space. This is computationally intractable. We have introduced a multi-fidelity KRR approach based on the sparse grid combination technique / multi-index approximation, which significantly reduces the required number of expensive simulations by adding coarser and coarser simulation snapshots.
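The multi-fidelity idea can be illustrated with a stripped-down, two-fidelity variant of KRR, in which a cheap low-fidelity surrogate is corrected by a KRR fit of the discrepancy learned from a few expensive samples. This is a simplification of the full combination technique described above; the models, kernels, and sample sizes below are hypothetical.

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-6):
    # Kernel ridge regression with a Gaussian kernel:
    # alpha = (K + lam * I)^{-1} y, prediction f(x) = sum_i alpha_i k(x, X_i).
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: np.exp(-gamma * (Xq[:, None] - X[None, :]) ** 2) @ alpha

# Two fidelities of a parametric quantity of interest (toy stand-ins):
hi = lambda x: np.sin(8 * x) + x            # "expensive" fine model
lo = lambda x: np.sin(8 * x)                # cheap coarse model

rng = np.random.default_rng(1)
X_lo = rng.random(200)                      # many cheap snapshots
X_hi = rng.random(20)                       # few expensive snapshots

f_lo = krr_fit(X_lo, lo(X_lo), gamma=50.0)  # surrogate of the coarse model
# Learn only the (smoother) discrepancy hi - lo from the scarce fine data:
f_d = krr_fit(X_hi, hi(X_hi) - f_lo(X_hi), gamma=10.0)
surrogate = lambda x: f_lo(x) + f_d(x)      # multi-fidelity prediction

Xt = np.linspace(0.0, 1.0, 100)
err = np.max(np.abs(surrogate(Xt) - hi(Xt)))
```

The point of the construction is that the discrepancy between fidelities is typically easier to learn than the fine model itself, so most of the sampling budget can be spent on the cheap level.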
17:30
Multi-Level Optimization Based Monte-Carlo Samplers for Large-Scale Inverse Problems
Chuntao Chen | Monash University | Australia
Authors:
Chuntao Chen | Monash University | Australia
Tiangang Cui | Monash University | Australia
Youssef Marzouk | Massachusetts Institute of Technology | United States
Zheng Wang | Massachusetts Institute of Technology | United States
The Markov Chain Monte Carlo (MCMC) method is one of the pillars of Bayesian inverse problems. However, this approach typically faces several challenges in large-scale inverse problems: classical MCMC algorithms rely on constructing a sequential Markov chain, which makes them hard to fully parallelise; it is often challenging to derive efficient transition kernels; and simulating the Markov chain can be computationally costly, as each posterior density evaluation involves expensive forward model solves. We present an integrated approach based on the multilevel Monte Carlo method and optimisation-based samplers, e.g., implicit sampling and randomise-then-optimise (RTO), to address these challenges. The use of optimisation-based samplers allows us to derive efficient and parallelisable MCMC or importance sampling estimators for solving inverse problems. With the help of multilevel Monte Carlo, we can further accelerate RTO and reduce the variance of the resulting estimators. We will demonstrate the efficacy of our approach on inverse problems governed by PDEs and ODEs.
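A minimal sketch of the optimisation-then-sampling idea, using a Laplace-type proposal as a simplified stand-in for implicit sampling / RTO on a hypothetical scalar inverse problem (all models and parameter values here are assumptions for illustration, not the authors' method):

```python
import numpy as np

# Toy Bayesian inverse problem: recover x from data d = G(x) + noise,
# with forward model G(x) = x**3, standard normal prior, noise sigma = 0.5.
G, d, sigma = (lambda x: x**3), 1.0, 0.5
neg_log_post = lambda x: 0.5 * ((G(x) - d) / sigma) ** 2 + 0.5 * x**2

# Optimisation step (the expensive part RTO-style samplers amortise):
# locate the posterior mode and its local curvature numerically.
xs = np.linspace(-3.0, 3.0, 20001)
x_map = xs[np.argmin(neg_log_post(xs))]
h = 1e-4
hess = (neg_log_post(x_map + h) - 2 * neg_log_post(x_map)
        + neg_log_post(x_map - h)) / h**2

# Gaussian proposal centred at the mode; all samples and importance
# weights are computed independently, hence trivially parallelisable.
rng = np.random.default_rng(0)
std = 1.0 / np.sqrt(hess)
x = rng.normal(x_map, std, size=20000)
log_w = -neg_log_post(x) + 0.5 * ((x - x_map) / std) ** 2
w = np.exp(log_w - log_w.max())             # stabilised unnormalised weights
post_mean = np.sum(w * x) / np.sum(w)       # self-normalised IS estimator
```

In the multilevel version, the same weighting idea is applied across a hierarchy of forward-model discretisations so that most samples only require cheap coarse solves.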
18:00
Hierarchically Structured Transport Maps for Inference Problems
Michael Brennan | Massachusetts Institute of Technology | United States
Authors:
Michael Brennan | Massachusetts Institute of Technology | United States
Youssef Marzouk | Massachusetts Institute of Technology | United States
We will present a new methodology for building structured transport maps for Bayesian inference problems. Bayesian inference requires the evaluation of integrals with respect to some target distribution whose density is only known up to a normalizing constant. One approach that has recently gained popularity, as both an alternative and a complement to standard sampling strategies such as MCMC, is to form a deterministic mapping---a transport map---between the target density and a simple reference density that can be easily sampled, e.g., a standard Gaussian. We introduce a method for building transport maps with a multi-level structure based on a generalization of hierarchical matrices. First, we will discuss links between hierarchical matrices and the structure of linear transport maps coupling normal random variables. A central point is that hierarchical structure in a matrix imposes a domain decomposition-like structure on the transport map. We will then motivate nonlinear generalizations of the hierarchical structure used in our framework and formulate an algorithm for leveraging this structure to learn the transport map efficiently. We will demonstrate the performance of our algorithm on several high-dimensional inference problems.
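The link between triangular factorisations and linear transport maps mentioned above can be illustrated for a Gaussian target: the Cholesky factor of the covariance is a lower-triangular map pushing a standard normal to the target, and its off-diagonal decay is exactly the kind of structure a hierarchical (H-matrix) format exploits. The covariance model and sizes below are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

# For a Gaussian target N(0, C), the lower-triangular (Knothe-Rosenblatt)
# transport map from a standard normal reference is x = L z with C = L L^T.
n = 64
idx = np.arange(n)
# Covariance with fast off-diagonal decay (a Markov-like 1-D field), so
# well-separated variables are nearly conditionally independent:
C = 0.5 ** np.abs(idx[:, None] - idx[None, :])
L = np.linalg.cholesky(C)                   # the linear transport map

# Push reference samples through the map; the sample covariance of the
# transported points should recover C.
rng = np.random.default_rng(0)
z = rng.standard_normal((100000, n))
x = z @ L.T
C_hat = np.cov(x, rowvar=False)
cov_err = np.max(np.abs(C_hat - C))

# Structure: entries of L far below the diagonal are negligible, which is
# what a hierarchical/banded representation of the map compresses.
off_band = np.max(np.abs(np.tril(L, -5)))
```

Nonlinear hierarchical maps generalise this picture by replacing the triangular matrix with a triangular function whose components inherit the same sparsity pattern.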