Recent years have seen a flourishing of techniques for incorporating data into models, whether for the solution of inverse problems or for approximation purposes. These include domain-aware machine learning techniques, dynamic mode decomposition, and data-driven model order reduction methods. This minisymposium aims to provide a venue for young researchers focusing on the theoretical analysis, development, and application of these methodologies.
14:00
- NEW - A data-free likelihood-informed subspace for dimensionality reduction of Bayesian inverse problems
Olivier Zahm | Inria | France
Authors:
Olivier Zahm | Inria | France
Daniele Bigoni | Massachusetts Institute of Technology | United States
Clémentine Prieur | Grenoble Alpes University | France
Youssef Marzouk | Massachusetts Institute of Technology | United States
A high-dimensional Bayesian inverse problem has a low effective dimension when the data are informative only on a low-dimensional subspace. In this talk, we show how to use the Fisher information matrix to detect such a subspace before the data are observed. The proposed approach makes it possible to control the approximation error (in expectation over the data) of the posterior distribution. We also present sampling strategies that exploit the informed subspace to draw samples efficiently from the posterior distribution.
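A minimal sketch of this kind of construction, for a linear-Gaussian model where the Fisher information is indeed data-free: all names, dimensions, and covariances below are illustrative assumptions, not the speakers' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian setup: y = G x + noise, Gaussian prior
# N(0, C_pr) and noise covariance C_obs (all dimensions made up).
d, m, r = 20, 5, 3                     # parameter dim, data dim, subspace rank
G = rng.standard_normal((m, d))
C_pr = np.eye(d)
C_obs = 0.1 * np.eye(m)

# Fisher information of the likelihood; for a linear-Gaussian model it
# does not depend on the observed data, hence "data-free".
H = G.T @ np.linalg.inv(C_obs) @ G

# Prior-preconditioned eigenproblem: the dominant eigenvectors span the
# directions where the data are most informative relative to the prior.
L = np.linalg.cholesky(C_pr)
eigvals, eigvecs = np.linalg.eigh(L.T @ H @ L)   # ascending order
order = np.argsort(eigvals)[::-1]
U_r = L @ eigvecs[:, order[:r]]        # basis of the likelihood-informed subspace

print(eigvals[order[:r]])              # dominant eigenvalues
```

The eigenvalue decay then indicates how well a rank-r posterior approximation can perform.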
14:30
A registration method for Model Order Reduction: data compression and geometry reduction
Tommaso Taddei | INRIA | France
Author:
Tommaso Taddei | INRIA | France
We propose a general (i.e., independent of the underlying equation) registration method for parameterized Model Order Reduction. The approach relies on a set of snapshots to determine a parameterized mapping that is bijective for all values of the parameter: the mapping is designed to recast the original problem in a form that is more amenable to linear compression methods. We apply the registration procedure, in combination with a linear compression method, to devise low-dimensional representations of solution manifolds with slowly decaying Kolmogorov N-widths; we also consider the application to problems in parameterized geometries. We present a theoretical result that establishes the mathematical rigor of the registration procedure, and numerical results for several two-dimensional problems that empirically demonstrate the effectiveness of our proposal.
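The effect of registration on linear compressibility can be illustrated with a toy one-dimensional example (not the speaker's method: here the registration mapping is known analytically rather than learned from snapshots): a parameterized traveling front has a slowly decaying N-width, but after aligning the snapshots with a parameter-dependent shift the manifold becomes essentially one-dimensional and POD/SVD compression succeeds.

```python
import numpy as np

# Toy snapshot set: a sharp front u(x; mu) = tanh((x - mu)/eps) whose
# position depends on the parameter mu.
x = np.linspace(-1.0, 1.0, 200)
mus = np.linspace(-0.5, 0.5, 50)

def u(x, mu):
    return np.tanh((x - mu) / 0.05)

# Raw snapshots vs. snapshots registered with the (here, known) mapping
# x -> x + mu, which aligns all fronts at the origin.
snapshots = np.stack([u(x, mu) for mu in mus], axis=1)
registered = np.stack([u(x + mu, mu) for mu in mus], axis=1)

sv_raw = np.linalg.svd(snapshots, compute_uv=False)
sv_reg = np.linalg.svd(registered, compute_uv=False)

# Relative energy captured by a rank-3 linear approximation.
def energy(s):
    return np.sum(s[:3] ** 2) / np.sum(s ** 2)

print(energy(sv_raw), energy(sv_reg))
```

After registration the snapshot matrix is (numerically) rank one, so three modes capture essentially all of the energy, whereas the unregistered front manifold needs many more modes.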
15:00
PLS-based dimension reduction for uncertainty quantification
Iason Papaioannou | Engineering Risk Analysis Group, Technische Universität München | Germany
Authors:
Iason Papaioannou | Engineering Risk Analysis Group, Technische Universität München | Germany
Max Ehre | Engineering Risk Analysis Group, Technische Universität München | Germany
Daniel Straub | Engineering Risk Analysis Group, Technische Universität München | Germany
Partial least squares (PLS) analysis is a statistical method for dimension reduction of the independent variables in the context of multivariate regression. Based on a set of data from the independent and response variables, PLS identifies latent components in the form of linear combinations of the original variables that maximize their covariance with the response variables. We discuss PLS in the context of surrogate model construction for uncertainty quantification. In particular, we combine PLS with polynomial chaos expansions (PCE) to build surrogates in very high-dimensional input spaces. The derived surrogate model inherits the simple post-processing capabilities of standard PCEs, hence enabling evaluation of statistical moments and variance-based sensitivity indices through algebraic manipulation of the expansion coefficients. We demonstrate the performance of the method through numerical examples in high-dimensional input spaces.
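A compact sketch of the two-stage idea, with a hand-rolled single-component PLS step and an ordinary polynomial fit standing in for the PCE (the test function, dimensions, and polynomial degree are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical high-dimensional model whose response varies along one
# latent direction w_true (all names/dimensions made up for illustration).
d, n = 100, 2000
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)
X = rng.standard_normal((n, d))
s = X @ w_true
y = 3.0 * s + s**2                     # nonlinear in the latent variable

# Step 1: one PLS component. For a scalar response, the weight vector
# maximizing the covariance of the scores t = X w with y is X^T y,
# normalized (NIPALS with a single component).
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                             # latent scores

# Step 2: fit a low-order polynomial in the scores, a stand-in for the
# PCE surrogate built in the reduced space.
coeffs = np.polyfit(t, y, deg=2)
y_hat = np.polyval(coeffs, t)

rel_err = np.linalg.norm(y - y_hat) / np.linalg.norm(y)
print(rel_err)
```

A 100-dimensional regression problem is thus reduced to a one-dimensional polynomial fit; moments and sensitivity indices of the surrogate then follow from the expansion coefficients, as the abstract notes.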
15:30
Kernel-based surrogate models for UQ
Gabriele Santin | Center for Information and Communication Technology, Fondazione Bruno Kessler | Italy
Author:
Gabriele Santin | Center for Information and Communication Technology, Fondazione Bruno Kessler | Italy
In this presentation we discuss the application of kernel-based approximation methods to obtain efficient surrogates to be employed in Uncertainty Quantification tasks.
These methods require only arbitrary input-output samples of the full model, possibly at scattered locations and in high-dimensional input and output spaces.
We discuss two classes of techniques: methods that approximate the full model, understood as an unknown target function, so that the approximant can be employed to accelerate a Monte Carlo simulation; and data-based quadrature methods, which instead aim directly at approximating the desired integrals.
In particular, we present recent results on greedy methods that aim to minimize the number of queries to the expensive full model, and discuss some convergence results.
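A greedy kernel surrogate can be sketched as follows. This is the f-greedy variant, which selects the candidate point with the largest current residual; it assumes all candidate values are available for illustration (a truly query-minimizing scheme such as P-greedy would rank candidates by the kernel power function instead). Kernel, length scale, and test function are all made-up choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel(A, B, ell=0.3):
    # Gaussian (RBF) kernel matrix between point sets A and B.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ell**2))

def expensive_model(X):
    # Stand-in for the full model.
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

X_cand = rng.uniform(-1, 1, size=(400, 2))
y_cand = expensive_model(X_cand)       # for the sketch only: f-greedy
                                       # needs residuals at all candidates

idx = [0]                              # start from an arbitrary point
for _ in range(40):
    # Fit the kernel interpolant on the currently selected centers
    # (small jitter for numerical stability).
    K = kernel(X_cand[idx], X_cand[idx])
    coef = np.linalg.solve(K + 1e-8 * np.eye(len(idx)), y_cand[idx])
    resid = np.abs(y_cand - kernel(X_cand, X_cand[idx]) @ coef)
    j = int(np.argmax(resid))
    if resid[j] < 1e-6:                # residual already negligible
        break
    idx.append(j)

err = float(np.max(resid))             # max residual of the last surrogate
print(len(idx), err)
```

A few dozen greedily chosen centers already drive the residual well below the function's unit amplitude, which is the kind of query-efficiency the talk's convergence results quantify.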