This minisymposium targets research on adaptive and efficient sampling methods for heterogeneous problems that do not depend smoothly on random (model) parameters. Uncertainty quantification for such problems, especially those exhibiting discontinuities, is notoriously difficult to carry out efficiently with existing methods. Due to the lack of regularity, usually only sampling-based methods remain a robust alternative. However, these methods may converge slowly unless combined with suitable acceleration techniques, such as variance reduction. Even then, a method's performance may still be reduced compared to the smooth case, as demonstrated, for instance, in the context of multi-level Monte Carlo methods for non-smooth functions [Schwab & Mishra; Krumscheid et al.]. The challenges of heterogeneous problems have been demonstrated repeatedly for different classes of methods, including localized generalized polynomial chaos using wavelets [Le Maître & Knio] or multi-elements [Wan & Karniadakis]. Common to these methods is that they require a problem-dependent adaptation of the sampling procedure in the vicinity of heterogeneous features. Here, adaptivity is to be understood in a wide sense, ranging from machine-learning approaches for identifying parameterizations or response functions to discontinuity tracking. In this minisymposium we will discuss techniques that combine such adaptivity with efficient sampling algorithms.
16:30
An adaptive minimum spanning tree method for UQ of discontinuous responses
Benjamin Sanderse | Centrum Wiskunde & Informatica | Netherlands
Author:
Benjamin Sanderse | Centrum Wiskunde & Informatica | Netherlands
A novel approach for non-intrusive uncertainty propagation is proposed. Our approach overcomes a limitation of many traditional methods, such as generalised polynomial chaos, which may lack sufficient accuracy when the quantity of interest depends discontinuously on the input parameters. As a remedy we propose an adaptive sampling algorithm based on minimum spanning trees, combined with a domain decomposition method based on support vector machines. The minimum spanning tree determines new sample locations based on both the probability density of the input parameters and the gradient of the quantity of interest. The support vector machine efficiently decomposes the random space into multiple elements, avoiding the appearance of Gibbs phenomena near discontinuities. On each element, local approximations are constructed by means of least orthogonal interpolation, which yields stable interpolation on the unstructured sample set. The resulting minimum spanning tree multi-element method requires no prior knowledge of the behaviour of the quantity of interest and automatically detects whether discontinuities are present. We present several numerical examples that demonstrate the accuracy, efficiency, and generality of the method.
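The core refinement loop described in the abstract can be illustrated with a minimal sketch. This is not the speaker's implementation: the scoring rule (edge length times input density times jump in the quantity of interest) and the names `mst_edges` and `refine` are our own illustrative choices; the abstract additionally involves support vector machines and least orthogonal interpolation, which are omitted here.

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm: edges of the Euclidean minimum spanning tree
    over an (n, d) array of sample points."""
    points = np.asarray(points, float)
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    dist = np.linalg.norm(points - points[0], axis=1)
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        # nearest unvisited point to the current tree
        j = int(np.argmin(np.where(visited, np.inf, dist)))
        edges.append((int(parent[j]), j))
        visited[j] = True
        d_new = np.linalg.norm(points - points[j], axis=1)
        mask = (~visited) & (d_new < dist)
        dist[mask] = d_new[mask]
        parent[mask] = j
    return edges

def refine(points, qoi, density):
    """Return the midpoint of the MST edge with the largest refinement
    indicator: edge length x input density at midpoint x jump in the
    quantity of interest (an assumed, illustrative scoring rule)."""
    points = np.asarray(points, float)
    best_mid, best_score = None, -np.inf
    for i, j in mst_edges(points):
        mid = 0.5 * (points[i] + points[j])
        score = (np.linalg.norm(points[i] - points[j])
                 * density(mid) * abs(qoi[i] - qoi[j]))
        if score > best_score:
            best_score, best_mid = score, mid
    return best_mid
```

For a 1-D step-function response with uniform input density, the sketch places the next sample on the edge straddling the discontinuity, which is the qualitative behaviour the abstract describes.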
17:00
A KDE-based Multi-level Markov chain Monte Carlo algorithm
Juan Pablo Madrigal Cianci | EPFL | Switzerland
Authors:
Juan Pablo Madrigal Cianci | EPFL | Switzerland
Fabio Nobile | EPFL | Switzerland
Anamika Pandey | RWTH | Germany
Raul F. Tempone | RWTH | Germany
Panagiotis Tsilifis | EPFL | Switzerland
We present a multi-level Markov chain Monte Carlo (ML-MCMC) sampling strategy for PDE-constrained Bayesian inverse problems. This multi-level strategy introduces a hierarchy of discretization levels of the underlying PDE, with increasing accuracy and computational cost, such that each level has an associated posterior distribution. At each level, the algorithm simultaneously generates two MCMC chains targeting the posterior distributions at two contiguous discretization levels, so that the standard multi-level Monte Carlo telescoping-sum argument can be used. Central to the algorithm presented in this talk is the choice of proposal distribution used to generate the multi-level chains. We construct this proposal using a weighted kernel density estimator (KDE) built from the samples collected at all previous discretization levels, in such a way that more weight is given to samples obtained at finer discretization levels. We discuss the construction of such KDE-based proposals, analyze the theoretical cost of the presented method, and discuss the conceptual advantages of this approach over other ML-MCMC algorithms. Experimental results suggest that the benefits of this algorithm are two-fold: on the one hand, the majority of the computations are performed at a coarse discretization level, and hence at a lower cost; on the other hand, samples generated at finer discretization levels tend to have a smaller integrated autocorrelation time.
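The weighted-KDE proposal idea can be sketched in a few lines. This is an illustrative one-dimensional Gaussian KDE with a simple geometric level weighting, not the authors' construction: the function name `kde_proposal`, the `decay` weighting scheme, and the fixed bandwidth are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_proposal(samples_per_level, bandwidth=0.3, decay=0.5):
    """Weighted Gaussian KDE built from samples at all previous
    levels; samples from finer levels (later list entries) receive
    larger weights.  Returns (draw, logpdf) callables."""
    pts, wts = [], []
    L = len(samples_per_level)
    for l, s in enumerate(samples_per_level):
        s = np.asarray(s, float)
        w = decay ** (L - 1 - l)      # finest level -> weight 1
        pts.append(s)
        wts.append(np.full(len(s), w / len(s)))
    pts = np.concatenate(pts)
    wts = np.concatenate(wts)
    wts /= wts.sum()

    def draw():
        # sample a kernel centre by weight, then perturb
        i = rng.choice(len(pts), p=wts)
        return pts[i] + bandwidth * rng.standard_normal()

    def logpdf(x):
        z = (x - pts) / bandwidth
        dens = np.sum(wts * np.exp(-0.5 * z ** 2)) \
               / (bandwidth * np.sqrt(2.0 * np.pi))
        return np.log(dens)

    return draw, logpdf
```

Used as an independence proposal inside Metropolis-Hastings, `logpdf` supplies the proposal density needed in the acceptance ratio; the weighting ensures the proposal concentrates where the finer-level chains have explored.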
17:30
Model Misspecification And Uncertainty Quantification For Drift Estimation In Multiscale Diffusion Processes
Giacomo Garegnani | EPFL | Switzerland
Authors:
Assyr Abdulle | EPFL | Switzerland
Giacomo Garegnani | EPFL | Switzerland
Grigorios A. Pavliotis | Imperial College London | United Kingdom
Andrew M. Stuart | Caltech | United States
We present a novel technique for estimating the drift function of a diffusion process possessing two separated time scales. Our aim is to fit a homogenized diffusion model to a continuous sample path coming from the full multiscale process, thus dealing with an issue of model misspecification. We consider a Bayesian framework and study the asymptotic limit of posterior distributions over the drift function. In this setting, we show on the one hand that if the continuous multiscale data are not pre-processed, then the posterior distribution concentrates asymptotically on the wrong value of the drift function. On the other hand, we show that the data can be treated ahead of the inference procedure in order to obtain the desired posterior. In particular, we prove that there exists a family of transformations, linear on the space of continuous sample paths, which, when applied to the multiscale data, render the posterior distribution asymptotically correct. We present a series of numerical examples that corroborate our theoretical findings.
18:00
Adaptive Stratified Sampling for Non-smooth Problems
Mass Per Pettersson | NORCE Norwegian Research Centre AS | Norway
Authors:
Mass Per Pettersson | NORCE Norwegian Research Centre AS | Norway
Sebastian Krumscheid | RWTH Aachen | Germany
Sampling-based variance reduction techniques, such as multi-level Monte Carlo (MLMC) methods, have been established as a general-purpose procedure for quantifying uncertainties in computational models. It is known, however, that these techniques do not provide any performance gain when there is a non-smooth parameter dependence. Moreover, in many applications (e.g. fractured porous media, of relevance to carbon storage and wastewater injection) the key idea of MLMC cannot be fully exploited, since no hierarchy of computational models can be constructed. An alternative means of obtaining variance reduction in these cases is offered by stratified sampling methods.
We introduce two novel stratified sampling methods. The first uses ideas from adaptive mesh refinement, applied to the stochastic rather than the physical domain. The stochastic domain is adaptively stratified using local sensitivity estimates, and samples are sequentially allocated to the strata to achieve an asymptotically optimal distribution.
The theoretically optimal stratification is given by strata defined by the level curves of the function to be approximated, which is unknown in realistic problems. The second method therefore performs adaptive stratification based on successively more accurate contour-level functions. The proposed methodology is demonstrated on a geomechanics problem in fractured reservoirs, and a computational speed-up compared to standard Monte Carlo is obtained.
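The general mechanism of variance-driven adaptive stratification can be illustrated with a minimal 1-D sketch. This is not the speakers' method: the bisection rule, the pilot-sample sizes, and the function name `adaptive_stratified` are our own assumptions, and the Neyman-type allocation shown is the textbook choice rather than the allocation analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_stratified(f, budget=2000, n_rounds=6, n_pilot=40):
    """Illustrative adaptive stratified sampling on [0, 1]:
    repeatedly bisect the stratum with the largest contribution
    p_k * sigma_k to the overall standard deviation, then estimate
    E[f(U)] with samples allocated proportionally to p_k * sigma_k
    (Neyman-type allocation)."""
    strata = [(0.0, 1.0)]
    for _ in range(n_rounds):
        # pilot estimate of each stratum's contribution p_k * sigma_k
        contrib = [(b - a) * np.std(f(rng.uniform(a, b, n_pilot)))
                   for a, b in strata]
        a, b = strata.pop(int(np.argmax(contrib)))
        m = 0.5 * (a + b)
        strata += [(a, m), (m, b)]
    # final allocation and stratified estimate
    sig = np.array([max(np.std(f(rng.uniform(a, b, n_pilot))), 1e-12)
                    for a, b in strata])
    p = np.array([b - a for a, b in strata])
    alloc = np.maximum((budget * p * sig / np.sum(p * sig)).astype(int), 2)
    est = 0.0
    for (a, b), n in zip(strata, alloc):
        est += (b - a) * np.mean(f(rng.uniform(a, b, n)))
    return est
```

On a discontinuous integrand such as an indicator function, the bisection concentrates the strata around the jump, so nearly all of the budget is spent on the only stratum with non-zero variance; the smooth strata are resolved exactly by a handful of samples.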