[ Moved from MW HS 2235 ]
The analysis and comparison of dynamic objects and deforming shapes is important in many real-world applications. Examples include wildfire front-tracking problems, impulse propagation in cardiac tissues, tumor growth, oil reservoir and spill simulations, and pollutant plume dispersion, to name a few. Several difficulties can make the analysis a daunting task and hence need to be addressed: 1) the problem is subject to uncertainty in the location of structures due to numerical errors, measurement noise, and/or intrinsic variations in the system; 2) strong shape deformations and topological changes may not be well captured at all scales; and 3) the notion of distance or similarity between objects can be characterized in various ways.
This situation has fostered a recent body of work focused on both analytical and computational developments in metric spaces. As an example, the Wasserstein metric has become an increasingly popular tool in such diverse fields as image processing, optimization, neural networks, seismic imaging, and numerical conservation laws. It opens up promising avenues for uncertainty quantification, Bayesian inference and data assimilation, where robust comparisons and mappings between different probability measures are often needed.
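For concreteness, the quadratic Wasserstein distance takes a particularly simple form in one dimension, where the optimal transport plan pairs the order statistics. The following minimal Python sketch (a generic illustration, not tied to any specific talk in this session) computes W2 between two equally weighted empirical measures of the same size:

import numpy as np

def w2_empirical_1d(x, y):
    # Quadratic Wasserstein distance between two 1D empirical measures with
    # equally weighted samples of the same size: in 1D the optimal plan pairs
    # the sorted samples, so W2^2 is the mean squared gap of the order statistics.
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "expects equally weighted samples of equal size"
    return np.sqrt(np.mean((x - y) ** 2))

# Two clouds that differ mainly by a shift: W2 tracks the position error,
# whereas a pointwise comparison of histograms would saturate once the
# supports stop overlapping.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=5000)
b = rng.normal(2.0, 1.0, size=5000)
print(w2_empirical_1d(a, b))   # close to the mean shift of 2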
This MS will review recent advances, applications and remaining challenges of tailored metric spaces and similarity measures for structure-sensitive uncertainty quantification and inference problems.
14:00
Wasserstein metric-driven Bayesian inversion with applications to wave propagation problems
Mohammad Motamed | The University of New Mexico | United States
Authors:
Mohammad Motamed | The University of New Mexico | United States
Daniel Appelo | University of Colorado Boulder | United States
We present a Bayesian framework based on a new exponential likelihood function driven by the quadratic Wasserstein metric. Compared to conventional Bayesian models, e.g., those based on Gaussian likelihood functions driven by the L2 norm, the new framework features several advantages. First, it does not rely on an explicit model of the measurement noise and hence can treat complicated noise structures. Second, unlike the normal likelihood function, the Wasserstein-based exponential likelihood function does not usually generate multiple local extrema. As a result, the new framework converges more reliably to the correct posterior when a Markov chain Monte Carlo sampling algorithm is employed. Third, in the particular case of wave propagation problems, while a normal likelihood function measures only the amplitude differences between the observed and simulated waveforms, the new likelihood function can capture both amplitude and phase differences. We apply the new framework to the inverse uncertainty quantification of waveforms and demonstrate and discuss the aforementioned advantages.
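As a rough illustration of this construction (a hedged sketch, not the authors' implementation: the toy pulse model, the uniform prior, the scale parameter s, and the sampler settings are placeholder assumptions), the following Python code forms the exponential likelihood exp(-W2^2/(2 s^2)) between simulated and observed pulses, with W2 computed by 1D quantile matching, and samples the unknown arrival time with a random-walk Metropolis algorithm:

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)

def forward(theta):
    # Toy "waveform": a Gaussian pulse whose arrival time theta is the unknown.
    return np.exp(-0.5 * ((t - theta) / 0.1) ** 2)

def normalize(f):
    # Turn the nonnegative part of a signal into a probability vector so that
    # it can be compared in the Wasserstein sense.
    p = np.maximum(f, 0.0)
    return p / p.sum()

def w2_squared(p, q):
    # Squared quadratic Wasserstein distance between probability vectors on the
    # grid t, via the 1D quantile (inverse-CDF) representation.
    u = np.linspace(0.0, 1.0, 401)
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    Qp, Qq = np.interp(u, Fp / Fp[-1], t), np.interp(u, Fq / Fq[-1], t)
    return np.mean((Qp - Qq) ** 2)

def log_post(theta, data, s=0.05):
    if not 0.0 < theta < 1.0:                        # uniform prior (assumption)
        return -np.inf
    return -w2_squared(normalize(forward(theta)), normalize(data)) / (2 * s**2)

data = forward(0.6) + 0.02 * rng.standard_normal(t.size)   # noisy observation

theta = 0.2
lp = log_post(theta, data)
chain = []
for _ in range(5000):                                # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop, data)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.mean(chain[1000:]))   # posterior mean close to the true arrival time 0.6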
14:30
Optimal transport-based distances for variational data assimilation
Arthur Vidard | Laboratoire Jean Kuntzmann | France
Authors:
Arthur Vidard | Laboratoire Jean Kuntzmann | France
Nelson Feyeux | Laboratoire Jean Kuntzmann | France
Long Li | Laboratoire Jean Kuntzmann | France
Maelle Nodet | Laboratoire de Mathématiques de Versailles | France
Jianwei Ma | Harbin Univ. | China
Francois-Xavier Le Dimet | Laboratoire Jean Kuntzmann | France
Data assimilation requires comparing model output with actual observations. Typically this is done using an L2 norm of point-wise differences. With the advent of structured observations (e.g., photographs or sequences of photographs), this choice may be suboptimal since it does not capture position errors well. An alternative is to consider optimal transport-based distances. Using such distances in the framework of variational data assimilation is not straightforward: the gradient computation and the related descent optimisation need to be adapted. Moreover, for realistic applications the high cost of these distances makes it necessary to consider proper approximations. This will be illustrated on academic 1D and 2D examples.
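A minimal 1D illustration of both points (an assumption-laden sketch, not the speakers' assimilation system: the blob model, the single scalar control, and the finite-difference gradient are illustrative choices; a realistic variational system would obtain the gradient from an adjoint model):

import numpy as np

t = np.linspace(0.0, 1.0, 500)
dt = t[1] - t[0]

def blob(c, width=0.05):
    # "Model output": a tracer blob whose position c is the control variable.
    return np.exp(-0.5 * ((t - c) / width) ** 2)

def normalize(f):
    f = np.maximum(f, 0.0)
    return f / f.sum()

def w2_squared(p, q):
    # Squared quadratic Wasserstein distance between probability vectors on the
    # grid t, computed by matching the quantile functions.
    u = np.linspace(0.0, 1.0, 400)
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    return np.mean((np.interp(u, Fp / Fp[-1], t) - np.interp(u, Fq / Fq[-1], t)) ** 2)

obs = blob(0.7)   # observation: the same blob, observed at position 0.7

def J_l2(c):
    # Pointwise misfit: nearly flat once the two blobs no longer overlap.
    return 0.5 * np.sum((blob(c) - obs) ** 2) * dt

def J_w2(c):
    # Transport misfit: grows smoothly with the position error.
    return 0.5 * w2_squared(normalize(blob(c)), normalize(obs))

print(J_l2(0.2), J_l2(0.45))   # nearly identical: the pointwise misfit is blind to how far off the position is

# Descent on the W2 cost; the gradient with respect to the control is taken by
# central finite differences here for simplicity.
c, step, h = 0.2, 0.5, 1e-4
for _ in range(100):
    grad = (J_w2(c + h) - J_w2(c - h)) / (2 * h)
    c -= step * grad
print(c)   # recovers the observed position, close to 0.7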
15:00
Optimal Decision-Making and Uncertainty Quantification — Beyond IID Data
Tobias Sutter | EPFL | Switzerland
Authors:
Tobias Sutter | EPFL | Switzerland
Bart P.G. Van Parys | MIT | United States
Daniel Kuhn | EPFL | Switzerland
Decision problems under uncertainty appear in numerous applications and have been extensively studied in various disciplines such as Statistics, Operations Research, Economics, and Engineering. A particularly important instance of this problem class is uncertainty quantification, where the goal is to ascertain whether an uncertain state satisfies a number of safety constraints. Often the uncertainty is modelled via a probability distribution (known as the model) that is unknown to the decision maker, who observes only a finite number of samples drawn from that distribution.

Making decisions and quantifying uncertainty directly from data instead of models, however, requires some care. Decision rules and predictions calibrated to one particular data set need not perform well on new (unseen) test data; this is referred to as the out-of-sample performance. Safeguarding such decisions and predictions against over-calibration to one particular training data set is the crucial tradeoff, which has so far been formalized under the assumption that the data are independent and identically distributed (i.i.d.).

In this talk, we will show how to derive mathematical guarantees for optimal decision-making and uncertainty quantification based on correlated data. In particular, we will highlight how to construct, from correlated data, an optimal ambiguity set that quantifies the uncertainty involved.
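As a hedged sketch of what such a data-driven construction can look like (the choices below, a Kullback-Leibler ball around the empirical distribution, the Markov-chain data generator, and the radius r, are illustrative assumptions rather than the speakers' construction), the worst-case expected loss over the ambiguity set can be evaluated through its standard one-dimensional dual:

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Correlated (Markovian) data: the loss regime tomorrow depends on today's
# state, so the samples are not i.i.d.
n, state, losses = 500, 0, []
for _ in range(n):
    p = [0.9, 0.1] if state == 0 else [0.3, 0.7]
    state = rng.choice(2, p=p)
    losses.append(rng.normal(1.0 if state == 0 else 3.0, 0.2))
losses = np.array(losses)

def worst_case_mean(samples, r):
    # Worst-case expected loss over a Kullback-Leibler ball of radius r around
    # the empirical distribution P_n, evaluated via the dual
    #   sup_{KL(Q||P_n) <= r} E_Q[loss] = inf_{lam > 0} lam*r + lam*log E_{P_n}[exp(loss/lam)].
    def dual(lam):
        return lam * r + lam * (logsumexp(samples / lam) - np.log(len(samples)))
    return minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded").fun

print(losses.mean())                     # nominal (empirical) expected loss
print(worst_case_mean(losses, r=0.05))   # robustified estimate, larger than the nominal one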
15:30
Robust Kalman filtering of shape observations using shape metrics
Philippe Moireau | INRIA | France
Authors:
Philippe Moireau | INRIA | France
Annabelle Collin | INRIA | France
Didier Lucor | LIMSI - CNRS | France
Efficient simulation and data assimilation of dynamical systems monitored through fronts or deforming shapes induced by complex underlying nonlinear physics are important to many real-world applications. In this context, data assimilation filtering methods must iteratively correct several sets of shapes based on some optimality criteria. Although the notion of distance or similarity between objects is crucial in this setting, it is not unique and can be characterized in various ways. An approach relying on the definition of a global metric structure on a set of shapes seems appropriate in this framework, as it avoids the parametrization of the geometry and may extend quite naturally to higher dimensions. To this end, we consider smooth approximations of the Hausdorff metric to measure shape dissimilarities. This allows an efficient warping of an elementary shape onto another by infinitesimal gradient descent, minimizing the corresponding dissimilarity measure. These convenient tools are then used to define empirical shape statistics, improving the accuracy and robustness of the filtering. In this presentation, we will present results for some ensemble Kalman filtering data assimilation test cases with random shapes given by subsets of R^2.
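A minimal sketch of one such smoothed dissimilarity and of the gradient-descent warping it enables (the log-sum-exp smoothing, the circular shapes, the restriction to rigid translations, and the finite-difference gradient are illustrative assumptions, not the authors' formulation):

import numpy as np

def soft_hausdorff(A, B, eps=0.02):
    # Smooth two-sided Hausdorff-type dissimilarity between point clouds A and
    # B in R^2: the inner min and outer max are replaced by log-sum-exp
    # soft-min/soft-max with smoothing width eps, so the measure is
    # differentiable with respect to the point positions.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)    # pairwise distances
    min_ab = -eps * np.log(np.sum(np.exp(-D / eps), axis=1))     # soft-min over B
    min_ba = -eps * np.log(np.sum(np.exp(-D / eps), axis=0))     # soft-min over A
    softmax = lambda v: eps * np.log(np.sum(np.exp(v / eps)))    # soft-max
    return max(softmax(min_ab), softmax(min_ba))

# Template shape and an "observed" front: the same circle shifted by (0.4, -0.2).
phi = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
template = np.stack([np.cos(phi), np.sin(phi)], axis=1)
observed = template + np.array([0.4, -0.2])

# Warp the template onto the observation by descending the smoothed dissimilarity;
# the gradient with respect to the translation is approximated by finite differences.
shift, step, h = np.zeros(2), 0.1, 1e-4
print(soft_hausdorff(template, observed))            # dissimilarity before warping
for _ in range(200):
    grad = np.zeros(2)
    for k in range(2):
        e = np.zeros(2); e[k] = h
        grad[k] = (soft_hausdorff(template + shift + e, observed)
                   - soft_hausdorff(template + shift - e, observed)) / (2 * h)
    shift -= step * grad
print(soft_hausdorff(template + shift, observed))     # smaller after warping
print(shift)                                          # moves towards (0.4, -0.2)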