Important note: ALL poster presentations START at the SAME TIME, at 16:15 on Tuesday afternoon.
Unfortunately, the "Persons" view of the online conference planner contains a bug that indicates a different starting time for each poster. Converia is working to fix the bug; please ignore these times in the meantime.
P01: The Multilevel Local Ensemble Transport Kalman Filter
Andrey A Popov | Virginia Tech | United States
Utilizing control variates in a nested manner, otherwise known as multileveling, is a rapidly emerging topic in data assimilation. Making use of different coarseness levels of existing models to accelerate the convergence of at-scale data assimilation codes is a very attractive prospect. The Local Ensemble Transport Kalman Filter (LETKF) is the state-of-the-art ensemble Kalman filter variant run in large numerical weather prediction (NWP) codes. We present a formulation of the Multilevel Local Ensemble Transport Kalman Filter that joins these two concepts. We apply the filter to a quasi-geostrophic model with highly nonlinear wind-magnitude observations and show that, in certain modes of operation, the filter is much more robust than the vanilla LETKF.
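For orientation, a minimal sketch of a single global (unlocalized) ensemble transform Kalman filter analysis step, assuming a linear observation operator; the LETKF applies this same update locally around each grid point with observation localization, and all names here are illustrative:

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """X: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) linear observation operator; R: (m, m) obs-error covariance."""
    n, N = X.shape
    x_mean = X.mean(axis=1)
    A = X - x_mean[:, None]                            # state anomalies
    Y = H @ A                                          # observation-space anomalies
    d = y - H @ x_mean                                 # innovation
    # ensemble-space posterior: P = [(N-1) I + Y^T R^{-1} Y]^{-1}
    P = np.linalg.inv((N - 1) * np.eye(N) + Y.T @ np.linalg.solve(R, Y))
    w_mean = P @ Y.T @ np.linalg.solve(R, d)           # mean-update weights
    evals, evecs = np.linalg.eigh((N - 1) * P)         # symmetric square root
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean[:, None] + A @ (w_mean[:, None] + W) # analysis ensemble
```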
P02: Nonlinear Filtering with Lévy Jumps
Kistosil Fahim | Montanuniversität Leoben | Austria
Dynamical systems arise in engineering and the physical sciences as well as in the social sciences. If the state of a system is known, one also knows its properties and may, e.g., stabilize the system to prevent it from blowing up, or predict its near future. However, the state of a system often consists of internal parameters which are not always accessible. Instead, often only an observation process, which is a transformation of the current state, is accessible. The problem of nonlinear filtering is to estimate the state of the system at a time t>0 from the data of the observation process up to time t.
Usually, one considers models where the state process and the observation process are perturbed by Gaussian noise. When these perturbations are known to exhibit extreme behaviour, as frequently seen in finance or environmental studies, a model relying on the Gaussian distribution is not appropriate. A suitable alternative is a model based on a heavy-tailed distribution, such as the stable distribution.
We are interested in nonlinear filtering where the signal and observation processes are corrupted by a compound Poisson process. To capture the jumps, we use methods from control engineering and construct a so-called Luenberger observer. These methods are combined with particle filters to construct an estimator of the state process and, respectively, of the density process. We apply this method to a mathematical pendulum and a single-link flexible-joint robot.
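To fix ideas, a minimal sketch of one bootstrap particle filter step with compound-Poisson jumps in the propagation; the scalar model is an illustrative stand-in, and the Luenberger observer component of the method is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, y_obs, drift, dt, jump_rate, jump_scale, obs_std):
    # propagate: Euler drift step plus compound-Poisson jumps with Gaussian sizes
    n_jumps = rng.poisson(jump_rate * dt, size=particles.shape)
    particles = (particles + drift(particles) * dt
                 + jump_scale * np.sqrt(n_jumps) * rng.standard_normal(particles.shape))
    # reweight by the (here Gaussian) observation likelihood
    weights = weights * np.exp(-0.5 * ((y_obs - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # resample when the effective sample size degenerates
    if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```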
P03: Variational Data Assimilation for Incompressible RANS Closure Models
Oliver Brenner | Institute of Fluid Dynamics, ETH Zurich | Switzerland
Due to their low computational cost, flow simulations based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still popular in industrial applications. However, they rely on closures for the Reynolds stresses, which are often based on parametric models calibrated, e.g., from theoretical considerations. Recently, several approaches have been proposed that increase the predictive capabilities of RANS simulations through data-driven turbulence models. In this work, we conduct variational data assimilation based on the discrete adjoint method to incorporate sparse measurement data into incompressible forward simulations of specific set-ups. The goal is to tune a parameter field that modifies the turbulent viscosity obtained from a proven closure model, such that simulation results better match available reference data. Cost function gradients are computed with the discrete adjoint method and then used in a gradient-based optimization procedure. The number of parameters to be optimized equals the mesh resolution and is thus potentially very large; however, the cost of evaluating the gradient with the adjoint method is almost independent of this number. A solver implemented in OpenFOAM is used to compute the solution of the forward problem and to obtain the adjoint gradient; the former is solved in a segregated manner using the SIMPLE algorithm. We demonstrate our method by applying it to stationary problems using the k-epsilon model and various reference data sets.
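To see why the gradient cost is nearly independent of the number of parameters, consider the discrete adjoint for a steady linear analogue A(theta) u = b with misfit J = 0.5*||C u - d||^2; the list dA of operator derivatives is a hypothetical stand-in for the sparse derivatives in an actual RANS solver:

```python
import numpy as np

def adjoint_gradient(A, dA, b, C, d):
    """A: (n, n) system matrix at the current parameters; dA: list of
    dA/dtheta_i matrices; C: observation operator; d: reference data."""
    u = np.linalg.solve(A, b)                        # one forward solve
    lam = np.linalg.solve(A.T, C.T @ (C @ u - d))    # one adjoint solve
    # dJ/dtheta_i = -lam^T (dA/dtheta_i) u: one cheap product per parameter
    return np.array([-lam @ (dAi @ u) for dAi in dA])
```

One forward and one adjoint solve yield the gradient with respect to all parameters at once, regardless of how many there are.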
- CANCELED - P04: Online EM-based parameter estimation for sequential Monte Carlo filtering in data assimilation
Tadeo Javier Cocucci | Universidad Nacional de Córdoba - FaMAF | Argentina
Data assimilation (DA) systems estimate a time-evolving hidden state and its uncertainty by combining the information provided by partial observations of the system with the evolution of the model representing the state. Both the model and the observational process are subject to errors, and the performance of DA techniques depends on how well we can quantify these uncertainties. Indeed, a poor specification of these in a sequential Monte Carlo filter may lead to its degeneracy. We propose an approximate online EM-based algorithm that can be coupled with particle filters and the ensemble Kalman filter to estimate model and observational error covariances. The algorithm works sequentially, in conjunction with the filter, processing each observation only once. The parameters are updated using an exponential averaging window, assuming the model and observational error distributions belong to the exponential family. The method is evaluated in experiments with chaotic dynamical systems, including the 40-variable Lorenz-96 system, in which 1600 parameters of the full model error covariance are estimated with good accuracy. Imperfect-model experiments are also conducted under structural model error, and the online expectation-maximization (EM) also shows good performance in this scenario. The method is computationally efficient compared with the batch EM, for which several forward-filtering backward-smoothing iterations are required, while obtaining comparable estimation performance.
P05: Monitoring volcanic lakes using the Unscented Kalman Smoother
Yannik Behr | GNS Science | New Zealand
Volcanic lakes on active volcanoes act like a filter, integrating the heat and gas input from the vents entering the lake. Monitoring changes in temperature, water mass, and ion concentration in these lakes can thus serve as proxies for changes in gas- and steam-release by the hydrothermal system and the melt zone beneath the volcano.
Calculations of the heat and steam inputs are typically done through a mass and energy balance (MEB) calculation: the sum of mass and energy flow into and out of the lake has to account for the observed changes in lake temperature, water level and ion dilution. Major challenges in this calculation are the disparate types and irregular temporal scales of observations and the large uncertainties both in the numerical approximations and some of the measurements.
Our solution is to formulate the MEB as a state-space model and then use the Unscented Kalman Smoother to estimate the heat and steam input into the lake. This approach provides us with both consistent uncertainty estimates and the ability to forecast future states of the lake. Most importantly, it allows us to integrate the disparate and irregular data streams without any manual pre-processing, a key requirement for continuous volcano monitoring.
We show examples from synthetic tests and from real data from Mt Ruapehu Crater Lake, a volcanic lake on top of an active volcano in New Zealand that goes through several heating cycles every year.
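For context, a minimal sketch of the unscented transform that underlies the Unscented Kalman Smoother: deterministic sigma points are propagated through a nonlinear map (the placeholder f standing in for the mass and energy balance step) to estimate the transformed mean and covariance.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])    # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))             # mean weights
    wc = wm.copy()                                       # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])                  # propagated points
    y_mean = wm @ Y
    dY = Y - y_mean
    return y_mean, (wc[:, None] * dY).T @ dY             # mean and covariance
```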
P06: A novel Knothe-Rosenblatt Stein variational transport method: applications in data assimilation
Joshua Chen | Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin | United States
We introduce a new formulation for transport in the family of Stein variational methods. Stein variational methods produce a sequence of Newton/gradient-descent steps pushing forward measures from the prior to the posterior, where the pushforward maps live in RKHSs spanned by kernels supported on quadrature points. Our extension restricts the transport maps to be monotone and triangular, i.e. Knothe-Rosenblatt (KR) maps, obtaining unique diffeomorphic pushforward sequences. This allows for a more flexible framework for designing RKHS spaces for the sequence of transport maps. To enforce the monotonicity constraint, we apply line-search/trust-region methods dimension by dimension. The motivations for this formulation are many: existence and uniqueness of the KR map, easily computed Jacobian determinants, closure of KR maps under inversion, access to certain marginals, and ease of conditional sampling. In particular, we exploit these properties when using KR maps in variational data assimilation. Numerical examples and comparisons to other KR map parameterizations will be provided.
P07: Adjusting Simon's two-stage design for uncertainty of the response rate under the null and heterogeneity using historical controls
Dominic Edelmann | German Cancer Research Center | Germany
The optimal two-stage design developed by Richard Simon is a popular concept for conducting phase II clinical trials. This design is optimal in the sense that it minimizes the expected sample size under the null hypothesis among all two-stage designs subject to constraints on the type I and type II errors of the study.
In many cancer studies, Simon's two-stage design is applied to compare the effect of a novel treatment with the current standard therapy. However, this approach poses two problems. First, the population under consideration is often highly heterogeneous. Notably, basket trials test the effect of a medication on different types of cancer associated with the same mutation, and these different types often have vastly differing response rates. Second, since many subtypes of cancer are very rare, there is substantial uncertainty concerning the effect of standard therapy for a single subtype.
In this talk, we first demonstrate the unsatisfactory performance of Simon's two-stage design in the case of a heterogeneous population and of uncertainty in the response rate under the null. We then present a modification of Simon's two-stage design that corrects for these problems by appropriately incorporating the uncertainty and adjusting the thresholds for futility and rejection based on the subtypes of the recruited patients. The performance of this approach is demonstrated in various simulations.
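For reference, the operating characteristics of a classical Simon two-stage design can be computed directly: with stage-1 threshold r1 out of n1 patients and overall threshold r out of n, the trial stops early if at most r1 responses are seen, and the null is rejected at the end if more than r responses are seen. A minimal sketch, using the design r1/n1 = 1/10, r/n = 5/29 (an assumption taken from Simon's published tables for p0 = 0.1, p1 = 0.3):

```python
from scipy.stats import binom

def simon_characteristics(p, r1, n1, r, n):
    pet = binom.cdf(r1, n1, p)                    # prob. of early termination
    # reject H0 iff x1 > r1 and x1 + x2 > r
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))
    expected_n = n1 + (1 - pet) * (n - n1)        # expected sample size
    return pet, reject, expected_n

print(simon_characteristics(0.1, 1, 10, 5, 29))   # 'reject' = type I error at p0
print(simon_characteristics(0.3, 1, 10, 5, 29))   # 'reject' = power at p1
```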
P08: Three experimental setups for calibration of confined concrete material model
Anna Kucerova | Czech Technical University in Prague, Faculty of Civil Engineering | Czech Republic
This contribution is devoted to the analysis and comparison of different experimental setups designed to calibrate a constitutive law for confined concrete. The work is motivated by the automated fabrication of precast columns with innovative multi-spiral reinforcement designed to sustain high loading and/or seismicity. In particular, we study three experimental setups with transverse confinement, two passive and one active. In all three cases, the parameters of the damage-plastic model proposed by Grassl and Jirásek are identified in a Bayesian statistical framework, allowing for quantification of the information content of the particular experiments.
- NEW - P09: Efficient Solvers for Stochastic Galerkin Finite Element Systems Arising in Linear Elasticity Problems
Ying Liu | The University of Manchester | United Kingdom
We consider the three-field model for linear elasticity with uncertain Young's modulus that was introduced in [1]. To perform forward UQ, we apply a stochastic Galerkin mixed finite element method which yields stable numerical approximations even in the incompressible limit. Since the stochastic Galerkin method is an intrusive approach, we require a bespoke solver. The associated discrete problems can be formulated in two ways: (i) as a large linear system with saddle-point structure (the so-called Kronecker formulation) and (ii) as a multi-term matrix equation. We discuss details of these formulations and suitable iterative solution techniques. Since the systems are ill-conditioned not only with respect to the discretization parameters but also to physical model parameters such as the Poisson ratio, preconditioning is an essential component of any solution strategy. We present numerical results and discuss the pros and cons of different preconditioning approaches.
[1] A. Khan, C.E. Powell, D. Silvester, Robust Preconditioning for Stochastic Galerkin Formulations of Parameter-Dependent Nearly Incompressible Elasticity Equations. SIAM J. Sci. Comput., Vol. 41, No. 1, pp. A402-A421, 2019.
P10: Hierarchical Off-diagonal Low-rank (HODLR) Approximation of Hessians for Inverse Problems with Application to Ice Sheet Flow
Tucker Hartland | University of California Merced | United States
Model-based projections of the dynamics of the polar ice sheets play a central role in estimating future sea level change. One challenge in improving model predictability is due to uncertain model parameters that must be inferred from observational data, leading to an inverse problem and the need to quantify uncertainties in its solution. Solving Bayesian inverse problems with expensive forward models (in our case, nonlinear Stokes equations) and high-dimensional parameter spaces (the spatially distributed sliding coefficient at the base of the ice) is still computationally intractable. However, under the assumption of Gaussian noise and prior probability densities, and upon linearizing the parameter-to-observable map, the posterior density becomes Gaussian. The posterior covariance is given by the inverse of the Hessian of the regularized data-misfit objective functional. An essential task therefore is to rapidly apply the action of the Hessian (and of its square root and inverse). To do so, we exploit the local sensitivity of the data to the parameters, which suggests an off-diagonal low-rank structure, and we build a hierarchical off-diagonal low-rank (HODLR) approximation of the Hessian action. We apply this HODLR Hessian approximation as a preconditioner for the Newton-CG optimization solver and use it to build a Gaussian approximation of the posterior for the inference of the basal sliding coefficient field in an ice sheet flow model.
P11: A computational framework for quantifying the relative importance of data sources and physical parameters in PDE-based inverse problems
Isaac Sunseri | North Carolina State University | United States
High-fidelity models used in many science and engineering applications couple multiple physical states and parameters. Inverse problems arise when a model parameter cannot be determined directly, but rather is estimated using (typically sparse and noisy) measurements of the states. The model parameters and data states correspond to various physical quantities that often lie in widely diverse spaces. In addition to the inversion parameters, the governing model typically contains additional parameters that are needed for a full model specification. We refer to the combination of these additional model parameters and the measured data states as the "auxiliary parameters". We seek to quantify the relative importance of these various auxiliary parameters to the solution of the inverse problem. To address this, we present a framework based on hyper-differential sensitivity analysis (HDSA). Informally, HDSA computes the derivative of the solution of an inverse problem with respect to the auxiliary parameters. We present a mathematical framework for HDSA in large-scale PDE-based inverse problems and show how HDSA can be interpreted to give insight about the inverse problem. We demonstrate the effectiveness of the method on an inverse problem estimating a permeability field from pressure and concentration measurements in a porous medium flow application.
P12: Dealing with measurement uncertainties in Bayesian model calibration
Kellin Rumsey | University of New Mexico | United States
In the presence of model discrepancy, the calibration of physics-based models for physical parameter inference is a challenging problem. Lack of identifiability between calibration parameters and model discrepancy requires additional identifiability constraints to be placed on the model discrepancy to obtain unique physical parameter estimates. If these assumptions are violated, the inference for the calibration parameters can be systematically biased. In many applications, such as dynamic material property experiments, many of the calibration inputs refer to measurement uncertainties. In this setting, we develop a metric for identifying overfitting of these measurement uncertainties, propose a prior capable of reducing this overfitting, and show how this leads to a diagnostic tool for validation of physical parameter inference. The approach is demonstrated on two simple examples and applied to a material property application to perform inference on the equation-of-state parameters of tantalum.
P13: Numerical approximations for Stein variational Newton transport approaches to Bayesian inversion
Keyi Wu | University of Texas at Austin | United States
We investigate the ill-conditioned nature of Stein variational Newton (SVN) transport maps for Bayesian inversion, with application to multi-modal or highly non-Gaussian posteriors in the context of a Gaussian prior. The SVN algorithm constructs a surrogate for the posterior, viewed as the composition of a series of transport maps pushing forward the prior to the posterior. This allows one to sample from the posterior arbitrarily many times, rather than being restricted to the samples produced by the SVN algorithm alone. However, to ensure that the surrogate posterior is accurate, the transport maps must possess certain properties, such as invertibility and smoothness. We investigate the numerical difficulties that arise when one seeks to ensure that the transport maps formed by SVN are orientation-preserving diffeomorphic maps supported on kernel bases. In particular, we investigate ill-conditioning of the Hessian of the KL divergence with respect to the maps, its relationship to kernel properties, and linear-algebraic approaches to regularize and alleviate the inherent ill-posedness of the discretized problem. We solve a Bayesian inversion problem with a Helmholtz equation forward model for numerical illustration.
P14: Probabilistic calibration method for heterogeneous material models
Eliška Janouchová | Czech Technical University in Prague, Faculty of Civil Engineering | Czech Republic
Experimental data obtained from testing a heterogeneous material comprise uncertainties of both principal types, i.e. epistemic as well as aleatory uncertainty. The calibration of the corresponding material model from these noisy indirect measurements requires a correct treatment of the involved uncertainties. The aleatory uncertainty represents the natural variability or randomness of the investigated material properties, which enters during data collection when the data are gathered, e.g., from different locations or specimens, and is modelled as a random variable. The unknown but fixed probability density function of the material model inputs is estimated using Bayesian inference, together with quantification of the epistemic uncertainties.
P15: Inverse Analysis Using Bayes Method
Liya Gaynutdinova | CTU in Prague | Czech Republic
A probabilistic approach for describing input parameters and inverse analysis using Bayesian inference is considered. We compare two approaches in terms of computational time and accuracy: (a) executing the forward model with random fields and evaluating response statistics; (b) constructing a surrogate model using Monte Carlo simulation, collocation (based on previous computations), or the stochastic Galerkin method (a non-sampling approach requiring a dedicated implementation), and using accelerated Bayesian inference with the surrogate model. As a model example, we use parameter identification for a scalar diffusion equation describing heat transfer, where the parameters take the form of a uniformly positive definite coefficient matrix. The parameters are to be identified from known values of integrated outflows on some parts of the boundary. The problems are discretized by the finite element method. Numerical experiments and some unresolved issues are presented.
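A minimal sketch of approach (b) on a scalar toy problem, with a placeholder forward model standing in for the finite element diffusion solve and a cubic polynomial as the surrogate; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
forward = lambda k: np.log(k) + 0.1 * k           # placeholder forward model
y_obs, sigma = forward(2.0) + 0.05, 0.1           # synthetic noisy observation

# (b1) fit a cheap surrogate to a handful of designed forward runs
k_train = np.linspace(0.5, 5.0, 12)
coeffs = np.polyfit(k_train, [forward(k) for k in k_train], deg=3)
surrogate = lambda k: np.polyval(coeffs, k)

# (b2) random-walk Metropolis with the surrogate replacing the solver
def log_post(k):
    return -0.5 * ((y_obs - surrogate(k)) / sigma) ** 2 if 0.5 < k < 5.0 else -np.inf

chain, k = [], 1.0
for _ in range(20_000):
    k_prop = k + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(k_prop) - log_post(k):
        k = k_prop
    chain.append(k)
print(np.mean(chain[2000:]), np.std(chain[2000:]))   # posterior mean and spread
```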
P16: Uncertainty Quantification for Time-Dependent Flat-Field Correction in Absorption Tomography
Katrine Bangsgaard | Technical University of Denmark - DTU | Denmark
In absorption tomography, most reconstruction methods are based on the assumption that the detector response is known and the source intensity is static, i.e. does not change during the scan. This is not always the case in practice: the detector response must then be estimated by flat-field measurements, and variations in the source intensity may occur. Both may lead to systematic errors in the data if not accounted for.
We propose a new convex reconstruction model that simultaneously estimates the reconstruction, detector response, and time-dependent source intensity by carefully modeling the statistical nature of the measurements and introducing suitable priors for the time-dependent source intensity. In particular, we apply uncertainty quantification to analyze the model uncertainty.
Discrepancies between the true and estimated detector response are especially problematic when the acquisition time or the dose is limited. Preliminary experiments show that the proposed reconstruction model leads to reduction of systematic errors for dose-limited absorption tomography.
P17: Identifying damage in sea ice from sparse laser strain measurements
Victor Churchill | Dartmouth College | United States
We discuss several methods for identifying damage in sea ice when given laser strain or displacement measurements at only a few sparse locations in the domain of interest. We begin by modifying the equations of linear elasticity in order to account for damage in the displacement field. We then present a standard method for solving an inverse problem of this type which minimizes a data misfit cost function that is constrained by the aforementioned partial differential equations. We consider several regularization schemes for this method. Finally, we propose a method which minimizes an unconstrained cost function with respect to two variables via alternating minimization. Our results using both simulated and real data suggest that this method, which allows for variation away from both the given data as well as the model, is promising.
- CANCELED - P18: On Gaussian Mixture Approximations of the Standard Normal: Application to an Advection Diffusion Inverse Problem with Model Uncertainty
Dingcheng Luo | Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin | United States
We present approximations of the Standard Normal (SN) density via Gaussian mixtures (GMs) in one dimension. These GMs match the first two moments of the SN exactly and have errors that are acceptable for many applications in uncertainty quantification. In particular, we build a dictionary of homoscedastic GMs with large numbers of components which approximate the SN density. We present the optimization methods used to find the GMs and discuss the errors with respect to various metrics. The GMs are used to create surrogate random fields based on projection-based substitutions in the Karhunen-Loeve expansion of Gaussian random fields. While the approximation error is usually acceptable, we can introduce a correction factor using importance sampling. We demonstrate the applicability of the GMs in the newly proposed multi local-fidelity Monte Carlo (MLFMC) framework, particularly using local Taylor-approximation control variate strategies to marginalize out model uncertainty in an advection-diffusion equation as part of an inverse problem for a log-permeability field modeling the flow of a contaminant in a subsurface with uncertain anisotropic permeability.
P19: Non-intrusive Parameter Identification of Coupled Heat and Moisture Transport
Jan Sykora | CTU in Prague | Czech Republic
In many fields it is advantageous to analyze a construction or material sample without intervening in the structure itself. This contribution presents such a numerical procedure, relying solely on data gathered on the boundary. Our interest is focused on building materials and their properties when exposed to coupled heat and moisture transport. As the material model, we introduce Künzel's transport model for its relative simplicity and sufficient accuracy in describing the underlying physical phenomena of coupled transport processes. The material model parameters are identified from real climatic boundary conditions, considering a variety of domain shapes and parameter settings.
P20: An Extensible Software Framework for Large-Scale Inverse Problems Governed by Partial Differential Equations
Ki-Tae Kim | University of California, Merced, Applied Mathematics | United States
We present an Inverse Problem PYthon library (hIPPYlib) for solving large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs). hIPPYlib overcomes the prohibitive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators. The key property of the algorithms implemented in hIPPYlib is that the solution of the deterministic and linearized Bayesian inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior via an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point; a low-rank approximation of this Hessian is constructed by employing randomized eigensolvers, and scalable algorithms for sample generation and for computing pointwise variance fields are also implemented. hIPPYlib makes these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms. hIPPYlib is also a teaching tool that can be used to educate students and practitioners who are new to inverse problems and Bayesian inference.
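A minimal sketch of the randomized eigensolver idea (not hIPPYlib's actual API): only Hessian-vector products are needed to extract the dominant eigenpairs used in the low-rank posterior approximation; hess_apply is a placeholder.

```python
import numpy as np

def randomized_eigs(hess_apply, n, k, p=10, seed=0):
    """Approximate the leading k eigenpairs of a symmetric n x n operator
    given only its action; p is the oversampling parameter."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))
    Y = np.column_stack([hess_apply(w) for w in Omega.T])   # sketch the range
    Q, _ = np.linalg.qr(Y)                                  # orthonormal basis
    T = Q.T @ np.column_stack([hess_apply(q) for q in Q.T]) # project the operator
    evals, V = np.linalg.eigh(T)
    idx = np.argsort(np.abs(evals))[::-1][:k]               # dominant eigenpairs
    return evals[idx], Q @ V[:, idx]
```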
P21: Bayesian calibration using inconsistent nuclear data
Cedric Durantin | CEA/DAM | France
This work is dedicated to the quantification of uncertainties in the prediction of nuclear data (cross-sections, neutron multiplicity, ...). Experimental measurements are available to calibrate the prediction of a physical model. Unfortunately, these data are inconsistent: for the same system, measurements have error bars that do not match. These measurements come from different bibliographic sources, so it is necessary to determine which groups of measurements have a bias or an underestimated measurement error. A recent work by Schnabel [1] proposes to automatically identify the inconsistency of the data during calibration, with the parsimonious attribution of a specific error to the corresponding data groups. This methodology is used in a general Bayesian framework [2] to quantify the uncertainty of the neutron multiplicity from a hierarchical model involving two physical models with associated experimental data.
[1] Georg Schnabel, Fitting and Analysis Technique for Inconsistent Nuclear Data, M&C 2017.
[2] Kennedy, M. C. and O'Hagan, A. (2001), Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63: 425-464.
P22: Parameter Estimation using Wasserstein distance
James Ronan | Dartmouth College | United States
By using the Wasserstein distance we are able to compare images to outputs of a model of the scene. This provides a consistent way of estimating the error from the model that is more physically meaningful than an L2 distance. Using semi-discrete optimal transport, we implement the Wasserstein distance to calibrate the parameters of our model from the data. Because the "shape mismatch" introduced by our discrete version of the scene is constant, the minimization over this simplified version aligns with the full problem. This technique can be used for a variety of physical models.
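A minimal one-dimensional sketch of the calibration principle, using SciPy's sample-based Wasserstein distance; the poster's semi-discrete optimal transport setting for images is higher-dimensional, and the location-family model here is purely illustrative:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
data = 3.0 + rng.standard_normal(2000)         # "observed" samples
base = rng.standard_normal(2000)               # fixed model noise (common random numbers)
model = lambda theta: theta + base             # illustrative location-family model

res = minimize_scalar(lambda t: wasserstein_distance(data, model(t)),
                      bounds=(0.0, 6.0), method="bounded")
print(res.x)   # recovered location parameter, close to 3.0
```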
P23: A multiscale reduced basis method for Schrodinger equation with multiscale and random potentials
Dingjiong Ma | The University of Hong Kong | Hong Kong
The semiclassical Schrodinger equation with multiscale and random potentials often appears when studying electron dynamics in heterogeneous quantum systems. As time evolves, the wavefunction develops high-frequency oscillations in both the physical space and the random space, which poses severe challenges for numerical methods. To address this problem, in this paper we propose a multiscale reduced basis method, where we construct multiscale reduced basis functions using an optimization method and the proper orthogonal decomposition method in the physical space, and employ the quasi-Monte Carlo method in the random space. Our method is shown to be efficient: the spatial grid size is only proportional to the semiclassical parameter, and (under suitable conditions) an almost first-order convergence rate is achieved in the random space with respect to the sample number. Several theoretical aspects of the proposed method, including how to determine the number of samples in the construction of the multiscale reduced basis and the convergence analysis, are studied with numerical justification. In addition, we investigate Anderson localization phenomena for the Schrodinger equation with correlated random potentials in both 1D and 2D space.
P24: Global Sensitivity Analysis of Chemical Reaction Networks Across Physical Scales
Michael Merritt | NC State University, Department of Mathematics | United States
Stochastic models for chemical reaction networks are often simulated with exact solution methods for the Chemical Master Equation. Performing Global Sensitivity Analysis (GSA) with such models via Monte Carlo sampling is typically expensive, especially for models with large numbers of either species or reaction channels. It is thus advantageous to consider a cheaper, deterministic model when performing GSA. We investigate the connection between Sobol' indices for stochastic and deterministic models and examine the conditions under which the sensitivity measures for the expensive model can be approximated by those of the more efficient one. We include numerical results demonstrating the benefits of this approach when applied to the Michaelis-Menten reaction system.
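A minimal sketch of a pick-freeze estimator for first-order Sobol' indices, applied to the deterministic Michaelis-Menten rate law v = Vmax*S/(Km + S) with uncertain (Vmax, Km); the input ranges and fixed substrate level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda X: X[:, 0] * 5.0 / (X[:, 1] + 5.0)    # v = Vmax*S/(Km+S), S = 5 fixed

N, d = 100_000, 2
lo, hi = np.array([0.5, 1.0]), np.array([1.5, 10.0])
A = rng.uniform(lo, hi, size=(N, d))             # two independent input samples
B = rng.uniform(lo, hi, size=(N, d))
fA, fB = f(A), f(B)
var = fA.var()                                   # total output variance
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                          # "freeze" input i at A's values
    Si = np.mean(fA * (f(ABi) - fB)) / var       # first-order index estimator
    print(f"S_{i} = {Si:.3f}")
```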
P25: A robust stochastic structure-preserving Lagrangian scheme in computing effective diffusivity of 3D time-dependent flows
Zhongjian Wang | The University of Hong Kong | Hong Kong
In this paper, we propose a robust stochastic structure-preserving Lagrangian scheme for computing the effective diffusivity of time-dependent chaotic flow problems, which are described by stochastic differential equations (SDEs). Our numerical scheme is based on a splitting method to solve the corresponding SDEs, in which the deterministic subproblem is discretized using structure-preserving schemes while the random subproblem is discretized using the Euler-Maruyama scheme. By exploring the intrinsic diffusion nature of the solutions, we develop a probabilistic approach to analyze the errors of our numerical schemes. We obtain a sharp and uniform-in-time convergence analysis for the proposed numerical scheme that allows us to accurately compute long-time solutions of the SDEs. As such, we can compute the effective diffusivity for time-dependent flow problems. Finally, we present numerical results to demonstrate the accuracy and efficiency of the proposed method in computing the effective diffusivity for the time-dependent Arnold-Beltrami-Childress (ABC) flow and the time-dependent Kolmogorov flow in three-dimensional space.
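A minimal sketch of the splitting idea for dX = v(X) dt + sqrt(2*eps) dW with the steady ABC flow: the deterministic drift splits into three exactly solvable (volume-preserving) shear sub-steps, and the noise is added by an Euler-Maruyama sub-step. Coefficients, step size and particle counts are illustrative, and the time-dependent case would make A, B, C functions of t:

```python
import numpy as np

A, B, C, eps, dt = 1.0, np.sqrt(2.0 / 3.0), np.sqrt(1.0 / 3.0), 0.1, 0.01
rng = np.random.default_rng(0)

def step(X):
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    x = x + dt * (A * np.sin(z) + C * np.cos(y))   # shear in x (y, z frozen)
    y = y + dt * (B * np.sin(x) + A * np.cos(z))   # shear in y (x, z frozen)
    z = z + dt * (C * np.sin(y) + B * np.cos(x))   # shear in z (x, y frozen)
    Xd = np.stack([x, y, z], axis=1)               # volume-preserving composition
    return Xd + np.sqrt(2.0 * eps * dt) * rng.standard_normal(Xd.shape)

X0 = X = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 3))
n_steps = 5000
for _ in range(n_steps):
    X = step(X)
# effective diffusivity estimate from the mean-squared displacement: D ~ MSD / (2 t)
print(np.mean((X - X0) ** 2) / (2.0 * n_steps * dt))
```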
P26: Characterising the uncertainty of advection-dominated solute transport in a spatially disordered domain
George Price | The University of Manchester | United Kingdom
Solute transport in nature often takes place in spatially complex domains with inherently random structures. To evaluate the effects of stochastic variability, we study a simple model involving unidirectional flow past an array of randomly located point sinks, which remove solute via first-order kinetics. We focus on the case where advection dominates diffusion, which causes concentration fields to become non-smooth, with boundary layers forming upstream of each sink. We extend a homogenization approximation (M.J.Russell & O.E.Jensen, IMA J. Appl. Math. 2019) using a Green's function method to evaluate corrections arising due to discrete and disordered sink locations, which is tested against Monte Carlo simulation. The solute field develops a staircase structure, with (co)variances that are either smooth or exhibit spikes, depending on the degree of disorder in sink locations. This approach allows us to characterise uncertainty in net solute transfer in models of physiological transport.
- CANCELED - P27: Learning Low-Complexity Autoregressive Models via Proximal Alternating Minimization
Fu Lin | United Technologies Research Center | United States
We consider the estimation of the state transition matrix in vector autoregressive models, when time sequence data is limited but nonsequence steady-state data is abundant. To leverage both sources of data, we formulate the least squares minimization problem regularized by a Lyapunov penalty. We impose cardinality or rank constraints to reduce the complexity of the autoregressive model. The resulting nonconvex, nonsmooth problem is solved by using the proximal alternating linearization method (PALM). We prove that PALM is globally convergent to a critical point and that the estimation error monotonically decreases. Explicit formulas are obtained for the proximal operators to facilitate the implementation of PALM. We demonstrate the effectiveness of the developed method by numerical experiments.
P28: Evidence-based likelihoods and uncertainties improve calibration and evaluation of mechanistic epidemiological simulations
Albert Lee | Institute for Disease Modeling | United States
As genetics continues to play an increasing role in disease surveillance programs, a systematic and robust method for evaluating and calibrating epidemiological models is necessary. Through our work with dynamic models for malaria transmission, we present a framework for approximate Bayesian computation that has been tailored to the specific needs of model comparison with genomic data and uncertainty quantification of genetic features.
The model is calibrated to genetic data obtained from malaria infections in Thiès, Senegal, over the past 11 years, and insights from the model inform future surveillance strategies of national malaria control programs. We use kernel density estimators to calculate the likelihood of models based on the distribution of summary statistics, and we employ an incremental mixture importance sampling (IMIS) procedure to fit the model.
For both methods we show how the mechanics of a model can be used to rationally inform implementation decisions, as well as the assumptions going into the model. We further show how detailed analyses of the uncertainties in summary statistics can address scientific questions regarding the genetic basis of the model. Finally, we propose that these conclusions may help us to more effectively integrate genetics into survey designs.
P29: Physics-Informed Neural Networks for Stochastic High-Frequency Electromagnetics
Ion Gabriel Ion | TU Darmstadt | Germany
We are interested in studying the frequency response of a high-frequency electromagnetic device under the influence of random geometrical deviations. Due to the computational complexity of electromagnetic field simulations, we aim to construct a surrogate model which provides a good approximation to the original model and is computationally inexpensive to evaluate. The classical methods for building surrogate models use the field solver as a black box and do not make use of the governing physical laws. In this work, we use physics-informed artificial neural networks for the approximation of the frequency-dependent scattering parameter of a waveguide. The inputs of the neural network are the random geometrical parameters and the frequency. The partial differential equation, as well as the excitation field, is embedded in the loss function. We show that physics-informed neural networks offer an alternative method for building surrogate models with a reduced number of field solver calls.
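As a minimal illustration of a physics-informed loss, here is a PyTorch sketch for the 1-D toy Helmholtz problem u'' + k^2 u = 0 with the wavenumber k entering as a network input, mirroring how frequency and geometry parameters enter the surrogate; the architecture, sampling, and the boundary conditions u(0) = 0, u'(0) = k (giving u = sin(kx)) are illustrative assumptions:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    # PDE residual on random collocation points (x, k)
    x = torch.rand(256, 1, requires_grad=True)
    k = 1.0 + 4.0 * torch.rand(256, 1)
    u = net(torch.cat([x, k], dim=1))
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde = ((d2u + k**2 * u) ** 2).mean()          # residual of u'' + k^2 u = 0
    # boundary terms enforcing u(0) = 0 and u'(0) = k
    x0 = torch.zeros(64, 1, requires_grad=True)
    k0 = 1.0 + 4.0 * torch.rand(64, 1)
    u0 = net(torch.cat([x0, k0], dim=1))
    du0 = torch.autograd.grad(u0.sum(), x0, create_graph=True)[0]
    bc = (u0**2).mean() + ((du0 - k0) ** 2).mean()
    loss = pde + bc
    opt.zero_grad(); loss.backward(); opt.step()
```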
P30: Parameter estimation for an elastic rod model to determine the mechanical properties of leaves from raster images
Michael Thomson | University of Nottingham | United Kingdom
It is common for long slender objects in engineering and biology to be modelled as an elastic rod, defined by a high-order system of nonlinear differential equations depending upon several mechanical and geometric parameters. Estimation of these parameters is complicated by the system having multiple solutions, requiring manual identification of the optimal solution branch. Furthermore, the independent variable (centreline arclength) and some dependent variables (angle and curvature) are internal to the rod and cannot be observed directly in a noisy representation such as a raster image. To help understand the impact of mechanical properties on light interception and crop photosynthesis, we propose a method, adapted for images of slender wheat leaves, that resolves these issues by approximating the rod using splines. An initial data-driven representation of the rod and the independent variable is found using principal curves. Fidelity to the rod model is then enforced by introducing a spline penalty term involving the Euler-Bernoulli equations for a bending rod, alternated with the principal-curves projection step to update the independent variable. The weight of this penalty term is increased in a process of continuation, providing automatic determination of the optimal solution branch and estimates of the model parameters. We show how, using these methods, the rod model can be fitted to image data of leaves, enabling the determination of their mechanical properties.
P31: Fusing Optimal Uncertainty Quantification with low-rank decomposition techniques
Jinwoo Go | Georgia Institute of Technology | United States
Bayesian inference can be "brittle" with high-dimensional parameters. To reduce this weakness in large-scale Bayesian inversion, we fuse Optimal Uncertainty Quantification (OUQ) with low-rank decomposition. OUQ suggests an evaluation method for Bayesian inference by characterizing the range of posterior predictions over an admissible class of Bayesian priors. We combine OUQ with low-rank decomposition to address problems of much greater scale, such as uncertain fields. Specifically, we approximate the covariance of the field parameters using a low-rank decomposition and use the identified subspace to obtain a range of predictions in both a deterministic and a statistical way.
P32: Sensitivity Analysis for Microscopic Crowd Simulation
Marion Goedel | Technical University of Munich | Germany
Microscopic crowd simulation can help to enhance the safety of pedestrians in situations ranging from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, lack of knowledge or limited measurement accuracy adds uncertainty to the input. In addition, for meaningful parameter studies, we need to identify the most influential parameters. Sensitivity analysis is a powerful tool to quantify the impact of uncertain input parameters.
So far, the majority of sensitivity analyses performed for microscopic crowd simulations have been carried out using various semi-manual methods. Uncertainty quantification, on the other hand, offers standardized and fully automated approaches that we believe to be beneficial for pedestrian dynamics, namely global sensitivity analysis to identify influential input parameters.
We first perform a global sensitivity analysis using Sobol’ indices and then crosscheck the results by identifying important parameter directions with active subspaces. We apply both methods to a typical scenario for crowd simulation, a bottleneck. Since the constriction can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this topography. We show qualitative agreement between the results of both methods. Moreover, we analyze and interpret the sensitivities with respect to the chosen locomotion model and application.
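A minimal sketch of the active-subspace crosscheck: eigendecompose a Monte Carlo estimate of C = E[grad f(x) grad f(x)^T]; the placeholder gradient stands in for the gradient of the crowd simulation's quantity of interest (e.g., egress time) with respect to the uncertain model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 5, 2000
weights = np.array([3.0, 1.0, 0.2, 0.05, 0.01])      # illustrative sensitivities
grad_f = lambda x: weights * np.cos(x)               # placeholder gradient

X = rng.uniform(-1.0, 1.0, size=(M, d))              # parameter samples
G = np.array([grad_f(x) for x in X])
Cmat = G.T @ G / M                                   # C = E[grad grad^T]
evals, evecs = np.linalg.eigh(Cmat)
evals, evecs = evals[::-1], evecs[:, ::-1]           # sort descending
print(evals)         # a large gap after the first eigenvalue indicates a
print(evecs[:, 0])   # one-dimensional active subspace; this is its direction
```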
P33: Uncertainty quantification in contaminated water treatment techniques evaluation using Pythagorean fuzzy TOPSIS through biparametric distance measure
Animesh Biswas | University of Kalyani | India
The uncertainties in decision-making processes relating to environmental issues can be inherent to natural inconsistencies, such as hydrological and climatic variations, or related to human activities. Although removing pollutants from contaminated water may affect the quality of natural water, several techniques are used to purify the water. To handle uncertainties in decision-making processes, the Pythagorean fuzzy set (PFS) has appeared as an efficient tool. In order to quantify uncertainties in choosing the most suitable technique for removing pollutants, such as cadmium, from water, decision data are taken in linguistic form and evaluated by transforming them into their Pythagorean fuzzy equivalents. The technique for order preference by similarity to ideal solution is applied through a newly introduced Pythagorean fuzzy weighted generalized distance measure with two parameters and an entropy measure for computing the weights of decision makers and of different criteria, viz. technology, material, economic issues, social and environmental impacts, waste disposal, etc., of the available technologies, viz. chemical precipitation, flotation, ion exchange, adsorption, and membrane filtration. The method identifies hydroxide precipitation as the most appropriate technique, owing to its relative simplicity and higher maturity compared to the others. The achieved results are also compared with those of existing methods, and the proposed methodology is found to perform better.
P34: Beyond the probabilistic interpretation of ensemble predictions for nonlinear dynamical systems: Can we go possibilistic?
Noémie Le Carrer | Institute for Risk and Uncertainty, University of Liverpool | United Kingdom
Ensemble forecasting has gained popularity in the field of numerical medium-range weather prediction as a means of handling the limitations inherent to predicting the behaviour of a high-dimensional, nonlinear system showing high sensitivity to initial conditions. Small strategic perturbations of the initial conditions and, in some cases, stochastic parameterization schemes for the dynamical equations make it possible to sample the possible future scenarios in a Monte Carlo-like fashion. Results are generally interpreted in a probabilistic way by fitting a probability density function to the ensemble. However, this probabilistic interpretation is regularly criticized for not being reliable, especially for predicting extreme events, because of the chaotic nature of the dynamics of the atmospheric system, model error, and the fact that ensembles of forecasts are not, in reality, produced in a probabilistic manner. In this work, we study whether a possibilistic and causal framework is an interesting alternative, first in terms of predictive performance, but also in terms of theoretical guarantees and explicative power. We test our approach on the Lorenz 96 model, a low-dimensional surrogate for the nonlinear atmospheric dynamics. To discuss the advantages and limitations of our framework with respect to the results of the standard probabilistic approach, we compute standard skill metrics for assessing extreme event predictions, or for assessing their value for a given user.
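For concreteness, a minimal sketch of the Lorenz 96 ensemble-forecast testbed on which either interpretation (probabilistic density fitting or a possibilistic one) can then be built; ensemble size, perturbation scale, and lead time are illustrative choices:

```python
import numpy as np

F, d, dt = 8.0, 40, 0.01
rng = np.random.default_rng(0)

def l96(x):   # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1, -1) - np.roll(x, 2, -1)) * np.roll(x, 1, -1) - x + F

def rk4(x):   # classical fourth-order Runge-Kutta step
    k1 = l96(x); k2 = l96(x + 0.5 * dt * k1)
    k3 = l96(x + 0.5 * dt * k2); k4 = l96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

truth = rng.standard_normal(d)
for _ in range(1000):                                # spin up onto the attractor
    truth = rk4(truth)
ens = truth + 1e-3 * rng.standard_normal((50, d))    # 50 perturbed members
for _ in range(500):                                 # 5 model time units of lead time
    ens = rk4(ens)
print(ens[:, 0].mean(), ens[:, 0].std())             # forecast spread at one site
```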
P35: Generalised Langevin equation with simulated annealing: convergence in probability to global optimum
Martin Chak | Imperial College London | United Kingdom
One way to find the global minimum of a nonconvex function is to use overdamped Langevin dynamics, also known as Brownian dynamics, with a decreasing noise term, realising a form of simulated annealing. For the case where the function is quadratic, it can be shown explicitly that there can be an advantage to using the underdamped Langevin dynamics rather than the overdamped dynamics, depending on the strength of the quadratic. Recent work by Monmarché applies simulated annealing to the underdamped dynamics, which incorporates momentum to overcome local minima even without noise. This work extends the idea to an approximation of generalised Langevin dynamics, which adds yet another auxiliary momentum variable to the underdamped Langevin dynamics, yielding convergence in probability to the global minimum with a quantitative rate; alongside is evidence to suggest that exploration of the state space is increased considerably.
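A minimal sketch of the underdamped variant with a logarithmic cooling schedule, dX = V dt, dV = (-grad f(X) - gamma*V) dt + sqrt(2*gamma*T_t) dW; the objective and constants are illustrative, and the generalised Langevin version adds a further auxiliary momentum variable:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (x**2 - 1.0)**2 + 0.3 * x          # nonconvex; global min near x = -1
grad = lambda x: 4.0 * x * (x**2 - 1.0) + 0.3

x, v, gamma, dt = 2.0, 0.0, 1.0, 1e-3            # start in the wrong basin
for n in range(200_000):
    T = 1.0 / np.log(2.0 + n * dt)               # slowly decreasing temperature
    v += (-grad(x) - gamma * v) * dt + np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
    x += v * dt
print(x)   # typically ends near the global minimizer around -1
```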
- CANCELED - P36: Can fire sales risk be assessed based on partial information?
Raymond Pang | London School of Economics | United Kingdom
This paper assesses the effects of fire sales in bipartite networks of banks and assets. These types of networks are associated with risks seen in the 2008 financial crisis, such as the contagion of defaults for mortgage bonds. In the first part we use data from the European Banking Authority's stress tests to assess fire sales risk for various years. In the second part we analyse how well such an analysis can be done if only partial information is available for the underlying financial network. We test several approaches for dealing with partial information in fire sales stress tests and find that the risk from fire sales decreases over time. We are also able to recover this trend when only the partial information is used in the stress tests.
- CANCELED - P37: Generalized Kernel-Based DMD and Non-Linear Reduced Modeling
Patrick Héas | INRIA | France
Extended dynamic mode decomposition (EDMD) is a data-driven reduced model designed for non-linear dynamical systems. Based on a set of representative trajectories, it consists in learning a low-rank linear operator approximating trajectory states that have previously been immersed by a non-linear mapping in a high-dimensional Hilbert space. Computing a reduced model using this high-dimensional embedding is difficult, since neither operator inference nor recursion nor the non-linear mapping is tractable in a general setting. To date, state-of-the-art algorithms are computationally demanding or rely on restrictive assumptions.
In this work, we focus on non-linear mappings onto reproducing kernel Hilbert spaces and propose a generalization of the so-called "kernel-based DMD" method, improving its computational complexity. In particular, we show how to use the possibly infinite-dimensional operator, the solution of a constrained optimization problem, in order to compute the reduced model efficiently and exactly. More precisely, the proposed algorithm has a complexity linear in the ambient dimension and independent of the Hilbert space dimension and of the trajectory length.
Using synthetic data, we show that the proposed algorithm reduces approximation errors compared to state-of-the-art algorithms, while being more robust to noise and overfitting.
P38: Spatio-temporal kriging from CloudSat observations
Pierre Minvielle | CEA/DAM | France
Spatio-temporal statistical learning has received increased attention in the past decade, due to the proliferation of spatially and temporally indexed data, especially collected from satellite remote sensing. Observational studies of clouds are recognized as an important step towards improving cloud representation in weather and climate models. Since 2006, NASA's CloudSat satellite has carried a 94 GHz cloud-profiling radar and is able to retrieve, from radar reflectivity, microphysical parameter distributions such as water or ice content. The collected data are piled up with the successive satellite orbits of nearly 2 hours, leading to a large database of about 2 TB.
To go beyond cloud analysis, it is interesting to be able to interpolate in space, and to predict in time, the cloud microphysics over the medium to long term. Since an accurate estimation is obviously unattainable, this is an issue of uncertainty quantification. Starting from an exploratory data analysis, we have recently initiated a statistical kriging-based approach that is able to interpolate/predict from the dataset and provide uncertainties. Beforehand, it requires in particular estimating the parameters of the spatio-temporal covariance model; this is performed in a Bayesian setting, which allows for estimation and uncertainty quantification. The approach is then applied to a subset of the CloudSat dataset. In the future, a hierarchical model could be developed, taking into account spatio-temporal correlations and uncertainties.
P39: Bayesian Optimization for Decision Support with Incomplete Risk Preferences
Raul Astudillo | Cornell University | United States
Inspired by a collaboration with a major energy company, we consider decision-making based on expensive computational experiments when there is uncertainty about both environmental variables and the decision-maker’s preferences. This arises, for example, when using a high-fidelity simulator to decide where to drill for oil subject to geological uncertainty with an incomplete description of the decision-maker’s risk preferences. A typical approach would be to estimate the decision-maker’s preferences over uncertain future financial payoffs using standard gamble queries and then perform single-objective optimization. However, errors in risk preference estimation may yield a poor suggested decision. Furthermore, decision-makers often desire more control and may not trust a single suggested decision. Here, we propose an approach that helps the decision-maker find the best decision by repeatedly (1) choosing a decision and environment to evaluate through an expensive computation; and (2) choosing decisions, from among those evaluated thus far, over which to further elicit the decision-maker's preferences. After this process is complete, a menu of designs is shown to the decision-maker who makes a final selection. Our main contribution is a novel Bayesian optimization algorithm that simultaneously builds a metamodel for the simulator and learns the decision-maker’s preferences to maximize the expected utility of the best design in the final menu.
P40: Comparison of different regression techniques for estimating the conditional probability distribution of a censored response - application to marine flooding
Jeremy Rohmer | BRGM | France
In many forecasting problems, predicting the conditional mean E(Y|X) of the variable of interest Y given some predictor variables X is not informative enough to support decision-making. Several statistical techniques have been developed to provide richer information in the form of the full conditional probability distribution F(Y|X). The available methods include: parametric distributions within the setting of generalized additive models for location, scale and shape (GAMLSS); distribution regression and transformation models (the Box-Cox transformation being a popular example); and quantile regression models, e.g. based on non-parametric techniques like quantile random forests (QRF). Estimating F(Y|X) is however more tedious when Y involves large fractions of zeros and is limited to non-negative values, i.e. when Y is left-censored at zero. Motivated by the distribution estimation of coastal flooding indicators induced by tides and storms, we aim at assessing the impact of different variants of these methods, namely: the use of non-parametric and parametric methods (polynomials, splines and trees); the choice of the parametric probability distribution for GAMLSS (zero left-censored normal distribution, zero-adjusted gamma distribution); the optimisation of the QRF splitting rule; and the effect of sample size. The comparison addresses the pros and cons with respect to predictability (measured by the continuous ranked probability score and the Wasserstein distance), flexibility and computational cost.
P41: Parameter Estimation and Uncertainty Quantification for a Global Marine Biogeochemical Model
Joscha Reimer | Kiel University | Germany
Our focus is on a complex marine biogeochemical model. Millions of measurement data are available for calibrating this model. The corresponding measurement errors have varying standard deviations and are usually correlated. We have used the generalized least squares estimator and derivative-based global optimization algorithms to estimate the parameters of the model.
The uncertainties regarding the model parameters as well as the model outputs were quantified using approximations of the covariance matrix of the corresponding estimators and the resulting confidence intervals. Three different ways to approximate the covariance matrices, based on derivatives of the model with respect to its parameters, were used and compared.
The reduction of the uncertainty by further measurements was estimated as well, and additional measurements were planned using optimal experimental design methods. We present the obtained results together with the applied techniques for parameter estimation, uncertainty quantification and experimental design.
- CANCELED - P42: The impact of unexpected drilling events and their corresponding geological uncertainty to reservoir prediction
Kurt Rachares Petvipusit | Equinor ASA | Norway
It has been shown that failing to consider the uncertainty of drilling aspects in the reservoir model can lead to poor reservoir prediction. We have experienced that unexpected drilling events, such as drilling time, platform capacity, drilling windows, production/drilling start time, well trajectories, and production/injection schedule, can have a significant impact on the production profile. We model the unexpected drilling events as drilling uncertainty in the reservoir model. In this work, we study the impact of such drilling uncertainty together with the geological uncertainty of the reservoir model. We show that the main and interaction effects of drilling and geological parameters on the production profile can be identified. With the proposed workflow, we can also reveal hidden features that have a significant impact on the reservoir prediction.
P43: Machine Learning Uncertainty Representations: modeling random variables via conditional density estimation
Ricardo Baptista | MIT | United States
Non-intrusive methods are a popular approach for modeling the output random variables of complex engineering systems. These techniques build functional approximations to the input-output mapping from limited sample evaluations of the system using, e.g., spectral projection or least-squares fits. While these methods enjoy nice convergence properties for certain classes of problems, they suffer in practice from i) using a fixed family of functions that are often poorly suited to many scenarios (e.g., polynomials for discontinuous mappings), ii) limited guidance on how to perform model selection (e.g., to select the polynomial order), and iii) no quantification of the epistemic uncertainty in the mapping arising from limited data and a finite-cardinality class of functions. To address these deficiencies, this poster presents a statistical learning framework entitled Machine Learning Uncertainty Representations (MLUR). Instead of directly approximating the input-output mapping, this framework models the conditional density of the output random variables given the underlying "germ" random variables. We parameterize these conditional densities using different response surfaces (e.g., polynomials or neural networks) and estimate their coefficients using maximum likelihood estimation together with a cross-validation procedure for model selection. The MLUR framework demonstrates improved estimates of the output statistics on various test functions and on data from a PDE model.
P44: A statistical framework to deal with model discrepancy in Zika virus dynamics
Michel Tosin | Rio de Janeiro State University | Brazil
Physical laws are great allies in the construction of predictive models for several phenomena. Unfortunately, for epidemic systems, the invariance principles that give rise to the dynamic evolution laws are not so well understood, so that most epidemic models are built on ad hoc considerations, which induce epistemic uncertainties and, consequently, limitations in their predictive ability. These limitations can be circumvented by using a probabilistic approach that considers the model discrepancy, i.e., the difference between the real system and the model responses, as a stochastic object. In this sense, the present work employs a Bayesian framework for Zika virus dynamics, where the discrepancy is encapsulated in the model parameters, which are estimated through a process of identifying the coefficients of a polynomial chaos-based metamodel. The key idea is to represent the random vector that lumps the model parameters by a polynomial chaos expansion and to consider the underlying coefficients as random variables, identified by a process of Bayesian inversion. In this statistical procedure, the prior distribution for the coefficients is constructed from known information with the aid of the maximum entropy principle, and the effect of several likelihood functions is investigated.
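As a schematic illustration of the Bayesian identification step, the following random-walk Metropolis sketch infers a single coefficient of a toy decay model under a bounded flat prior (standing in for the maximum-entropy prior) and a Gaussian likelihood; the toy model and all names are hypothetical stand-ins for the polynomial chaos metamodel and its coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
c_true = 0.8
data = np.exp(-c_true * t) + 0.02 * rng.standard_normal(t.size)  # toy data

def log_post(c, sigma=0.02):
    """Log-posterior: bounded flat prior plus Gaussian likelihood."""
    if not 0.0 < c < 5.0:
        return -np.inf
    r = data - np.exp(-c * t)             # toy model standing in for the PCE
    return -0.5 * np.sum((r / sigma) ** 2)

c, chain = 1.0, []
for _ in range(5000):
    prop = c + 0.05 * rng.standard_normal()   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(c):
        c = prop                              # accept the proposal
    chain.append(c)
posterior_mean = np.mean(chain[1000:])        # discard burn-in
```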
P45: Solving inverse problems with multilinear metamodel for costly experiments
Jhouben Cuesta Ramirez | CEA LETI & Mines Saint-Etienne | France
Inverse problems consist of determining the parameters for which the model output is closest to the observations. In engineering design, the model F is often given by a costly multivariate function representing the physical phenomenon, so solving the inverse problem directly on F is intractable. One way to alleviate this problem is to replace F by a metamodel trained with a few well-chosen model evaluations. We focus on multilinear metamodels of the form F(m, U) = c * m * U, where m is the one-dimensional parameter of interest, U is a vector of additional variables and c is a real number. We investigate deterministic and probabilistic approaches. First, when the inputs are continuous, we obtain an analytical expression for a suitable regularized least squares problem. We also propose three different methods based on MC/MCMC to solve the Bayesian inverse problem. Secondly, we explore the extension of the methodology to a mixed (continuous and categorical) input setting. All the work is illustrated by applications in optics and nuclear engineering.
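For the continuous case, one natural regularized least squares problem for this multilinear metamodel has a closed form: minimizing ||y - c*m*U||^2 + lam*(m - m0)^2 over m gives m = (c U.y + lam m0) / (c^2 U.U + lam). A minimal sketch follows; the Tikhonov weight lam and prior value m0 are illustrative assumptions, not necessarily the authors' regularization.

```python
import numpy as np

def invert_m(y_obs, c, U, lam=1e-2, m0=0.0):
    """Closed-form minimizer of ||y_obs - c*m*U||^2 + lam*(m - m0)^2."""
    U = np.asarray(U, dtype=float)
    return (c * U @ y_obs + lam * m0) / (c ** 2 * U @ U + lam)

# toy usage: recover m from noisy observations of c*m*U
rng = np.random.default_rng(1)
U = rng.uniform(size=5)                   # additional variables (toy values)
c, m_true = 2.0, 0.7
y_obs = c * m_true * U + 0.01 * rng.standard_normal(5)
m_hat = invert_m(y_obs, c, U)             # close to m_true for small noise
```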
- CANCELED - P46: A comparison of random forest and Gaussian process emulator under model uncertainty in reservoir prediction
Mark Ashworth | Heriot-Watt University | United Kingdom
A surrogate modeling technique is often used when assessing or evaluating a performance criterion is computationally expensive. Most reservoir applications require computationally expensive flow simulators to evaluate performance criteria, e.g. oil and gas production or Net Present Value (NPV). This computational burden becomes even heavier for uncertainty assessment. A surrogate modelling technique is an alternative approach for reducing the computational cost of evaluating or assessing the performance criterion. This work investigates two surrogate modeling techniques: random forests and Gaussian process emulators. We study the influence of training-set characteristics, sample sizes and sampling strategies on the prediction accuracy of both random forests and Gaussian process emulators with: 1) benchmark functions, 2) reservoir models under uncertainty. The study reveals that the Gaussian process emulator could be used as a surrogate modeling technique for similar reservoir applications.
- CANCELED - P47: Representation, Propagation, and Visualization of Geometric Uncertainty
Daniel Lee | University of Colorado Boulder | United States
Due to the inherently uncertain nature of our world, conventional deterministic engineering analyses are inadequate, inaccurate, and inefficient. To gain a better understanding of the inherent uncertainties and the stochastic system responses, we investigate an efficient method to obtain statistical properties of system responses from the propagation of geometric uncertainty and propose a novel method to visualize the global stochastic system responses. Utilizing the parametric nature of isogeometric analysis, the geometric uncertainty is represented as a family of geometries. Monte Carlo simulations are performed to obtain the first and second moments of the quantities of interest. For more efficient propagation of uncertainty, a surrogate model is constructed using the non-intrusive polynomial chaos expansion. The stochastic system responses are captured in the parent domain and visualized globally over the entire physical domain, in addition to local geometric uncertainty quantification at a point. Numerical tests are performed on two geometries with two, four, and 14 random input variables with uniform and normal distributions. R-squared and Q-squared values are used to validate the polynomial chaos surrogate models, which also prove efficient in predicting the first and second moments of the stochastic system responses. Our novel visualization method provides an effective way to gain a global understanding of geometric uncertainty and the resulting stochastic system responses.
P48: Generalizing the unscented ensemble transform to higher moments
Deanna Easley | George Mason University | United States
We develop a new approach for estimating the expected values of functionals of multivariate non-Gaussian distributions. Rather than specifying an input distribution, we assume that we are only given the first four moments of the distribution. The goal is to summarize the distribution using a small number of quadrature nodes, called sigma points, choosing the nodes and weights to match the specified moments of the distribution. The classical unscented ensemble matches the mean and covariance of a distribution; in this work, we generalize the unscented ensemble by accounting for higher moments when creating the sigma points. It turns out that the key to matching higher moments is the rank-1 tensor decomposition. Together with appropriate weights, we can use the sigma points to estimate expected values: by passing the sigma points through a nonlinear function and applying our quadrature rule, we can estimate the moments of the output distribution. By matching more moments of the input distribution, we demonstrate reduced error, and we derive an upper bound on the error.
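For reference, here is a minimal sketch of the classical unscented ensemble, which matches the mean and covariance with 2n+1 sigma points; the poster's higher-moment extension via rank-1 tensor decompositions is not reproduced here, and the scaling parameter kappa and the toy test function are illustrative choices.

```python
import numpy as np

def unscented_sigma_points(mean, cov, kappa=1.0):
    """2n+1 sigma points and weights matching a given mean and covariance."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    pts = np.vstack([mean]
                    + [mean + L[:, i] for i in range(n)]
                    + [mean - L[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)                  # weight of the central point
    return pts, w

# estimate E[f(X)] with the induced quadrature rule
mean = np.zeros(2)
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
pts, w = unscented_sigma_points(mean, cov)
f = lambda x: np.sin(x[0]) + x[1] ** 2
estimate = sum(wi * f(p) for wi, p in zip(w, pts))
```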
P49: Multifidelity Monte Carlo Sampling in Plasma Microturbulence Analysis
Julia Konrad | Technical University of Munich | Germany
Monte Carlo sampling (MC) is an established technique for higher-dimensional uncertainty propagation. However, when dealing with computationally expensive models, MC becomes prohibitive due to its slow convergence rate. Multifidelity Monte Carlo sampling (MFMC) is a recently developed method that improves on this aspect. MFMC uses surrogate (low-fidelity) models in addition to the given high-fidelity model: to estimate statistical moments of the high-fidelity model output, evaluations from both types of models are combined. Here, we focus on a real-world application, the simulation of plasma microturbulence. Driven by high density and temperature gradients, microinstabilities and the resulting turbulence complicate the magnetic confinement of fusion plasmas. The physical parameters for the simulation of such plasmas can only be measured with uncertainty, and therefore a UQ approach is needed. Due to the complexity of this application and the high number of stochastic inputs, standard MC is not sufficient. We employ MFMC for the first time in plasma microturbulence analysis, using the plasma turbulence code GENE and data-driven surrogates. In our test cases, estimators of the mean of the model output obtained by MFMC show a mean squared error at least two orders of magnitude lower than that of standard MC estimators, demonstrating the suitability of MFMC for uncertainty propagation in this application. We further show how the selection of low-fidelity models impacts the estimators' accuracy.
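A minimal two-model sketch of an MFMC-style (control-variate) mean estimator follows; the toy functions stand in for GENE and a data-driven surrogate, and the sample-budget split is an arbitrary assumption rather than the optimized allocation used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda z: np.exp(0.9 * z)          # expensive high-fidelity model (toy)
f_lo = lambda z: 1.0 + 0.9 * z            # cheap low-fidelity surrogate (toy)

n_hi, n_lo = 50, 5000                     # budget: few HF runs, many LF runs
z_hi = rng.standard_normal(n_hi)
z_lo = np.concatenate([z_hi, rng.standard_normal(n_lo - n_hi)])  # nested

y_hi, y_lo = f_hi(z_hi), f_lo(z_hi)
# variance-optimal control-variate coefficient from the shared samples
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# MFMC mean estimator: HF mean corrected by the LF control variate
mfmc_mean = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo.mean())
```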
P50: Acousto-Electric Tomography with uncertain sound speed
Bjørn Jensen | Technical University of Denmark - DTU | Denmark
Acousto-Electric Tomography (AET) relies on the electrical framework of Electrical Impedance Tomography modulated by ultrasound waves. It is an emerging tomographic method based on the coupling of different physical phenomena that can produce high-contrast and high-resolution images. The modelling of AET often rests on several simplifying assumptions. In particular, the ultrasound modelling often assumes that the propagating wave front is completely known because the sound speed in the medium is constant. In reality, however, small variations in the sound speed occur, and they lead to inaccuracies in the reconstruction. In this work we explore the effects of uncertainty in the sound speed and analyze the reconstruction artifacts that arise when such erroneous modelling is used.
P51: Imprecise random field analysis with regard to non-linear material behavior
Mona Madlen Dannert | Leibniz Universität Hannover | Germany
Uncertainties can be classified as aleatory or epistemic. While the former can be handled by probabilistic methods, the latter require possibilistic approaches such as interval or fuzzy theory.
In finite element analysis (FEA), it is often reasonable to describe material parameters as random fields, which are classically aleatory. However, since some parameters are not easily determined, they can be considered epistemic, leading to imprecise random fields. Such mixed uncertain input parameters can be propagated with the probability box (p-box) approach, where the probability is not defined by a single value but by an upper and a lower bound (a minimal p-box sketch follows the references below). One approach is to discretize the epistemic uncertainties [1]. However, for non-linear FEA, this becomes costly and it cannot be guaranteed that all crucial values within the interval are considered. Faes and Moens [2] show, using an example of transient dynamic problems, that intermediate values are decisive within the resulting p-box. Furthermore, they propose an optimization approach to find those values a priori.
Within this work, this optimization approach is used to investigate the influence of imprecise random field input parameters in non-linear FEA. For this purpose, the damage evolution of concrete is investigated for several imprecise random field inputs.
[1] M.M. Dannert, R.M.N. Fleury, A. Fau, U. Nackenhorst (2019), Proc. 29th ESREL Conference
[2] M. Faes, D. Moens (2019), Mech. Syst. Signal Pr. 134
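The sketch below illustrates the p-box idea on a scalar margin for simplicity: a Gaussian random variable whose mean is only known to lie in an interval. Sweeping a discretized grid of the epistemic parameter yields lower and upper CDF bounds; as noted above, such a discretization may miss decisive interior values in the non-linear case. The interval and grid are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

mu_grid = np.linspace(-0.5, 0.5, 11)      # discretized epistemic interval
x = np.linspace(-4.0, 4.0, 200)
cdfs = np.array([norm.cdf(x, loc=mu, scale=1.0) for mu in mu_grid])
lower_cdf = cdfs.min(axis=0)              # lower bound of the p-box
upper_cdf = cdfs.max(axis=0)              # upper bound of the p-box
```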
P52: XMC: a modular Python library for hierarchical Monte Carlo methods in distributed environments
Quentin Ayoul-Guilmard | EPFL | Switzerland
Among the many variations of the well-known Monte Carlo (MC) method, hierarchical MC estimators, which combine samples obtained from multiple approximation models to increase speed or accuracy (e.g. multi-level MC, multi-index MC and multi-fidelity MC methods), have received much attention in recent years. However, efficient and reliable hierarchical MC estimators require careful estimation and tuning of several statistical quantities, as well as adaptive strategies to choose the number of samples to acquire from each model.
We present an open-source Python library for the management of such a hierarchy of approximation models. It provides a generic framework to define hierarchical MC estimators and their sampling strategies. Estimators already implemented include expectation, variance, quantiles and conditional value at risk. Besides a user interface via configuration files, the modular structure and high-level programming interface give great flexibility to develop algorithms and to use any Python-interfaced solver (Kratos Multiphysics and FEniCS have already been linked). Since such algorithms are naturally parallelisable, the library supports the COMPSs superscalar tool developed by the Barcelona Supercomputing Center for efficient scheduling of parallel tasks on distributed infrastructures such as clusters or clouds. Finally, we showcase results from this library for various statistical estimations in problems inspired by engineering applications, such as fluid dynamics.
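For orientation, here is a generic sketch of the multilevel MC telescoping estimator that libraries of this kind manage; it deliberately does not use XMC's actual API, and the model hierarchy and per-level sample counts are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(z, level):
    """Toy model hierarchy: bias shrinks as the level increases."""
    h = 2.0 ** -(level + 2)               # mesh size on this level (toy)
    return np.exp(z) * (1.0 + h)

samples_per_level = [4000, 1000, 250]     # fewer samples on expensive levels
estimate = 0.0
for level, n in enumerate(samples_per_level):
    z = rng.standard_normal(n)            # coupled samples on levels l, l-1
    fine = model(z, level)
    coarse = model(z, level - 1) if level > 0 else 0.0
    estimate += np.mean(fine - coarse)    # telescoping sum of corrections
```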
P53: Derivative-Based Global Sensitivity Analysis for Models with High-Dimensional Inputs and Functional Outputs
Helen Cleaves | North Carolina State University | United States
We address global sensitivity analysis for models with high-dimensional inputs and functional outputs. We propose derivative-based global sensitivity measures (DGSMs) for such models and derive a link between these functional DGSMs and generalized Sobol' indices for functional outputs. Low-rank approximations of functional QoIs, through their Karhunen-Loève (KL) representation, are used for output dimension reduction. Moreover, these low-rank KL representations facilitate efficient computation of informative upper bounds on functional Sobol' indices, which can be used to detect unimportant inputs. Adjoint-based gradient computation is used to ensure that the computational cost of computing the functional DGSMs does not scale with the input parameter dimension. We illustrate our approach with a nonlinear ODE model for cholera epidemics and with problems of porous medium flow through heterogeneous media.
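For a scalar output, the DGSM of input i is nu_i = E[(df/dx_i)^2]; the following is a minimal Monte Carlo sketch with exact gradients standing in for the adjoint-based gradients mentioned above. The toy model and input bounds are illustrative assumptions.

```python
import numpy as np

def f(x):                                  # toy model, x has shape (n, 3)
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def grad_f(x):                             # exact gradient of the toy model
    g = np.empty_like(x)
    g[:, 0] = np.cos(x[:, 0])
    g[:, 1] = 10.0 * x[:, 1]
    g[:, 2] = 0.1
    return g

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(10_000, 3))  # input distribution samples
dgsm = np.mean(grad_f(x) ** 2, axis=0)        # nu_i, one DGSM per input
# A Poincare-type inequality turns C * nu_i into an upper bound on the
# (unnormalized) total Sobol' index, so small values flag unimportant inputs.
```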
P54: Uncertainty quantification in the problem of salt contamination of groundwater flow
Alexander Litvinenko | RWTH Aachen | Germany
We solved the density-driven groundwater flow problem with uncertain porosity and permeability. An accurate solution of this time-dependent and non-linear problem is impossible because of the presence of natural uncertainties in the reservoir. Therefore, we estimated various statistics of the solution (the mass fraction).
We started by defining the Elder-like problem. Then we described the multivariate polynomial approximation (generalized polynomial chaos, gPC) approach and used it to estimate the required statistics (a sketch follows the references below). Utilizing the gPC method allowed us to reduce the computational cost compared to the classical quasi-Monte Carlo (qMC) method. Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence and complicated solvers, make the investigation of the convergence of the gPC method a non-trivial task. In the numerical experiments we considered two different aquifers, a solid parallelepiped and a solid elliptic cylinder. The number of cells varied from 0.2M to 15M for the cylindrical domain and from 0.5M to 4M for the parallelepiped.
A. Litvinenko, D. Logashenko, R. Tempone, G. Wittum, D. Keyes, Propagation of uncertainties in density-driven flow, arXiv preprint arXiv:1905.01770, 2019
A. Litvinenko, D. Logashenko, R. Tempone, G. Wittum, D. Keyes, Solution of the 3D density-driven groundwater flow problem with uncertain porosity and permeability, arXiv preprint arXiv:1906.01632, 2019
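A minimal sketch of a non-intrusive gPC surrogate in one stochastic dimension, assuming a standard Gaussian germ: Hermite coefficients are fitted by regression on model samples, and the mean and variance are read off from the coefficients. The toy model stands in for the density-driven flow solver, and the degree is an arbitrary choice.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)             # standard Gaussian germ samples
model = lambda xi: np.exp(0.3 * xi)       # toy stand-in for the flow solver

deg = 5
V = hermevander(xi, deg)                  # probabilists' Hermite basis He_k
V = V / np.sqrt([math.factorial(k) for k in range(deg + 1)])  # orthonormalize
coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

mean = coef[0]                            # gPC mean
variance = np.sum(coef[1:] ** 2)          # gPC variance
```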
P55: Towards The Computation Of An Affordable Sensitivity Of A Large Eddy Simulation
Walter Arias-Ramirez | University of Maryland, College Park | United States
The objective of this work is to develop a methodology to compute the approximate sensitivity of engineering quantities of interest (QoIs) from a large eddy simulation (LES) to variations in the problem parameters. We solve a reduced-order problem, which is not chaotic, to compute the change in the QoI resulting from the variation of one problem parameter. For the closure model we provide an inferred eddy viscosity obtained using two different strategies. The main test case is the flow over an airfoil, with the QoI taken as the lift and/or drag; the parameter space is spanned by the angle of attack and the Reynolds number. We assess the accuracy of the frozen eddy viscosity assumption for different parameters and QoIs.
P56: Inferring an effective eddy viscosity from High Fidelity Turbulence Data
Nikhil Oberoi | University of Maryland, College Park | United States
The turbulence closure problem requires an eddy viscosity to model the second-order moments of velocity in the governing equations for the mean velocity. The objective of this work is to infer eddy viscosities from resolved turbulence data (from LES, DNS or experiments) using optimization methods, and to assess them based on how well they reproduce the corresponding mean fields. The method is demonstrated on wall-bounded flows, including a channel, a boundary layer over a flat plate and the flow over an airfoil. We further investigate the sensitivity of the inferred eddy viscosity to averaging errors in the first- and second-order moments required by the method.
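A minimal sketch of the pointwise inference idea, assuming the Boussinesq relation <u'v'> = -nu_t dU/dy and posing a tiny regularized least-squares problem per wall-normal point; the profiles are toy stand-ins for DNS/LES statistics, and the authors' optimization formulation may differ.

```python
import numpy as np

def infer_nu_t(uv, dUdy, eps=1e-12):
    """Minimizes (uv + nu_t*dUdy)^2 + eps*nu_t^2 pointwise in nu_t."""
    return -uv * dUdy / (dUdy ** 2 + eps)  # eps regularizes dUdy ~ 0

# toy wall-normal profiles standing in for DNS/LES statistics
y = np.linspace(0.0, 1.0, 64)
dUdy = 1.0 / (y + 0.05)                    # mean shear (toy)
nu_t_true = 0.4 * y * (1.0 - y)            # target eddy viscosity (toy)
uv = -nu_t_true * dUdy                     # consistent Reynolds shear stress
nu_t = infer_nu_t(uv, dUdy)                # recovers nu_t_true up to eps
```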
- CANCELED - P57: BOLD.R: A software package to interface directly with BOLD through R
Nishan Mudalige | University of Guelph | Canada
DNA barcoding was developed as a reliable system to identify species. The Centre for Biodiversity Genomics utilizes DNA barcoding to analyze samples submitted by researchers and institutions around the world. The information extracted from the DNA barcoding process is continuously cataloged in the Barcode of Life Data system (BOLD). BOLD is the workbench and mass-storage system for many projects related to genetic barcoding, ecology, conservation, population genetics, evolutionary biology and bioinformatics, among others. The BOLD ID engine implements an advanced machine-learning algorithm that classifies species based on DNA barcodes while accounting for uncertainty in the extraction process. Advances in DNA analysis have led to a rapid increase in the amount of data available for study, and modern statistical techniques consequently play an increasingly important role in the analysis of such large volumes of data. Existing methods to read data from BOLD into statistical software are inconvenient, time-consuming or provide limited information. One of the most popular statistical packages is R, and we have developed an R library called BOLD.R which overcomes existing barriers by allowing users to load data from BOLD directly into R. In this poster we present the implementation and benefits of BOLD.R, validate the application of the clustering algorithm with regard to accounting for uncertainty, provide illustrative examples and present a roadmap for the future.
P58: Local and global sensitivity analyses of thermal consolidation around a point heat source
Aqeel Afzal Chaudhry | Technische Universität Bergakademie Freiberg | Germany
Coupled thermo-hydro-mechanical (THM) models are used for the assessment of nuclear waste disposal, reservoir engineering and geotechnical engineering. Model-based decision making and optimization require sensitivity analyses (SA) and uncertainty quantification (UQ), so an assessment of the UQ and SA methods that work for coupled THM problems on an engineering scale is needed. Due to the different coupling levels, non-linearities and large system sizes, these analyses can be challenging. For an initial screening, it is advantageous to have an analytical solution that encompasses the most relevant primary couplings, can robustly cover the entire parameter space and remains computationally inexpensive.
Booker and Savvidou (1985) and Chaudhry et al. (2019) provided such an analytical solution for consolidation around a point heat source. We compared different approaches to sensitivity analysis, namely local one-variable-at-a-time (OVAT) analysis and global sensitivity analysis (GSA) based on Sobol' indices, for different spatio-temporal settings, in order to observe near- and far-field effects as well as early- and late-stage system responses. We show which parameters and interactions control the results in these different domains and provide physical interpretations. We give application-oriented conclusions on the conditions that should be met when applying the different methods, along with examples of possible misinterpretations. The analysis can serve as a benchmark for UQ and SA software designed around numerical THM simulators.
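A minimal sketch of first-order Sobol' indices via a pick-freeze (Saltelli-type) estimator, of the general kind used in the GSA described above; the toy response and uniform input ranges stand in for the point-heat-source solution and its parameter space.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze estimator of the first-order Sobol' indices of f."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # freeze all inputs except x_i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# toy response with parameter interactions
f = lambda x: np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 * x[:, 2]
S = first_order_sobol(f, d=3)             # one first-order index per input
```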
- CANCELED - P59: Dimension-reduced UQ for Radiation Spectra
Brian M. Adams | Sandia National Laboratories | United States
This poster summarizes an approach to represent and propagate radiation source uncertainty for deterministic Boltzmann transport simulations. The source spectrum input to deterministic radiation transport codes such as Sandia's SCEPTRE is discretized over energy levels (functional inputs), must be non-negative, and must potentially respect other physical constraints. The proposed approach begins with experimental observations from a differential absorption spectrometer, where each channel might have a different measurement uncertainty. Realizations of these dosimetry uncertainties are mapped through a calibration process to discrete spectra for input to the transport code. Non-negative matrix factorization of this ensemble of spectra, together with distribution fitting, yields a reduced basis and stochastic coefficients potentially more tractable for forward propagation.