[ Moved from MW HS 0337 ]
Duration: 90 Minutes
Reproducibility has emerged as an issue in experimental work, and as a consequence, computational work is now being scrutinized for reproducibility as well. Predating this emphasis on reproducibility, the computational community developed methodologies for verification and validation, complemented more recently by techniques based on probabilistic analysis that are grouped together as uncertainty quantification. This minisymposium will be a venue for these ideas and techniques applied to computation.
The overwhelming majority of modern applications in the natural sciences, engineering, and beyond require both statistical estimation, to accurately quantify the behavior of unknown distributed parameters in complex systems, and a means of making optimal decisions that are resilient to this uncertainty. In this minisymposium, we aim to connect researchers working on the optimization of complex systems under uncertainty, such as equilibrium problems, differential algebraic equations, and partial differential equations, with statisticians working in variational statistics, infinite-dimensional statistical estimation, and optimum experimental design.
This mini-symposium aims to bring together researchers working on kernel and other sampling-based approximation methods for high-dimensional problems, in particular, but not restricted to, quasi-Monte Carlo (QMC) methods and sparse grid methods. Kernel methods and the related Gaussian process surrogate models are a powerful class of numerical methods, and they are often employed in problems arising in uncertainty quantification (UQ). Nonetheless, there is much to be explored in their theoretical analysis for UQ applications, which are often formulated as high-dimensional approximation or integration problems.
On the other hand, the theory and applicability of QMC and sparse grid approximation/integration techniques in high- or infinite-dimensional problems have seen considerable advances in recent years, yet they are far from addressing all problems of interest in UQ.
The objective of this mini-symposium is to showcase the latest theoretical results and exchange ideas on sampling-based high-dimensional integration and approximation methods targeting UQ applications.
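As a minimal illustration of the kind of method in scope, the sketch below compares plain Monte Carlo with a scrambled Sobol' sequence from SciPy on a toy 10-dimensional integrand whose exact integral is 1; the integrand and sample sizes are purely illustrative.

```python
# Hedged sketch: scrambled Sobol' QMC vs. plain Monte Carlo on a toy
# 10-dimensional integrand with exact integral 1. Illustrative only.
import numpy as np
from scipy.stats import qmc

d = 10
# f(x) = prod_j (1 + (x_j - 1/2)/j); each factor integrates to 1 on [0,1]
f = lambda x: np.prod(1.0 + (x - 0.5) / (1.0 + np.arange(d)), axis=1)

rng = np.random.default_rng(0)
x_mc = rng.random((2**12, d))                                  # plain MC nodes
x_qmc = qmc.Sobol(d, scramble=True, seed=0).random_base2(12)   # 2^12 Sobol' points

print("MC  estimate:", f(x_mc).mean())    # both should be near 1,
print("QMC estimate:", f(x_qmc).mean())   # QMC typically much closer
```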
In many scientific disciplines, researchers encounter inverse problems in which observational data are used to calibrate mathematical models. Hadamard considered the solvability of such problems in terms of their "well-posedness": he called a problem well-posed if a solution exists, if the solution is unique, and if the solution depends continuously on the data. Inverse problems are typically not well-posed (i.e., they are ill-posed) and require some regularization. Today's availability of high-performance computing has raised the popularity of statistical approaches to inverse problems and of probabilistic regularizations, such as the Bayesian approach.
In this minisymposium, we consider the robustness and non-robustness (that is, the brittleness) of Bayesian inverse problems and related approaches. This includes robustness with respect to perturbations in the data (that is, well-posedness), but also with respect to perturbations in the prior measure or the likelihood. Perturbations in the prior include a potential misspecification of the prior model, whereas perturbations in the likelihood include the replacement of the mathematical model by a discretised version or a surrogate. Moreover, we are interested in the robustness of the algorithms used for statistical inversion, such as MCMC, particle filters, variational Bayes, and approximate Bayesian computation.
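In the function-space formulation these notions can be made precise (a standard sketch, following the Bayesian inverse problems literature; notation is ours): with prior measure $\mu_0$ and negative log-likelihood $\Phi(u;y)$, the posterior $\mu^y$ satisfies

```latex
\[
  \frac{d\mu^{y}}{d\mu_0}(u) = \frac{1}{Z(y)}\,\exp\bigl(-\Phi(u;y)\bigr),
  \qquad
  Z(y) = \int \exp\bigl(-\Phi(u;y)\bigr)\, d\mu_0(u),
\]
```

and well-posedness becomes a quantitative statement, e.g. local Lipschitz continuity of $y \mapsto \mu^y$ in the Hellinger distance; robustness to the prior or the likelihood asks the analogous question for perturbations of $\mu_0$ or $\Phi$.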
In many important inverse problems and engineering computations (e.g., numerical weather prediction, medical tomography, and reliability analysis), data are related to parameters of interest through the solution of an ordinary or partial differential equation (DE). To proceed with computation, the DE must be discretised and solved through linear algebra methods. However, such discretisation introduces bias into parameter estimates and can in turn cause conclusions to be over-confident. Probabilistic numerical methods for DEs and linear algebra aim to provide uncertainty quantification in the solution space of the DE, to properly account for the fact that the governing equations have been altered through discretisation. In contrast to the worst-case error bounds of classical numerical analysis, the stochasticity in DE and linear solvers serves as the carrier of uncertainty about discretisation error and its impact. This statistical notion of discretisation uncertainty can then be more easily propagated to later inferences, e.g. in a Bayesian inverse problem. Several such probabilistic numerical methods have been developed in recent years, and the connections and distinctions between these methods are starting to be understood. In particular, an important challenge is to ensure that such uncertainty estimates are well calibrated. This minisymposium will examine recent advances in both the development and implementation of probabilistic numerical methods in general.
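One concrete instance of this idea, sketched below under illustrative choices of ODE, step size, and noise scale: perturb each explicit Euler step with noise scaled like the local truncation error (in the style of Conrad et al.) and read off discretisation uncertainty from an ensemble of solves.

```python
# Hedged sketch of a probabilistic ODE solver: explicit Euler with a
# random perturbation ~ N(0, sigma^2 h^3) added at each step. The spread
# of an ensemble of solves serves as a proxy for discretisation
# uncertainty. The scale sigma is a calibration choice.
import numpy as np

def perturbed_euler(f, x0, h, n_steps, sigma, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        # deterministic Euler step + noise scaled to local truncation error
        x[n + 1] = x[n] + h * f(x[n]) + sigma * h**1.5 * rng.standard_normal()
    return x

f = lambda x: -x                       # toy ODE x' = -x, x(0) = 1
rng = np.random.default_rng(0)
ens = np.array([perturbed_euler(f, 1.0, 0.1, 50, sigma=1.0, rng=rng)
                for _ in range(200)])
print("mean x(5):", ens[:, -1].mean())   # compare exp(-5) ~ 0.0067; gap = Euler bias
print("std  x(5):", ens[:, -1].std())    # spread = discretisation-uncertainty proxy
```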
Uncertainty quantification plays an increasingly important role in a wide range of problems in the physical sciences and financial markets. The underlying model may be subject to various uncertainties, such as parameter or domain uncertainty, model uncertainty, numerical errors, or some intrinsic stochastic variability of the model. In the latter case, the uncertainty may be introduced by measuring instruments or may result from insufficient observations. For realistic simulations, this is reflected in the underlying differential equation model via a random operator and/or random data. These parameters are often modelled as space-time Gaussian processes, leading to continuous random functions and thin-tailed, symmetric normal distributions.
Although Gaussian random objects have convenient analytical properties, for several applications it might be favourable to model the stochastic quantities as discontinuous fields or processes, which also allow for asymmetric and heavy-tailed distributions. In this minisymposium we bring together researchers whose foci are on stochastic or random partial differential equations that are influenced by discontinuous fields or processes.
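A minimal sketch of the contrast in question, with illustrative rates and scales: a Gaussian (Brownian) path versus a discontinuous compound-Poisson path with heavy-tailed jumps.

```python
# Hedged sketch: continuous Gaussian path vs. a compound-Poisson jump
# process with heavy-tailed (Student-t) jump sizes. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

# Brownian motion: continuous, thin-tailed, symmetric increments
brownian = np.concatenate([[0.0],
                           np.cumsum(np.sqrt(dt) * rng.standard_normal(1000))])

# compound Poisson: piecewise constant with heavy-tailed jumps (df < 3)
n_jumps = rng.poisson(10.0)
jump_times = np.sort(rng.uniform(0.0, 1.0, n_jumps))
jump_sizes = rng.standard_t(df=2.5, size=n_jumps)
jump_path = np.array([jump_sizes[jump_times <= s].sum() for s in t])

print("Brownian path range:", brownian.min(), brownian.max())
print("Jump path range    :", jump_path.min(), jump_path.max())
```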
Hydrological model simulations are often complicated by inevitable uncertainties in initial conditions, boundary conditions, and parameter fields. A proper identification and quantification of such uncertainties is nowadays a must for any modern hydrologist. In this minisymposium, besides presentations focusing on how uncertainty quantification can be properly performed for problems typical of the hydrological sciences (e.g., flow and transport in porous media, river and karst spring discharge predictions, surface water-groundwater interaction…), we want to emphasize why uncertainty quantification is relevant in hydrology and its implications for engineering applications.
The minisymposium received funding from the International Graduate School of Science and Engineering of the Technical University of Munich.
Machine Learning (ML) has evolved into a core technology in many scientific applications. Solutions often require large labeled datasets to achieve high model accuracy. Unfortunately, this is a major bottleneck for many scientific computing applications, where numerical simulations are very expensive. Training on limited data can lead to significant uncertainties or errors when a model is invoked outside its training space. At the same time, the fast execution of ML models once trained makes them ideal for exploring the large numbers of runs needed for Uncertainty Quantification (UQ). Furthermore, many popular ML methods lack the mathematical support needed to prove the robustness and reliability that would motivate their use in scientific computing and UQ applications. This two-part mini-symposium will explore the interplay between ML and UQ, focusing on the following areas: (1) How do we leverage ML successes for scientific computing problems with uncertain inputs? (2) How do we use UQ methods to assess ML predictions and augment them with uncertainty estimates, error bounds, or prediction intervals? Addressing challenges in these areas will lead to greatly improved predictive capabilities. Methods that incorporate mathematical and scientific principles for uncertainty estimates in ML are needed. The statistics literature can be leveraged to improve the model validation process, and advances in UQ and V&V will greatly enhance the mathematical and scientific computing foundations of ML.
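As a minimal sketch of direction (2), assuming synthetic data and scikit-learn's quantile-loss gradient boosting (one of many possible interval constructions):

```python
# Hedged sketch: a crude 90% prediction interval from two quantile-loss
# gradient boosting models. Data and model choices are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(500)

# fit the 5% and 95% conditional quantiles separately
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = np.array([[2.0], [5.0], [8.0]])
for x, a, b in zip(X_new[:, 0], lo.predict(X_new), hi.predict(X_new)):
    print(f"x = {x:.1f}: 90% interval [{a:+.2f}, {b:+.2f}]")
```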
The cryosphere and the processes that force its evolution have profound and permanent effects on the global climate. In particular, Arctic amplification leads to extreme mid-latitude weather, and glacier and ice sheet retreat is increasing global mean sea level, causing the ocean to encroach on coastal communities. Despite these potentially devastating impacts, accurate predictions of future dynamics and rigorous characterizations of the associated uncertainty remain elusive. Poorly understood physics and computational limitations require complex physical processes to be parameterized and calibrated using noisy data that are sparse in both space and time. However, collecting data in remote polar regions is difficult, dangerous, and expensive. Therefore, we must leverage remote sensing techniques and wisely allocate limited resources. Finally, predictive uncertainties must be quantified to give meaningful error bounds on quantities of interest, such as future mean sea level. This session discusses recent advances in understanding the dynamic processes governing the cryosphere given observations and/or models, as well as techniques to obtain and analyze data.
There is typically a mismatch between observations of a process and its representation in a mathematical or numerical model. Such errors arise because the model is incomplete or approximate, and they are amplified by noise in the observations as well as by uncertain, or completely unknown, model states and parameters. In Earth science, errors of these types must be quantified, and a natural tool for doing so is Bayesian inference, where errors are described via conditional probabilities defined for the model, its parameters, and the observations.
This mini-symposium will focus on the numerical solution of Bayesian inference problems in the Earth sciences, which are usually characterized by a large dimension (many parameters and states) and few observations (relative to the number of states and parameters). Moreover, Earth science applications require solutions to three types of Bayesian inference problems: state estimation (data assimilation), parameter estimation, and joint state and parameter estimation.
Our mini-symposium will showcase Bayesian inference "in action" in Earth science. It will provide an opportunity for interaction between applied mathematicians, who are interested in the numerics of Bayesian inference, and Earth scientists, who use Bayesian inference to break new ground in their respective fields.
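A minimal sketch of the state-estimation building block, a stochastic ensemble Kalman filter analysis step; dimensions, observation operator, and noise levels are illustrative.

```python
# Hedged sketch: one EnKF analysis step with perturbed observations.
# All problem sizes and matrices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_ens = 10, 3, 50

X = rng.standard_normal((n_x, n_ens))       # forecast ensemble (columns)
H = np.zeros((n_y, n_x))
H[[0, 1, 2], [0, 4, 9]] = 1.0               # observe three state components
R = 0.1 * np.eye(n_y)                       # observation error covariance
y = rng.standard_normal(n_y)                # observations

A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                  # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

# perturbed observations, one per member, then the analysis update
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_y), R, n_ens).T
Xa = X + K @ (Y - H @ X)
print("forecast spread:", X.std(), " analysis spread:", Xa.std())
```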
Transport maps are deterministic couplings between probability measures with broad applications in uncertainty quantification and machine learning. They have been used for posterior sampling in Bayesian inference, for accelerating Markov chain Monte Carlo and importance sampling algorithms, and as building blocks of generative models and density estimation methods. More broadly, transport---including but not limited to optimal transport---provides an important mathematical foundation for many tools in machine learning and uncertainty quantification. The recent surge of interest in transport maps has been accompanied by efficient numerical methods that make constructing and learning such maps tractable in high dimensions and for large data sets. This minisymposium brings together researchers from uncertainty quantification and machine learning to discuss recent advances in theory, numerics, and applications of transport maps and related techniques.
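In one dimension the construction is explicit, which makes for a compact sketch: the monotone transport map between two distributions is the composition of the reference CDF with the target inverse CDF (the Gamma target and sample sizes below are illustrative).

```python
# Hedged sketch: 1D monotone transport map T = F_target^{-1} o F_ref,
# pushing standard normal samples onto a Gamma(3) target.
import numpy as np
from scipy import stats

T = lambda z: stats.gamma(a=3.0).ppf(stats.norm.cdf(z))   # transport map

z = np.random.default_rng(0).standard_normal(100_000)     # reference samples
x = T(z)                                                  # pushed-forward samples
print("sample mean/var:", x.mean(), x.var())              # Gamma(3): mean 3, var 3
```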
Many modern simulators of physically realistic phenomena use multiple, heterogeneous sub-models, possibly involving different types of physical modelling and dimensionality. This poses many challenges in which the links across sub-models must be accounted for: building surrogates, designing experiments, exploring sensitivities, reducing dimensions, etc. Both theoretical investigations and implementations are hampered by the complex nature of such models and need to be tailored to the specific chain of models. In this mini-symposium, we present a series of talks that address such challenges and offer theoretical as well as practical solutions, together with illustrations. In particular, realistic models of geophysical and biological hazards often include feedbacks across sub-models, or combine sub-models of precursory phenomena (models that set the stage for dangerous events and can be informed by monitoring data) with models of the hazardous phenomenon itself. These challenges require solutions that acknowledge the interactions across multi-physics components.
[ Moved from MW HS 2235 ]
The analysis and comparison of dynamic objects and deforming shapes is important in many real-world applications. Examples include wildfire front-tracking problems, impulse propagation in cardiac tissues, tumor growth, oil reservoir and spill simulations, and pollutant plume dispersion, to name a few. Several difficulties can make the analysis a daunting task and hence need to be addressed: 1) the problem is subject to uncertainty in the location of structures due to numerical errors, measurement noise, and/or intrinsic variations in the system; 2) strong shape deformations and topological changes may not be well captured at all scales; and 3) the notion of distance or similarity between objects can be characterized in various ways.
This situation has fostered a recent body of work focused on both analytical and computational developments in metric spaces. As an example, the Wasserstein metric has become an increasingly popular tool in such diverse fields as image processing, optimization, neural networks, seismic imaging, and numerical conservation laws. It opens up promising avenues for uncertainty quantification, Bayesian inference and data assimilation, where robust comparisons and mappings between different probability measures are often needed.
This MS will review recent advances, applications and remaining challenges of tailored metric spaces and similarity measures for structure-sensitive uncertainty quantification and inference problems.
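A minimal sketch of why such metrics are structure-sensitive: for two translated narrow bumps, a pointwise L2 comparison saturates once the bumps no longer overlap, while the 1-Wasserstein distance keeps growing with the shift (signals are illustrative).

```python
# Hedged sketch: L2 vs. 1-Wasserstein sensitivity to a shifted structure.
import numpy as np
from scipy.stats import wasserstein_distance

x = np.linspace(0, 10, 1000)
bump = lambda c: np.exp(-((x - c) ** 2) / 0.1)   # narrow bump centred at c

for shift in (0.5, 1.0, 3.0):
    f, g = bump(4.0), bump(4.0 + shift)
    l2 = np.sqrt(np.sum((f - g) ** 2) * (x[1] - x[0]))        # saturates
    w1 = wasserstein_distance(x, x, u_weights=f, v_weights=g)  # grows ~ shift
    print(f"shift {shift:.1f}: L2 = {l2:.3f}  W1 = {w1:.3f}")
```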
Extreme events are short-lived episodes occurring due to exogenous causes or internal instabilities during which observables significantly depart from their mean values. A great deal of effort has been devoted to predicting and statistically quantifying extreme events because they can have catastrophic consequences (e.g., structural failure, rogue waves, extreme weather conditions, and market crashes). This is an arduous task because the systems that give rise to extreme events are most often highly complex and strongly nonlinear. This mini-symposium provides a venue to review the latest advances in the field.
In recent decades, advances in both computer hardware/architectures and algorithms have enabled numerical simulations at unprecedented scales. In parallel, uncertainty quantification (UQ) has evolved into a crucial task for predictive numerical simulations. Great effort has therefore been devoted to advancing UQ algorithms in order to make UQ feasible for expensive numerical simulations; however, the combination of an extremely large computational cost for each evaluation of a high-fidelity model and the presence of a moderate-to-large set of uncertain parameters (often correlated with the complexity of the numerical/physical assumptions) still represents a formidable challenge for UQ.
Multilevel and multifidelity strategies have been introduced to circumvent these difficulties by reducing the computational cost required to perform UQ with high-fidelity simulations. The main idea is to optimally combine simulations of increasing resolution levels or model fidelities in order to control the overall accuracy of the surrogates/estimators. This is accomplished by combining a large number of less accurate numerical simulations with only a limited number of high-fidelity, numerically expensive code realizations. In this minisymposium we present contributions on the state of the art in both forward and inverse multilevel/multifidelity UQ and related areas such as optimization under uncertainty.
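A minimal sketch of the idea in formulas (the standard multilevel Monte Carlo identity, following Giles; not specific to any talk): with $P_\ell$ denoting the quantity of interest computed at resolution level $\ell$,

```latex
\[
  \mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}],
  \qquad
  \widehat{P}^{\mathrm{ML}} = \sum_{\ell=0}^{L} \frac{1}{N_\ell}
    \sum_{i=1}^{N_\ell} \bigl( P_\ell^{(i)} - P_{\ell-1}^{(i)} \bigr),
  \quad P_{-1} := 0 .
\]
```

Since the correction variances $V_\ell = \mathrm{Var}[P_\ell - P_{\ell-1}]$ decay with level while per-sample costs $C_\ell$ grow, minimizing total cost at fixed variance gives the allocation $N_\ell \propto \sqrt{V_\ell / C_\ell}$: many cheap low-level samples, few expensive high-level ones.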
In the last decades there has been renewed interest in Gaussian processes (GPs) in statistics and machine learning. New challenges have arisen, especially in uncertainty quantification and optimization for complex systems. The case of continuous inputs has been intensively studied and can be addressed with existing classes of GPs, such as isotropic (radial) kernels defined with the Euclidean distance. However, numerous applications involve more general, non-Euclidean input spaces. This requires the definition of other GPs.
Fortunately, despite the diversity of situations, there are a few common techniques to define valid GPs, such as using a mapping to a Euclidean space. This mini-symposium aims to illustrate the variety of problems encountered, their specific solutions, and the generic techniques. The first part will focus on the case of discrete inputs in Gaussian process meta-modeling. By a discrete input, we mean an input that has a finite number of levels, ordered or not (it may also be called a "qualitative", "categorical", or "factor" input). The second part will present four other cases, where the input can be a permutation, a time-varying quantity, a probability distribution, or a graph.
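A minimal sketch of the "mapping to a Euclidean space" technique for a discrete input, using hypothetical factor levels: one-hot encode the categories and apply a standard squared-exponential kernel to the embedded points, which yields a valid covariance over the levels.

```python
# Hedged sketch: a GP kernel on a categorical input via one-hot embedding.
# The levels and lengthscale are hypothetical/illustrative.
import numpy as np

levels = ["solvent_A", "solvent_B", "solvent_C"]          # hypothetical factor
onehot = {lv: np.eye(len(levels))[i] for i, lv in enumerate(levels)}

def kernel(u, v, lengthscale=1.0):
    # squared-exponential kernel evaluated on the one-hot embedding
    d2 = np.sum((onehot[u] - onehot[v]) ** 2)
    return np.exp(-0.5 * d2 / lengthscale**2)

K = np.array([[kernel(u, v) for v in levels] for u in levels])
print(K)   # a valid (positive semi-definite) covariance over the levels
```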
This minisymposium is devoted to recent developments in methodologies, applications, and lessons learned in estimating physical parameters in complex physical systems. Mathematical models of complex real-world processes are used to study physical processes of interest in science, engineering, medicine, and business. Computer models (or simulators) often require a set of inputs (some known and specified, others unknown) to generate predictions for the physical processes of interest. Physical observations and simulator output allow us to infer both the unknown inputs and the physical process.
Inference about the physical process in the presence of high-volume output and model uncertainty is challenging, since appropriate uncertainty assessment is key to understanding the physical process of interest. In the calibration context, the discrepancy between reality and the simulator is difficult to model. In the inverse problem setting, the high-dimensional input space can make Bayesian inversion computationally challenging.
Bringing together selected leading researchers, this minisymposium is organized in two sessions: calibration (Part I) and inverse problems (Part II). It includes speakers from Europe and North America, diverse in experience level from fresh PhD graduates to mid-career researchers, with backgrounds in statistics, applied mathematics, and engineering. We hope this minisymposium will serve as a nexus for exchanging ideas to address these UQ problems.
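A minimal formulaic sketch of the calibration setting in Part I (the widely used Kennedy-O'Hagan form; individual talks may differ):

```latex
\[
  y_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i ,
  \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
\]
```

where $\eta$ is the simulator run at inputs $x_i$ with calibration parameters $\theta$, $\delta$ is the (often GP-modelled) simulator-reality discrepancy, and $\varepsilon_i$ is observation noise; the confounding between $\theta$ and $\delta$ is precisely why the discrepancy is difficult to model.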
Computational science is a driver of our society's technological advancement, playing a key role in design, decision making, and risk assessment. The ``extreme-scale'' computing era we are living in is enabling a paradigm shift: we no longer approach a problem with a few targeted runs for specific choices of parameters and conditions, but aim increasingly at combining higher-fidelity models with uncertainty quantification (UQ) methods to address, e.g., design optimization and parameter-space exploration. This approach allows us to discover rare events and critical behaviors of a target system, which is key information for high-consequence systems and cutting-edge engineering. If the system of interest is expensive to query, however, UQ can become impractical to complete within a reasonable amount of time. Reduced-order models (ROMs), due to their accuracy, computational efficiency, and certification, constitute a promising technique to overcome this computational barrier and make high-fidelity predictive simulations feasible for UQ. This mini-symposium aims to present recent advances in algorithms, software, and applications in the context of reduced-order models and their broad impact on UQ. The talks will cover a broad range of applications, from hypersonics to multiscale flows and plasma microturbulence.
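One common ROM construction, sketched here on synthetic snapshots: proper orthogonal decomposition (POD) extracts a reduced basis from the SVD of a snapshot matrix and keeps the modes that capture nearly all of the energy.

```python
# Hedged sketch: POD via SVD of a synthetic snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
# synthetic snapshots: a low-rank family of "solutions" plus small noise
modes = np.stack([np.sin(np.pi * k * x) / k for k in range(1, 21)], axis=1)
S = modes @ rng.standard_normal((20, 100)) + 1e-3 * rng.standard_normal((200, 100))

U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% energy
V = U[:, :r]                                     # reduced basis

err = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
print(f"kept {r} of {S.shape[1]} modes; relative projection error {err:.2e}")
```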
Stochastic optimization is an effective approach to solving inverse problems, especially when traditional deterministic optimization methods fail or do not perform well. Important stochastic optimization methods include stochastic gradient descent, Bayesian inference, particle-based Monte Carlo sampling, and many more. With modern data collection techniques, a large amount of data is available as input to inverse problems, which creates a great need for data-driven optimization methods. In this mini-symposium, we focus on numerical methods for data-driven stochastic optimization and explore applications of data-driven stochastic optimization methods in science and engineering.
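A minimal sketch of the simplest method on the list, stochastic gradient descent, applied to a synthetic linear least-squares inverse problem (sizes and step size are illustrative):

```python
# Hedged sketch: SGD for recovering m from noisy data d = G m + noise,
# using one randomly sampled datum per step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_param = 2000, 20
G = rng.standard_normal((n_data, n_param))
m_true = rng.standard_normal(n_param)
d = G @ m_true + 0.1 * rng.standard_normal(n_data)

m = np.zeros(n_param)
step = 1e-2
for _ in range(20000):
    i = rng.integers(n_data)                  # sample one data point
    m -= step * (G[i] @ m - d[i]) * G[i]      # stochastic gradient step
print("relative error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```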
This minisymposium targets research on adaptive and efficient sampling methods for heterogeneous problems that do not depend smoothly on random (model) parameters. Uncertainty quantification for such problems, especially those that exhibit discontinuities, is notoriously challenging to carry out efficiently with existing methods. Due to the lack of regularity, usually only sampling-based methods remain a robust alternative. However, these methods may converge slowly unless combined with suitable acceleration techniques, such as variance reduction. Even then, a method's performance may still be degraded compared to its performance on smooth problems, as demonstrated, for instance, in the context of multilevel Monte Carlo methods for non-smooth functions [Schwab & Mishra, Krumscheid et al]. The challenges of heterogeneous problems have been demonstrated repeatedly for different classes of methods, including localized generalized polynomial chaos using wavelets [LeMaitre and Knio] or multi-elements [Wan & Karniadakis]. Common to these methods is that they require a problem-dependent adaptation of the sampling procedures in the vicinity of heterogeneous features. Here, adaptivity is to be understood in a wide sense, ranging from machine learning approaches for identifying parameterizations or response functions to discontinuity tracking. In this minisymposium we will discuss techniques that combine such adaptivity with efficient sampling algorithms.
Scientific and engineering models, which are generally partial differential equations (PDEs), often contain uncertain model parameters, initial conditions, and boundary conditions. These are typically inferred by fitting to experimental or field observables. Bayesian inverse problems allow these unknowns to be modeled as random variables or fields and provide a probability density estimate for them. This is usually done via sampling, e.g., with a Markov chain Monte Carlo (MCMC) sampler. The probability density captures the uncertainty in the estimated quantities due to shortcomings of the model and sparsity of the data.
Computationally expensive PDE simulators cannot be employed directly inside samplers, so we often resort to statistical emulators. Training data for the emulators can be difficult to generate: we either reduce the dimensionality beforehand or resort to sparse-grid sampling. Priors are generally known only as bounds, but arbitrary parameter combinations sampled from the resulting multidimensional uniform distributions may not be physically realistic, and the PDE simulator may not even run.
We invite talks on dimensionality reduction, the construction of computationally inexpensive proxies of scientific/engineering simulators, strategies to fashion a physically realistic prior, and other practical methods required to solve inverse problems of engineering/scientific interest. Examples where such methods have been used to solve inverse problems are also welcome.
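A minimal sketch of the emulator-inside-the-sampler workflow: random-walk Metropolis on a box-bounded prior, with the expensive PDE solve replaced by a stand-in closed-form "emulator"; all problem details below are hypothetical.

```python
# Hedged sketch: random-walk Metropolis with a cheap surrogate in place
# of the expensive forward model. Prior box, noise level, and proposal
# scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)
surrogate = lambda theta: theta[0] * np.exp(-theta[1])   # stand-in emulator
d_obs, sigma = 0.7, 0.05                                 # data and noise level

def log_post(theta):
    if not (0 < theta[0] < 2 and 0 < theta[1] < 2):      # uniform prior box
        return -np.inf
    return -0.5 * ((surrogate(theta) - d_obs) / sigma) ** 2

theta = np.array([1.0, 1.0]); lp = log_post(theta); chain = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:             # accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                           # discard burn-in
print("posterior means:", chain.mean(axis=0))
```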
In recent years, Bayesian inference has emerged as the most comprehensive and systematic framework for formulating and solving inverse problems with quantified uncertainties. However, the solution of Bayesian inverse problems governed by PDEs is extremely challenging; complex forward models or large parameter dimensions can make Bayesian inversion prohibitive with standard methods. In addition, one often has to solve a PDE-constrained optimization subproblem several times. Recent years have seen intensive efforts to develop advanced algorithms aimed at this class of problems; however, due to the complexity of the algorithms and potential dependencies on derivative information, they have remained buried in the literature and out of the reach of the broad community of scientists and engineers who solve inverse problems. The goal of this minisymposium is to present software frameworks that make advanced algorithms more accessible to domain scientists and provide an environment that expedites the development of new algorithms. These software frameworks can also serve as teaching tools to educate researchers and practitioners who are new to inverse problems, PDE-constrained optimization, the Bayesian inference framework, and UQ in general.
A key challenge associated with simulations and predictions of complex systems is to evaluate the quality of these datasets and the ability of the underlying model to reproduce physically relevant simulations. In statistics, one way to quantitatively evaluate and rank models is statistical scoring. This is typically based on scalar metrics and takes as input verification data and output from the model to be evaluated. In evaluating model simulations or predictions, one aims to detect bias, trends, outliers, or correlation misspecification. Methods to evaluate the quality of unidimensional outputs are well established; however, issues remain related to score approximation and uncertainty. Additionally, the evaluation of multidimensional outputs, or of ensembles of outputs, has been addressed in the literature only relatively recently and remains challenging. We will discuss these challenges associated with evaluating unidimensional and multidimensional simulations or predictions.
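For ensembles of outputs, one widely used proper score is the continuous ranked probability score (CRPS); a minimal sketch via its kernel representation CRPS = E|X - y| - (1/2) E|X - X'|, on synthetic ensembles:

```python
# Hedged sketch: CRPS of an m-member ensemble against a verification y,
# using the standard kernel estimator. Ensembles are synthetic.
import numpy as np

def crps_ensemble(ens, y):
    ens = np.asarray(ens, dtype=float)
    term1 = np.abs(ens - y).mean()                      # E|X - y|
    term2 = np.abs(ens[:, None] - ens[None, :]).mean()  # E|X - X'|
    return term1 - 0.5 * term2

rng = np.random.default_rng(0)
y = 1.3
sharp  = rng.normal(1.3, 0.5, 50)    # well-centred, sharp ensemble
biased = rng.normal(2.3, 0.5, 50)    # biased ensemble (worse score expected)
print("CRPS sharp :", crps_ensemble(sharp, y))
print("CRPS biased:", crps_ensemble(biased, y))
```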