Numerous Earth-observing satellites provide high-resolution and high-volume data that facilitate scientific inference on physical and environmental processes. Remote sensing data products used for scientific investigations are typically subject to multiple stages of processing before they reach the wider community, and the scientific utility of these data products critically depends on a comprehensive assessment of the sources of uncertainty encountered in these stages of processing. One key stage involves the use of a retrieval algorithm to infer a geophysical quantity of interest from a satellite’s observed intensity of radiation.
The retrieval is an inverse problem that has been implemented mathematically and computationally in numerous ways for different satellite missions. Several of the presentations in this mini-symposium will each highlight an individual Earth-observing satellite and its retrieval methodology, emphasizing important contributions to uncertainty in retrieval data products. Methodological developments that interrogate the joint distribution of true geophysical states, retrieved states, and observed satellite spectra will be introduced. The presentations will span multiple Earth science applications, including weather and climate, the carbon cycle, air quality, atmospheric chemistry, and ecosystem health.
This mini-symposium aims to bring together researchers working on kernel-based and other sampling-based approximation methods for high-dimensional problems, in particular, but not restricted to, quasi-Monte Carlo methods and sparse grid methods. Kernel methods and the related Gaussian process surrogate models are a powerful class of numerical methods, often employed in problems arising in uncertainty quantification. Nonetheless, much remains to be explored in their theoretical analysis for UQ applications, which are often formulated as high-dimensional approximation or integration problems.
On the other hand, the theory and applicability of QMC and sparse grid approximation/integration techniques in high- or infinite-dimensional problems have seen considerable advances in recent years, yet they are still far from addressing all problems of interest in UQ.
The objective of this mini-symposium is to showcase the latest theoretical results and to exchange ideas on sampling-based high-dimensional integration and approximation methods targeting UQ applications.
Partial differential equations are a versatile tool to model and eventually simulate physical phenomena. An important aspect for the reliability and relevance of such simulations is the treatment of uncertainties arising from unknown parameters and measurement errors. In particular, the modelling and discretization of uncertainties of the computational domain require special care. Such uncertainties emerge in a natural fashion when considering products fabricated by line production, which are subject to manufacturing tolerances, or shapes obtained by remote sensing techniques such as ultrasound or magnetic resonance imaging. This minisymposium is dedicated to recent developments in the numerical treatment of shape uncertainties in partial differential equations and welcomes contributions addressing analytical aspects, forward modelling, assimilation of measurement data, optimization, and applications.
In this session we concentrate on the latest research insights for uncertainty quantification in transport problems and high-dimensional systems under structural uncertainties, with a focus on kinetic and hyperbolic PDEs and multiscale interacting particle systems.
Data-driven discovery is a defining trend of modern science. A plethora of models have been developed to analyze or assimilate data arising from problems ranging from materials science and chemistry to national defense and health. The proposed mini-symposium will focus on the uncertainty of data, and its speakers will discuss techniques for uncertainty quantification, parameter estimation, and the treatment of noise in complex data, so that robust, reproducible and convergent results are obtained. By the same token, audience and speakers will benefit from a dynamic set of prominent and promising speakers with heterogeneous backgrounds spanning almost the entire spectrum of the mathematical sciences, from topology and geometry to statistics and machine learning.
Duality between data assimilation/nonlinear filtering and optimal control has a rich history tracing back to Kalman and Bucy’s original 1961 paper. Duality is manifested in many guises: for example, with the time arrow reversed, the Riccati equation of optimal control is the same as the covariance update equation of the Kalman filter. In recent years, the duality relationship has been used to derive control-type algorithms for data assimilation and simulation problems. This has led to several new classes of control-type algorithms, such as (i) nonlinear smoothers based on approximate solution of the Bellman equation of optimal control; (ii) forward-backward algorithms based on a Schrödinger bridge-type construction; (iii) the feedback particle filter based on a diffusion map approximation of the solution of a certain Poisson equation; and (iv) gradient flow type interpretations of linear and nonlinear filters. In numerical evaluations, it is often found that these control algorithms exhibit smaller simulation variance and better scaling properties with problem dimension when compared to traditional methods based on importance sampling.
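As a concrete illustration of this duality (standard linear-Gaussian material, included here only for orientation): for the signal model $dX_t = A X_t\,dt + dW_t$ with $\operatorname{Cov}(dW_t) = Q\,dt$ and observations $dY_t = H X_t\,dt + dV_t$ with $\operatorname{Cov}(dV_t) = R\,dt$, the Kalman-Bucy error covariance $P_t$ solves
\[
\dot P_t = A P_t + P_t A^{\top} + Q - P_t H^{\top} R^{-1} H P_t,
\]
while the value-function matrix $S_t$ of the linear-quadratic regulator $\dot x = A x + B u$ with cost $\int (x^{\top} C x + u^{\top} R u)\,dt$ solves, backward in time,
\[
-\dot S_t = A^{\top} S_t + S_t A + C - S_t B R^{-1} B^{\top} S_t.
\]
The two Riccati equations coincide under the identification $A \leftrightarrow A^{\top}$, $H \leftrightarrow B^{\top}$, $Q \leftrightarrow C$, together with a reversal of the time direction.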
This session will serve to provide a snapshot of some of the exciting new developments in this historically significant area.
Reliability analysis and risk assessment for complex physical and engineering systems governed by partial differential equations (PDEs) are computationally intensive, especially when high-dimensional random parameters are involved. Since standard numerical schemes for solving these complex PDEs are expensive, traditional Monte Carlo methods, which require repeatedly solving the PDEs, are infeasible. Alternative approaches, typically surrogate-based methods, suffer from the so-called "curse of dimensionality", which limits their application to problems with high-dimensional parameters. The purpose of this mini-symposium is to bring together researchers from different fields to discuss recent machine learning methods for such problems, focusing on both novel machine learning surrogates and alternative Monte Carlo methods.
Uncertainty quantification plays a significant role in computational energy research. For instance, physical models of new energy storage systems are helpful tools in electromobility applications but may be affected by random inputs. Other relevant examples are renewable energy units and reliable energy network systems under volatile sources. The minisymposium covers the broad field of computational methods for energy and power systems, with a particular focus on efficient methods for uncertainty quantification and sensitivity analysis. The primary purpose is to identify common methodologies and interfaces. Contributions will address applications of current interest as well as efficient algorithms and their mathematical background.
Analysis and modeling under uncertainty are increasingly critical for robust scientific simulations. Physics-based model simulations cannot resolve the mathematical model exactly, typically leaving out fine scales, which are either approximated or not represented. This results in uncertainties in their outputs that need to be characterized. A variety of stochastic methods have been developed to address these errors and uncertainty in order to better describe complex systems. In this symposium we discuss new developments in sub-grid stochastic models, multiscale aspects, model reduction techniques, and the effect they have on Bayesian inversion and data assimilation applications.
Increasingly refined numerical models that depend on a large number of parameters introduce challenges with respect to the computability and interpretability of the generated data. Methods of uncertainty quantification and sensitivity analysis offer ways to identify relevant parameters for the construction of reduced-complexity models. In a well-defined parameter range this ultimately makes it possible to replace the original model by a fast, possibly probabilistic, surrogate and can provide insight into the main dependencies within their range of uncertainty. The present session focuses on the development and application of such techniques that are useful for a variety of model classes from different fields. In particular, this includes Bayesian methods and the interplay of uncertainty quantification with surrogates and low-fidelity models, as well as methods from machine learning. Application cases ranging from environmental science and biomechanics to plasma physics will demonstrate features and limitations. They will also show similarities and differences to be taken into account for techniques that aim to be widely applicable.
Quantifying uncertainty in finance is a major concern when one has to properly address risk management issues, for instance with uncertainty with respect to the model and to its inputs.
In this session, we have selected several up-to-date contributions: how to derive a metamodel in credit risk, where direct sampling of the loss is quite time-consuming (because of the large number of obligors); how to cleverly interpolate financial quantities (interest rate curves, implied volatility surfaces) with kriging techniques accounting for arbitrage-free conditions; how to account for model uncertainty (with a prior distribution on the copula dependency) when computing extreme losses in capital allocation problems; and how to address asset management problems (portfolio optimisation).
The tools cover Gaussian processes, MCMC, splitting methods, polynomial chaos expansions, and stochastic approximation.
History matching is a form of inverse modelling, or calibration/tuning, of the inputs of a complex numerical model given observations of the outputs. History matching is very different from standard Bayesian calibration. For example, the result is not a posterior distribution on the model inputs, but a set of model input points that are not implausible given the data; it is not probabilistic. The idea is simple. A series of waves of model runs is carried out. At each wave, the scaled distance (the implausibility measure) between the observations and the expected value of an emulator (either a Gaussian process or a second-order process) of the model is calculated for all inputs. If this distance is too large, the set of inputs is ruled implausible. The scaling consists of three components: the emulator variance (known but input dependent), the observation variance (so that poor observations are downweighted compared to more accurate ones), and a variance that measures the discrepancy between the model and the real world. After the first wave, a new wave of model runs is carried out in the Not Ruled Out Yet (NROY) space. A new emulator is derived and new implausibilities calculated. At each wave the emulator becomes a better fit to the model, so the NROY space is progressively reduced. This mini-symposium is concerned with the application and extension of history matching to a variety of applications.
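In symbols, writing $z$ for an observation and $\mathrm{E}[f(x)]$ for the emulator expectation at input $x$, the implausibility built from the three variance components described above is typically taken to be
\[
I(x) \;=\; \frac{\bigl|\,z - \mathrm{E}[f(x)]\,\bigr|}{\sqrt{\operatorname{Var}_{\mathrm{em}}(x) + \operatorname{Var}_{\mathrm{obs}} + \operatorname{Var}_{\mathrm{disc}}}},
\qquad
\mathrm{NROY} \;=\; \{\,x : I(x) \le c\,\},
\]
where the cutoff $c$ is a modelling choice (a value around 3, in the spirit of the three-sigma rule, is a common convention); inputs with $I(x) > c$ are ruled implausible at the current wave.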
This minisymposium deals with state-of-the-art reduced order methods for uncertainty quantification in parametric computational fluid dynamics (CFD) problems involving data assimilation, data reconstruction, and random inputs. Special attention is devoted to inverse problems for optimisation and control, as well as to nonlinear problems. Complex applications of the methodology are considered in industrial settings, as well as in medicine.
One of the most critical hypotheses in uncertainty quantification studies is the choice of the distributions of the uncertain input variables that are propagated through the numerical model. In general, such pdfs come from various sources (statistical inference, design or operation rules, expert judgment, calibration, etc.), and are therefore established with a certain level of accuracy or confidence. Moreover, in many applications, related for example to industrial safety, engineers are not able to assign a given probability distribution to some of the inputs. This happens, for example, for inputs corresponding to physical parameters for which no data are available.
Hence, bringing stringent justifications to the overall approach requires quantifying the impact of the pdf modeling assumptions on the quantity of interest (QoI). In this context, “input pdf robustness analysis” has recently been defined as a particular setting of the sensitivity analysis domain (like the screening setting or the quantitative partitioning setting). Various QoIs can be considered, such as the mean of the model output, its variance, the probability that the output exceeds a threshold, a quantile of the output, or even sensitivity indices.
This Minisymposium, which will be held in two parts (4 presentations in each part), aims at presenting several recent theoretical developments on this subject, as well as practical and industrial issues.
Deep learning techniques are becoming the center of attention across many scientific disciplines. Many predictive tasks are currently being tackled using over-parameterized, black-box discriminative models such as deep neural networks, in which interpretability and robustness are often sacrificed in favor of flexibility in representation and scalability in computation. Such models have yielded remarkable results in data-rich domains, yet their effectiveness in data-scarce and risk-sensitive tasks remains questionable, primarily due to open challenges in statistical inference and uncertainty quantification. This mini-symposium invites contributions on uncertainty quantification methods for deep learning and their application in the physical and engineering sciences. Topics include (but are not limited to) Bayesian neural networks, deep generative models, posterior inference techniques, and applications to forward/inverse problems, active learning, Bayesian optimization and reinforcement learning.
With the ever-increasing importance of UQ in various disciplines and fields, software solutions and libraries for UQ problems are becoming more and more relevant. Progress in, and use of, UQ techniques relies on the availability of software features and support. This raises interesting questions for the UQ community, such as: What are the current capabilities of the available tools? For which classes of problems have they been developed? What methods or algorithms do they provide? What are the challenges for UQ software, and which resources are required? What are recent improvements? What are the next steps and the long-term goals of development?
This minisymposium brings together experts on different software packages in the context of UQ, ranging from tools that facilitate individual UQ tasks (such as surrogate modelling, UQ workflows, dimensionality reduction, and data augmentation) to whole frameworks for solving UQ problems. The minisymposium will foster discussion and exchange of ideas between developers and (prospective) users.
The behaviour of many large-scale systems can be modelled by a network of pairwise interactions. Examples include the spread of epidemics, neural activity in the brain, and social media influence. While these models are relatively flexible, their complexity strongly depends both on the structural properties of the networks and on the precise nature of the process unfolding on them. The complete specification of such models requires varying amounts and quality of information. In the majority of situations, however, such data sets are incomplete and contain errors. Furthermore, pairwise interactions on networks are in many instances hidden from us and are impossible or very difficult to measure directly. Problems of interest in such situations include quantifying the effect of errors and omissions in the data on the predictability of the behaviour of the process unfolding on large-scale networks, and the inference of the underlying structure from partial, erroneous and usually indirect observations. In this symposium we consider different approaches to such problems, including the approximation of large-scale networks through statistical averaging techniques and Bayesian inference of the network structure.
Computer models play an essential role in forecasting complicated phenomena such as atmospheric and ocean dynamics and seismic activity, among others. These models, however, are typically imperfect due to various sources of uncertainty. Measurements are snapshots of reality that are collected as an additional source of information and are used to update and even correct the model-based simulations or forecasts. The accuracy of the overall simulations and model-based forecasts is greatly influenced by the quality of the observational grid design used to collect measurements. Optimal data acquisition can be formulated as an optimal experimental design (OED) problem. The framework of model-based OED has gained wide popularity and attention from researchers in various fields, including statistics, engineering, and applied mathematics. Challenges in model-based OED include high dimensionality, misrepresentation of prior knowledge, increasing deviation from Gaussianity, and high correlations of spatiotemporal observations, among others. This minisymposium aims to showcase the latest developments in tackling the challenges in the field of model-based OED for large-scale inverse problems.
Recent years have seen the flourishing of techniques devoted to best incorporating data into models, either for the solution of inverse problems or for approximation purposes. These include domain-aware machine learning techniques, dynamic mode decomposition, and data-driven model order reduction methods. This minisymposium aims to provide a venue for young researchers focusing on the theoretical analysis, development, and application of these methodologies.
Numerous Earth-observing satellites provide high-resolution and high-volume data that facilitate scientific inference on physical and environmental processes. Remote sensing data products used for scientific investigations are typically subject to multiple stages of processing before they reach the wider community, and the scientific utility of these data products critically depends on a comprehensive assessment of the sources of uncertainty encountered in these stages of processing. One key stage involves the use of a retrieval algorithm to infer a geophysical quantity of interest from a satellite’s observed intensity of radiation.
The retrieval is an inverse problem that has been implemented mathematically and computationally in numerous ways for different satellite missions. Several of the presentations in this mini-symposium will each highlight an individual Earth-observing satellite and its retrieval methodology, emphasizing important contributions to uncertainty in retrieval data products. Methodological developments that interrogate the joint distribution of true geophysical states, retrieved states, and observed satellite spectra will be introduced. The presentations will span multiple Earth science applications, including weather and climate, the carbon cycle, air quality, atmospheric chemistry, and ecosystem health.
Bayesian inverse problems for complex systems are usually intractable, mainly due to the expensive forward simulations of the system. In practice, both derivative-free methods (e.g., ensemble Kalman inversion) and fast adjoint methods serve as promising optimization schemes to approximately solve Bayesian inverse problems for complex systems. In this session, we include talks on solving inverse problems for complex systems using either ensemble Kalman methods or fast adjoint methods. With the forward simulations evaluated within the optimization scheme, it is possible to further build surrogate models for MCMC. In several talks in this session, physics-informed approaches are also discussed in the context of Bayesian inverse problems. On the other hand, high-fidelity data for complex systems are expensive to simulate or measure, making the experimental design process critical in order to obtain the most information from the system under limited resources. This experimental design topic is also discussed in this session.
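For reference, one common form of the ensemble Kalman inversion update (the speakers may of course use other variants) for an unknown $u$, forward map $G$, data $y$ and noise covariance $\Gamma$ is
\[
u_{n+1}^{(j)} \;=\; u_n^{(j)} \;+\; C_n^{uG}\,\bigl(C_n^{GG} + \Gamma\bigr)^{-1}\bigl(y - G(u_n^{(j)})\bigr),
\]
where $C_n^{uG}$ and $C_n^{GG}$ are the empirical cross-covariance between the ensemble $\{u_n^{(j)}\}$ and its forward evaluations $\{G(u_n^{(j)})\}$ and the empirical covariance of the forward evaluations, respectively; only evaluations of $G$ are required, no derivatives, which is what makes the method derivative-free.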
Partial differential equations are a versatile tool to model and eventually simulate physical phenomena. An important aspect for the reliability and relevance of such simulations is the treatment of uncertainties arising from unknown parameters and measurement errors. In particular, the modelling and discretization of uncertainties of the computational domain require special care. Such uncertainties emerge in a natural fashion when considering products fabricated by line production, which are subject to manufacturing tolerances, or shapes obtained by remote sensing techniques such as ultrasound or magnetic resonance imaging. This minisymposium is dedicated to recent developments in the numerical treatment of shape uncertainties in partial differential equations and welcomes contributions addressing analytical aspects, forward modelling, assimilation of measurement data, optimization, and applications.
In this session we concentrate on the latest research insights for uncertainty quantification in transport problems and high-dimensional systems under structural uncertainties, with a focus on kinetic and hyperbolic PDEs and multiscale interacting particle systems.
Data-driven discovery is a defining trend of modern science. A plethora of models have been developed to analyze or assimilate data arising from problems ranging from materials science and chemistry to national defense and health. The proposed mini-symposium will focus on the uncertainty of data, and its speakers will discuss techniques for uncertainty quantification, parameter estimation, and the treatment of noise in complex data, so that robust, reproducible and convergent results are obtained. By the same token, audience and speakers will benefit from a dynamic set of prominent and promising speakers with heterogeneous backgrounds spanning almost the entire spectrum of the mathematical sciences, from topology and geometry to statistics and machine learning.
Accuracy is always at odds with efficiency in the context of data assimilation for complex dynamical systems. Such systems often involve large numbers of variables, with impactful nonlinearities and poorly understood stochastic behaviour. Tackling these problems in an efficient manner is the key to unlocking the next generation of algorithms. Directions such as exploiting the time-dependent structure of natural systems, reduced order modeling, accounting for model error, and efficient ways to solve the underlying optimization problem are just some of the topics of fundamental importance for the next few years of research that will be covered.
Reliability analysis and risk assessment for complex physical and engineering systems governed by partial differential equations (PDEs) are computationally intensive, especially when high-dimensional random parameters are involved. Since standard numerical schemes for solving these complex PDEs are expensive, traditional Monte Carlo methods, which require repeatedly solving the PDEs, are infeasible. Alternative approaches, typically surrogate-based methods, suffer from the so-called "curse of dimensionality", which limits their application to problems with high-dimensional parameters. The purpose of this mini-symposium is to bring together researchers from different fields to discuss recent machine learning methods for such problems, focusing on both novel machine learning surrogates and alternative Monte Carlo methods.
Historically, the design and analysis of computer experiments focused on deterministic solvers from the physical sciences via Gaussian process (GP) interpolation. But nowadays computer modeling is common in the social, management and biological sciences, where stochastic simulations abound. In this minisymposium, we bring together a selection of researchers in the areas of statistical surrogate modeling, active learning, and Bayesian optimization of stochastic computer models, simulation campaigns, and high-volume observational studies. Noisier simulations demand bigger experiments to isolate signal from noise, and more sophisticated GP models, such as adding a variance process to track changes in noise throughout the input space in the face of heteroskedasticity. Appropriate surrogate modeling is key to the propagation of uncertainty to the decision criteria underlying important large-scale and real-time control of systems that rely on expensive simulation campaigns. Think of the synthesis between off-line simulation of urban road traffic and ride demand with on-line measurements from potential riders and their routes in a rideshare pool. Or, similarly, the combination of limited data on disease spread with social-network-backed simulation of epidemiological dynamics and the entertainment of intervention strategies such as vaccination and quarantine. The talks will cover these methodologies and their application to such challenging real-world modeling and optimization problems.
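As a minimal illustration of this kind of surrogate, the sketch below (a toy example of our own, with a hypothetical one-dimensional simulator; it is not code from any of the talks) fits a GP to replicate means with a replicate-estimated, input-dependent nugget, in the spirit of stochastic kriging. A full heteroskedastic GP would additionally place a smoothing model, e.g. a second (log-)GP, on the noise variance itself.

import numpy as np

def rbf(x1, x2, lengthscale=0.15, variance=1.0):
    # squared-exponential kernel on one-dimensional inputs
    d = np.subtract.outer(np.ravel(x1), np.ravel(x2))
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def simulator(x, rng):
    # hypothetical stochastic simulator with input-dependent noise
    return np.sin(2 * np.pi * x) + rng.normal(scale=0.05 + 0.3 * x)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 20)                                       # design points
Y = np.array([[simulator(x, rng) for _ in range(10)] for x in xs])   # 10 replicates per point

ybar = Y.mean(axis=1)                          # replicate means
nugget = Y.var(axis=1, ddof=1) / Y.shape[1]    # estimated variance of each mean (heteroskedastic)

# GP prediction on the means, with the input-dependent nugget on the diagonal
K = rbf(xs, xs) + np.diag(nugget)
alpha = np.linalg.solve(K, ybar)
xstar = np.linspace(0.0, 1.0, 200)
Ks = rbf(xstar, xs)
pred_mean = Ks @ alpha
pred_var = rbf(xstar, xstar).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))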
In this minisymposium, we explore the symbiotic relationship between computational statistics and computational dynamics. The interaction between the two fields has long been established. Efficiently computing statistics of dynamical quantities is of interest in science and engineering, and cleverly constructed dynamical systems are used to sample from high-dimensional probability distributions. We will highlight recent advances in numerical methods that utilize tools in one field to solve problems in the other in a novel fashion. We will exhibit new algorithms for sensitivity analysis, efficient sampling methods, and inference.
The synthesis of various information sources, including a priori domain knowledge, statistical assumptions, field data, etc., with large-scale numerical models is one of the key steps in building interpretable and predictive models for supporting critical decisions in science, engineering, medicine, and beyond. Typical examples can be found in oil/gas reservoir modeling, treatment of saltwater intrusion, medical imaging, tumor treatment, and aircraft design. Because of the computationally costly nature of the numerical models and the stringent requirements on the accuracy of the statistical learning outcomes, multilevel and multi-fidelity methods provide a viable route for solving these model-based statistical learning tasks. This mini-symposium will bring together researchers working at the forefront of multilevel and multi-fidelity methods (and other relevant methods) intended to accelerate model-based statistical learning tasks.
Rough volatility models are an increasingly popular class of models in quantitative finance. In contrast to conventional stochastic volatility models, the volatility is driven by a fractional Brownian motion with Hurst index H < 1/2, which is rougher than Brownian motion. This change greatly improves the fit to time series data of underlying asset prices as well as to option prices; see, for instance, [Bayer, Friz, Gatheral, Quantitative Finance 16(6), 887-904, 2016]. Hence, introducing non-Markovian noise improves the predictive power of the model while maintaining parsimony. Unfortunately, the loss of the Markov property poses severe challenges for theoretical and numerical analyses as well as for computational practice.
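For orientation (a standard fact, not specific to the talks in this minisymposium), the roughness statement can be made precise through the covariance of fractional Brownian motion,
\[
\mathrm{E}\bigl[W^H_t\,W^H_s\bigr] \;=\; \tfrac{1}{2}\bigl(t^{2H} + s^{2H} - |t-s|^{2H}\bigr),
\]
whose sample paths are Hölder continuous of every order strictly below H. Thus H < 1/2 yields paths rougher than standard Brownian motion (H = 1/2), and the increments are no longer independent, which is the source of the non-Markovian behaviour referred to above.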
This minisymposium brings together different approaches for various UQ tasks in the context of rough volatility models and predictive models in finance. The problems addressed range from calibration and statistical analysis of the model parameters to optimal control of rough volatility models. To overcome the considerable practical hurdles posed by the lack of Markovianity, the contributions to the minisymposium use diverse tools such as deep neural networks and large deviation theory, assisted by properly analyzed simulation techniques.
History matching is a form of inverse modelling, or calibration/tuning, of the inputs of a complex numerical model given observations of the outputs. History matching is very different from standard Bayesian calibration. For example, the result is not a posterior distribution on the model inputs, but a set of model input points that are not implausible given the data; it is not probabilistic. The idea is simple. A series of waves of model runs is carried out. At each wave, the scaled distance (the implausibility measure) between the observations and the expected value of an emulator (either a Gaussian process or a second-order process) of the model is calculated for all inputs. If this distance is too large, the set of inputs is ruled implausible. The scaling consists of three components: the emulator variance (known but input dependent), the observation variance (so that poor observations are downweighted compared to more accurate ones), and a variance that measures the discrepancy between the model and the real world. After the first wave, a new wave of model runs is carried out in the Not Ruled Out Yet (NROY) space. A new emulator is derived and new implausibilities calculated. At each wave the emulator becomes a better fit to the model, so the NROY space is progressively reduced. This mini-symposium is concerned with the application and extension of history matching to a variety of applications.
Advances in computational medicine have made mathematical modeling of hemodynamics a key area of scientific research. Innovations in high performance computing and high-fidelity models allow for sophisticated approximations of in-vivo cardiovascular dynamics. To this end, a variety of models, including system-level 0D models, 1D fluid dynamics network models, and 3D fluid-structure interaction models, can be used to investigate the structure-function relation of the cardiovascular system on a local, global, or multiscale level. However, these computational models are susceptible to model discrepancy as well as uncertainty in model inputs and predictions. Cardiovascular models are calibrated to sparse data, i.e. they contain parameters unmeasurable in-vivo, making parameter estimation and forward uncertainty propagation difficult. This minisymposium will focus on cardiovascular inverse problems and statistical inference methodology including:
• Parameter estimation techniques for complex ODE-PDE coupled models
• Novel emulation and metamodeling procedures for high-fidelity models
• Advances in surrogate and low-fidelity model construction
• Quantification of model consistency using machine-learning
• Efficient uncertainty propagation and quantification
• Innovative numerical and analytical sensitivity techniques
One of the most critical hypotheses in uncertainty quantification studies is the choice of the distributions of the uncertain input variables that are propagated through the numerical model. In general, such pdfs come from various sources (statistical inference, design or operation rules, expert judgment, calibration, etc.), and are therefore established with a certain level of accuracy or confidence. Moreover, in many applications, related for example to industrial safety, engineers are not able to assign a given probability distribution to some of the inputs. This happens, for example, for inputs corresponding to physical parameters for which no data are available.
Hence, bringing stringent justifications to the overall approach requires quantifying the impact of the pdf modeling assumptions on the quantity of interest (QoI). In this context, “input pdf robustness analysis” has recently been defined as a particular setting of the sensitivity analysis domain (like the screening setting or the quantitative partitioning setting). Various QoIs can be considered, such as the mean of the model output, its variance, the probability that the output exceeds a threshold, a quantile of the output, or even sensitivity indices.
This Minisymposium, which will be held in two parts (4 presentations in each part), aims at presenting several recent theoretical developments on this subject, as well as practical and industrial issues.
Deep learning techniques are becoming the center of attention across many scientific disciplines. Many predictive tasks are currently being tackled using over-parameterized, black-box discriminative models such as deep neural networks, in which interpretability and robustness are often sacrificed in favor of flexibility in representation and scalability in computation. Such models have yielded remarkable results in data-rich domains, yet their effectiveness in data-scarce and risk-sensitive tasks remains questionable, primarily due to open challenges in statistical inference and uncertainty quantification. This mini-symposium invites contributions on uncertainty quantification methods for deep learning and their application in the physical and engineering sciences. Topics include (but are not limited to) Bayesian neural networks, deep generative models, posterior inference techniques, and applications to forward/inverse problems, active learning, Bayesian optimization and reinforcement learning.
With the ever-increasing importance of UQ in various disciplines and fields, software solutions and libraries for UQ problems are becoming more and more relevant. Progress in, and use of, UQ techniques relies on the availability of software features and support. This raises interesting questions for the UQ community, such as: What are the current capabilities of the available tools? For which classes of problems have they been developed? What methods or algorithms do they provide? What are the challenges for UQ software, and which resources are required? What are recent improvements? What are the next steps and the long-term goals of development?
This minisymposium brings together experts on different software packages in the context of UQ, ranging from tools that facilitate individual UQ tasks (such as surrogate modelling, UQ workflows, dimensionality reduction, and data augmentation) to whole frameworks for solving UQ problems. The minisymposium will foster discussion and exchange of ideas between developers and (prospective) users.
Reproducing kernel Hilbert spaces are ubiquitous in applied mathematics and statistics due to the tractability provided by the reproducing property. They commonly underpin the theoretical analysis of stochastic processes (including Gaussian processes) and have historically been used to construct a variety of numerical schemes for interpolation, integration or solving differential equations. Additionally, within the machine learning literature, the last decade has seen fruitful research into the use of kernels in statistical tests, statistical estimators and sampling methods.
In this two-part mini-symposium, we propose to explore these more recent works and highlight their relevance to uncertainty quantification. The first session will focus on the use of kernel-based probability metrics and statistical divergences to construct statistical estimators and hypothesis tests for high-dimensional models or models with intractable likelihoods. The second session will focus on applications of kernels to problems in Monte Carlo methods and approximation of probability measures.
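Two standard identities anchor the discussion (recalled here only for orientation): the reproducing property itself, and the maximum mean discrepancy (MMD), a canonical example of the kernel-based probability metrics mentioned above,
\[
f(x) \;=\; \langle f,\, k(\cdot, x) \rangle_{\mathcal H} \quad \text{for all } f \in \mathcal H,
\qquad
\mathrm{MMD}^2(P, Q) \;=\; \mathrm{E}\,k(X, X') \;-\; 2\,\mathrm{E}\,k(X, Y) \;+\; \mathrm{E}\,k(Y, Y'),
\]
with $X, X' \sim P$ and $Y, Y' \sim Q$ independent. Since MMD$^2$ involves only expectations of kernel evaluations, it remains computable from samples even when likelihoods are intractable, which underlies its use in statistical estimators and hypothesis tests.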
Computer models play an essential role in forecasting complicated phenomena such as atmospheric and ocean dynamics and seismic activity, among others. These models, however, are typically imperfect due to various sources of uncertainty. Measurements are snapshots of reality that are collected as an additional source of information and are used to update and even correct the model-based simulations or forecasts. The accuracy of the overall simulations and model-based forecasts is greatly influenced by the quality of the observational grid design used to collect measurements. Optimal data acquisition can be formulated as an optimal experimental design (OED) problem. The framework of model-based OED has gained wide popularity and attention from researchers in various fields, including statistics, engineering, and applied mathematics. Challenges in model-based OED include high dimensionality, misrepresentation of prior knowledge, increasing deviation from Gaussianity, and high correlations of spatiotemporal observations, among others. This minisymposium aims to showcase the latest developments in tackling the challenges in the field of model-based OED for large-scale inverse problems.
Recent years have seen the flourishing of techniques devoted to best incorporating data into models, either for the solution of inverse problems or for approximation purposes. These include domain-aware machine learning techniques, dynamic mode decomposition, and data-driven model order reduction methods. This minisymposium aims to provide a venue for young researchers focusing on the theoretical analysis, development, and application of these methodologies.
Numerous Earth-observing satellites provide high-resolution and high-volume data that facilitate scientific inference on physical and environmental processes. Remote sensing data products used for scientific investigations are typically subject to multiple stages of processing before they reach the wider community, and the scientific utility of these data products critically depends on a comprehensive assessment of the sources of uncertainty encountered in these stages of processing. One key stage involves the use of a retrieval algorithm to infer a geophysical quantity of interest from a satellite’s observed intensity of radiation.
The retrieval is an inverse problem that has been implemented mathematically and computationally in numerous ways for different satellite missions. Several of the presentations in this mini-symposium will each highlight an individual Earth-observing satellite and its retrieval methodology, emphasizing important contributions to uncertainty in retrieval data products. Methodological developments that interrogate the joint distribution of true geophysical states, retrieved states, and observed satellite spectra will be introduced. The presentations will span multiple Earth science applications, including weather and climate, the carbon cycle, air quality, atmospheric chemistry, and ecosystem health.
This minisymposium focuses on the process of modeling, quantifying and estimating the effects of uncertainties that characterize irreversible/dissipative material behavior under quasi-static and dynamic conditions. Particular examples of great significance include metal fatigue and concrete fracture analysis, as well as material aging of bone tissues. Moreover, special attention will be paid to the multi-scale and multi-fidelity nature of these problems, as well as to Bayesian analysis and the corresponding design of experiments. Numerical tools to be discussed include low-rank functional approximations, Bayesian learning, optimization, stochastic Galerkin methods, polynomial chaos expansions, and stochastic homogenization, to name just a few.
Partial differential equations are a versatile tool to model and eventually simulate physical phenomena. An important aspect for the reliability and relevance of such simulations is the treatment of uncertainties arising from unknown parameters and measurement errors. In particular, the modelling and discretization of uncertainties of the computational domain require special care. Such uncertainties emerge in a natural fashion when considering products fabricated by line production, which are subject to manufacturing tolerances, or shapes obtained by remote sensing techniques such as ultrasound or magnetic resonance imaging. This minisymposium is dedicated to recent developments in the numerical treatment of shape uncertainties in partial differential equations and welcomes contributions addressing analytical aspects, forward modelling, assimilation of measurement data, optimization, and applications.
In this session we concentrate on the latest research insights for uncertainty quantification in transport problems and high-dimensional systems under structural uncertainties, with a focus on kinetic and hyperbolic PDEs and multiscale interacting particle systems.
Accuracy is always at odds with efficiency in the context of data assimilation for complex dynamical systems. Such systems often involve large numbers of variables, with impactful nonlinearities and poorly understood stochastic behaviour. Tackling these problems in an efficient manner is the key to unlocking the next generation of algorithms. Directions such as exploiting the time-dependent structure of natural systems, reduced order modeling, accounting for model error, and efficient ways to solve the underlying optimization problem are just some of the topics of fundamental importance for the next few years of research that will be covered.
Characterization and prediction of rare and extreme events that correspond to large excursions is of central importance in several applications. Important examples can be found in natural phenomena such as climate, weather and oceanography, and in engineering systems such as structures and power grids. Accurate characterization and reliable prediction of these events allows for a realistic balancing of risks and costs in complex and expensive infrastructure. Two important challenges related to these rare and extreme events are i) the limited availability of data corresponding to such events, which makes it difficult to quantify tail properties of the relevant distribution, and ii) determining the statistics of level crossings and durations of temporally or spatially correlated processes. The aim of this MS is to present research that addresses these two general problems. Approaches based on, but not restricted to, data, dynamics, or a combination of both will be discussed.
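As one classical point of reference for challenge ii) (a textbook result, cited here only for context): for a stationary, zero-mean Gaussian process $X(t)$ with variance $\sigma_X^2$ and derivative variance $\sigma_{\dot X}^2$, Rice's formula gives the mean rate of upcrossings of a level $u$ as
\[
\nu^{+}(u) \;=\; \frac{1}{2\pi}\,\frac{\sigma_{\dot X}}{\sigma_X}\,\exp\!\Bigl(-\frac{u^2}{2\sigma_X^2}\Bigr),
\]
showing how tail behaviour and correlation structure jointly enter the crossing statistics; the talks address settings well beyond this stationary Gaussian baseline.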
Historically, the design and analysis of computer experiments focused on deterministic solvers from the physical sciences via Gaussian process (GP) interpolation. But nowadays computer modeling is common in the social, management and biological sciences, where stochastic simulations abound. In this minisymposium, we bring together a selection of researchers in the areas of statistical surrogate modeling, active learning, and Bayesian optimization of stochastic computer models, simulation campaigns, and high-volume observational studies. Noisier simulations demand bigger experiments to isolate signal from noise, and more sophisticated GP models, such as adding a variance process to track changes in noise throughout the input space in the face of heteroskedasticity. Appropriate surrogate modeling is key to the propagation of uncertainty to the decision criteria underlying important large-scale and real-time control of systems that rely on expensive simulation campaigns. Think of the synthesis between off-line simulation of urban road traffic and ride demand with on-line measurements from potential riders and their routes in the assignment of a car. Or, similarly, the combination of limited data on disease spread with social-network-backed simulation of epidemiological dynamics and the entertainment of intervention strategies such as vaccination and quarantine. The talks will cover these methodologies and their application to such challenging real-world modeling and optimization problems.
In this minisymposium, we explore the symbiotic relationship between computational statistics and computational dynamics. The interaction between the two fields has long been established. Efficiently computing statistics of dynamical quantities is of interest in science and engineering, and cleverly constructed dynamical systems are used to sample from high-dimensional probability distributions. We will highlight recent advances in numerical methods that utilize tools in one field to solve problems in the other in a novel fashion. We will exhibit new algorithms for chaotic sensitivity analysis, rare event simulation, stochastic optimal control, and data assimilation.
The synthesis of various information sources, including a priori domain knowledge, statistical assumptions, field data, etc., with large-scale numerical models is one of the key steps in building interpretable and predictive models for supporting critical decisions in science, engineering, medicine, and beyond. Typical examples can be found in oil/gas reservoir modeling, treatment of saltwater intrusion, medical imaging, tumor treatment, and aircraft design. Because of the computationally costly nature of the numerical models and the stringent requirements on the accuracy of the statistical learning outcomes, multilevel and multi-fidelity methods provide a viable route for solving these model-based statistical learning tasks. This mini-symposium will bring together researchers working at the forefront of multilevel and multi-fidelity methods (and other relevant methods) intended to accelerate model-based statistical learning tasks.
Rough volatility models are an increasingly popular class of models in quantitative finance. In contrast to conventional stochastic volatility models, the volatility is driven by a fractional Brownian motion with Hurst index H < 1/2, which is rougher than Brownian motion. This change greatly improves the fit to time series data of underlying asset prices as well as to option prices; see, for instance, [Bayer, Friz, Gatheral, Quantitative Finance 16(6), 887-904, 2016]. Hence, introducing non-Markovian noise improves the predictive power of the model while maintaining parsimony. Unfortunately, the loss of the Markov property poses severe challenges for theoretical and numerical analyses as well as for computational practice.
This minisymposium brings together different approaches for various UQ tasks in the context of rough volatility models and predictive models in finance. The problems addressed range from calibration and statistical analysis of the model parameters to optimal control of rough volatility models. To overcome the considerable practical hurdles posed by the lack of Markovianity, the contributions to the minisymposium use diverse tools such as deep neural networks and large deviation theory, assisted by properly analyzed simulation techniques.
Data science and numerical simulation are moving rapidly toward a workflow-based approach for complex multiscale or multiphysics problems, which better suits the many-task paradigm followed by HPC centers on the path to exascale. As a result, a wide range of tools and frameworks (both generic and domain-specific) have been developed over the years in order to support scientists in designing, implementing and running their complex simulations and workflows efficiently on HPC systems.
In order to produce "actionable" results, these simulations and workflows need to be validated, verified and equipped with uncertainty quantification (VVUQ) such that their output may be relied upon when making important decisions in various domains. The VECMA project (https://www.vecma.eu) aims at developing an open source toolkit (https://www.vecma-toolkit.eu) to ease, and automate where possible, the addition of VVUQ into such multiscale or multiphysics simulations.
In this minisymposium we invite developers of this toolkit to present its most recent version, and researchers in various domains (Fusion, Materials, Climate, Bio-medicine, etc.) to present how it can be integrated into existing applications in order to add VVUQ capabilities.
Advances in computational medicine have made mathematical modeling of hemodynamics a key area of scientific research. Innovations in high performance computing and high-fidelity models allow for sophisticated approximations of in-vivo cardiovascular dynamics. To this end, a variety of models, including system-level 0D models, 1D fluid dynamics network models, and 3D fluid-structure interaction models, can be used to investigate the structure-function relation of the cardiovascular system on a local, global, or multiscale level. However, these computational models are susceptible to model discrepancy as well as uncertainty in model inputs and predictions. Cardiovascular models are calibrated to sparse data, i.e. they contain parameters unmeasurable in-vivo, making parameter estimation and forward uncertainty propagation difficult. This minisymposium will focus on cardiovascular inverse problems and statistical inference methodology including:
• Parameter estimation techniques for complex ODE-PDE coupled models
• Novel emulation and metamodeling procedures for high-fidelity models
• Advances in surrogate and low-fidelity model construction
• Quantification of model consistency using machine-learning
• Efficient uncertainty propagation and quantification
• Innovative numerical and analytical sensitivity techniques
Markov chain Monte Carlo (MCMC) methods have become a well-established tool for approximate sampling in computational statistics and are indispensable for the quantification of uncertainty. One of the main advantages of MCMC approaches, shown in numerical experiments and proven theoretically, is their robust behavior with respect to the dimension, such that in high-dimensional scenarios they are often the method of choice. In recent years there have been several new advances in algorithmic as well as theoretical aspects of these methods, for instance, Wasserstein contraction arguments for proving ergodicity of Markov chains, MCMC methods for inference on non-Euclidean spaces such as Riemannian manifolds, and efficient Metropolis-Hastings algorithms for highly concentrated target measures.
The goal of this session is to discuss these recent developments and the dimension dependence of MCMC from a theoretical as well as a practical point of view.
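As one concrete example of the dimension robustness mentioned above (our choice of illustration; the session is not limited to this algorithm), the preconditioned Crank-Nicolson (pCN) proposal leaves a Gaussian prior invariant, so the accept/reject step involves only the likelihood potential and the acceptance rate does not collapse as the dimension grows. A minimal sketch, assuming a Gaussian prior N(0, C) and a user-supplied negative log-likelihood phi:

import numpy as np

def pcn(phi, C_chol, n_steps, beta=0.2, seed=0):
    # Preconditioned Crank-Nicolson sampler for pi(x) ~ exp(-phi(x)) * N(0, C),
    # where C = C_chol @ C_chol.T; the proposal preserves the prior N(0, C).
    rng = np.random.default_rng(seed)
    d = C_chol.shape[0]
    x = C_chol @ rng.standard_normal(d)            # start from a prior draw
    phi_x = phi(x)
    samples = np.empty((n_steps, d))
    for i in range(n_steps):
        xi = C_chol @ rng.standard_normal(d)       # fresh prior noise
        x_prop = np.sqrt(1.0 - beta**2) * x + beta * xi
        phi_prop = phi(x_prop)
        if np.log(rng.uniform()) < phi_x - phi_prop:   # accept/reject on the likelihood only
            x, phi_x = x_prop, phi_prop
        samples[i] = x
    return samples

# toy, hypothetical setup: 100-dimensional N(0, I) prior, data informing the first coordinate only
d, y_obs, sigma = 100, 1.5, 0.1
phi = lambda x: 0.5 * ((x[0] - y_obs) / sigma) ** 2
chain = pcn(phi, np.eye(d), n_steps=5000)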
Deep learning techniques are becoming the center of attention across many scientific disciplines. Many predictive tasks are currently being tackled using over-parameterized, black-box discriminative models such as deep neural networks, in which interpretability and robustness are often sacrificed in favor of flexibility in representation and scalability in computation. Such models have yielded remarkable results in data-rich domains, yet their effectiveness in data-scarce and risk-sensitive tasks remains questionable, primarily due to open challenges in statistical inference and uncertainty quantification. This mini-symposium invites contributions on uncertainty quantification methods for deep learning and their application in the physical and engineering sciences. Topics include (but are not limited to) Bayesian neural networks, deep generative models, posterior inference techniques, and applications to forward/inverse problems, active learning, Bayesian optimization and reinforcement learning.
With the ever-increasing importance of UQ in various disciplines and fields, software solutions and libraries for UQ problems are becoming more and more relevant. Progress in, and use of, UQ techniques relies on the availability of software features and support. This raises interesting questions for the UQ community, such as: What are the current capabilities of the available tools? For which classes of problems have they been developed? What methods or algorithms do they provide? What are the challenges for UQ software, and which resources are required? What are recent improvements? What are the next steps and the long-term goals of development?
This minisymposium brings together experts on different software packages in the context of UQ, ranging from tools that facilitate individual UQ tasks (such as surrogate modelling, UQ workflows, dimensionality reduction, and data augmentation) to whole frameworks for solving UQ problems. The minisymposium will foster discussion and exchange of ideas between developers and (prospective) users.
Reproducing kernel Hilbert spaces are ubiquitous in applied mathematics and statistics due to the tractability provided by the reproducing property. They commonly underpin the theoretical analysis of stochastic processes (including Gaussian processes) and have historically been used to construct a variety of numerical schemes for interpolation, integration or solving differential equations. Additionally, within the machine learning literature, the last decade has seen fruitful research into the use of kernels in statistical tests, statistical estimators and sampling methods.
In this two-part mini-symposium, we propose to explore these more recent works and highlight their relevance to uncertainty quantification. The first session will focus on the use of kernel-based probability metrics and statistical divergences to construct statistical estimators and hypothesis tests for high-dimensional models or models with intractable likelihoods. The second session will focus on applications of kernels to problems in Monte Carlo methods and approximation of probability measures.