Inverse and big-data problems are widespread in computational sciences and engineering. Despite formidable advances in recent years on all frontiers, ranging from pure mathematics to computational sciences, significant challenges remain, especially when it comes to addressing data-driven problems. In inverse/learning problems, parameters are typically related to indirect measurements by a system of partial differential equations (PDEs) or a network, which could be highly nonlinear and nonconvex. Available indirect data are often noisy and subject to natural variation, while the unknown parameters of interest are high-dimensional, or possibly infinite-dimensional in principle. Bayesian inference provides a systematic framework that allows us to rigorously quantify the uncertainty in inverse/learning problems, and to assess model validity and adequacy. Since the amount of data we wish to process is only going to increase for the foreseeable future, there is a critical need for effective algorithms that integrate data with simulations and learning approaches that are computation- and data-scalable. This minisymposium aims to attract researchers at the forefront of inverse and learning problems, data science, and data-intensive problems to present their latest work on computation- and data-scalable algorithms in inverse problems and learning.
Uncertainty plays a major role in using mathematics to address biological and medical questions, especially when analyzing real-world data. This minisymposium features recent mathematical and computational advances in solving inverse problems and quantifying uncertainties for a wide variety of biological and biomedical applications. Topics include development of numerical methods, model reduction, parameter estimation, and data-driven approaches for applications such as safety pharmacology, cell metabolism, tumor growth, and blood coagulation.
Over the past two decades, we have witnessed two revolutions in applied mathematics and high-dimensional approximation: the rise of sparse reconstruction techniques driven by compressed sensing, and a transformation in data science driven by machine learning with deep neural networks, a.k.a. deep learning. The former seeks to find a compressible representation of a given target function or signal, exploiting structure such as sparsity, parametric smoothness, or low-dimensionality of the solution manifold. The latter seeks to construct a nonlinear approximation from a given dataset, which generalizes well on unseen data points, through a series of compositions of affine and nonlinear mappings. This minisymposium highlights connections between these two topics, with particular attention to recent advances in the theory and algorithms of both approaches, as applied to problems in uncertainty quantification. By bringing together researchers from these two emerging fields, we hope to foster discussion and collaboration on novel theoretical and computational advances in sparse approximation and deep learning, leading to new directions for research.
The MS focuses on the process of modeling, quantifying and estimating the effects of uncertainties that characterize irreversible/dissipative material behavior in quasi-static and dynamic conditions. Particular examples of great significance include metal fatigue and concrete fracture analysis, as well as material aging of bone tissues. Moreover, special attention will be paid to the multi-scale and multi-fidelity nature of these problems, as well as to Bayesian analysis and corresponding design of experiments. Numerical tools to be discussed are low-rank functional approximations, Bayesian learning, optimization, stochastic Galerkin, polynomial chaos expansion, and stochastic homogenization, to name just a few.
The United States Department of Energy (DOE) Laboratory System grew out of the federally-funded scientific developments of World War II. Today, the national laboratories comprise one of the world’s largest scientific research systems. Tackling areas such as environmental modeling, precision medicine, and global security, the DOE laboratories are at the forefront of scientific innovation and, thus, have access to unique research problems, data sets, and facilities. This minisymposium will showcase the many applications and innovations in UQ stemming from the challenges of the national lab environment.
LLNL-ABS-791303. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Computer simulation models, a.k.a. simulators, are used nowadays in virtually all fields of applied science and engineering. Usually, simulators that predict quantities of interest (QoI) as a function of input parameters are deterministic, i.e., they can be considered as a mapping from an input space to an output space. Running the simulator twice with the same input values provides identical outputs.
In contrast, so-called stochastic simulators contain hidden sources of uncertainty (e.g. latent variables) or uncontrollable inputs, on top of the well-identified and controllable inputs, meaning that repeated runs with the same inputs provide different results. Of interest is the resulting distribution of the QoI conditioned on the (controllable) input parameters. This distribution can be characterized in a rough way by replicating runs of the simulator for the same controllable inputs. Unfortunately, in the context of uncertainty propagation and sensitivity analysis, handling stochastic simulators may be highly demanding due to these replications. One appropriate solution is to use surrogate models (also referred to as metamodels) to approximate the conditional expectation of the model from a limited number of simulations.
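As a toy illustration of the replication idea above, the following sketch characterizes the conditional QoI distribution at a fixed controllable input by brute-force replication. The simulator, its latent noise level, and all names are illustrative assumptions, not a method from the abstract.

```python
import random
import statistics

def stochastic_simulator(x, rng):
    """Toy stochastic simulator (hypothetical): the QoI depends on the
    controllable input x and on a hidden latent source of uncertainty."""
    latent = rng.gauss(0.0, 0.5)      # hidden, uncontrollable randomness
    return x ** 2 + latent

def replicate(x, n_rep=1000, seed=42):
    """Roughly characterize the conditional QoI distribution at fixed x
    by replicating runs of the simulator with the same controllable input."""
    rng = random.Random(seed)
    samples = [stochastic_simulator(x, rng) for _ in range(n_rep)]
    return statistics.mean(samples), statistics.stdev(samples)

m, s = replicate(2.0)   # conditional mean near 4.0, spread near 0.5
```

The cost of such replication at every input point is exactly what motivates the surrogate models discussed in this session.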
In this MS, we will present recent developments in the field of surrogate models for stochastic simulators, be it for uncertainty propagation, sensitivity analysis or robust design.
The main topics of the mini-symposium include model uncertainty, robust uncertainty quantification & optimization, and their implications in predictive modeling guarantees and rare-event analysis. We aim to bring together closely related but possibly disparate communities in applied mathematics, applied probability, information theory, operations research, optimization, and economics, to foster interdisciplinary discussions and collaborations. Speakers will demonstrate recent mathematical and conceptual developments of related UQ methods, as well as their applications ranging from engineering design of materials to econometrics and risk analysis.
It is a story as old as time. Models rife with uncertainty are developed for intriguing applications while simultaneously uncertainty quantification (UQ) methods are rapidly advanced. Yet, when the developers of the models and methods meet, it is rarely love at first sight. Either the UQ questions the modeler asks are like the third cousin to those the methods are intended to answer, or the methods require certain types or quantities of data that the modeler is not prepared to deliver. This minisymposium brings together pairs of collaborative researchers giving coordinated presentations on how an application and a UQ method were finally joined in harmony. The first presentation focuses on the application, modeling, and types of UQ questions the researchers seek to answer. The second presentation focuses on how a UQ method was tailored to answer these questions under the constraints of the model.
Data assimilation (DA) in Earth system models combines high-dimensional, coupled, nonlinear models with large volumes of in situ and remotely sensed observational data. The dynamics and observations are nonlinear, the distributions are non-Gaussian, and the cost of simulation is high. The goal of the minisymposium is to provide a forum for this diverse community to discuss and share ideas for advancing the science of DA in climate modeling or any of its components (e.g. atmosphere, ocean, ice sheets, land models, or sea ice). Topics of interest include coupled data assimilation; strategies for estimating and mitigating model errors; strategies for addressing strong nonlinearities and non-Gaussianity; multiscale, multilevel, or multifidelity methods; and machine learning methods for data assimilation.
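As a minimal sketch of sequential data assimilation, the code below runs a scalar Kalman filter through forecast/analysis cycles. The linear dynamics, noise levels, and toy truth signal are illustrative assumptions, deliberately far simpler than the high-dimensional, non-Gaussian setting this session targets.

```python
import random

def kalman_step(x_est, P, y_obs, Q=0.01, R=0.25, a=0.9):
    """One forecast/analysis cycle of a scalar Kalman filter for the
    illustrative linear model x_{k+1} = a * x_k + process noise."""
    # Forecast step: propagate the state estimate and its error variance
    x_f = a * x_est
    P_f = a * a * P + Q
    # Analysis step: blend forecast and observation via the Kalman gain
    K = P_f / (P_f + R)
    x_a = x_f + K * (y_obs - x_f)
    P_a = (1.0 - K) * P_f
    return x_a, P_a

# Assimilate noisy observations of a decaying toy truth signal
rng = random.Random(0)
truth, x_est, P = 5.0, 0.0, 1.0
for _ in range(50):
    truth = 0.9 * truth + rng.gauss(0.0, 0.1)   # process noise variance Q
    y = truth + rng.gauss(0.0, 0.5)             # observation noise variance R
    x_est, P = kalman_step(x_est, P, y)
```

The analysis error variance P settles to a small steady-state value, so the estimate tracks the truth despite starting far from it.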
The amount of data in existence is growing exponentially. This has led to rapidly growing interest in data-driven uncertainty quantification (UQ) approaches for large-scale or high-dimensional problems. However, this core research area is still in its infancy, and new ideas are needed.
The goal of our minisymposium is to provide a forum for this diverse community to discuss and share ideas for developing data-driven UQ approaches. These advanced UQ methods involve (but are not limited to) machine learning, neural networks, and model reduction, as well as advances in the Bayesian framework. Various applications will be used to demonstrate the performance of these improved UQ approaches.
Despite the remarkable growth in computational power, it is still very computationally expensive to simulate most real-world systems in full detail, including a comprehensive analysis of parameter and model uncertainty. In such situations, data-driven approaches are tractable computational methods that provide useful empirical characterizations of uncertainty and have been successfully exploited in recent years. The increasing availability of very large data sets for this purpose makes techniques in machine learning an attractive toolbox for uncertainty emulation and characterization.
This minisymposium focuses on recent algorithmic developments and applications in uncertainty quantification based on data-driven and machine learning approaches for large-scale applications. Topics include data-driven surrogate construction, data assimilation, and physics-informed machine learning based on a limited number of data/observations, providing guidance for system design, forecasting, and related tasks.
The synthesis of various information sources, including a priori domain knowledge, statistical assumptions, field data, and large-scale numerical models, is one of the key steps in building interpretable and predictive models for supporting critical decisions in science, engineering, medicine, and beyond. Typical examples can be found in oil/gas reservoir modeling, treatment of saltwater intrusion, medical imaging, tumor treatment, and aircraft design. Because of the computationally costly nature of the numerical models and stringent requirements on the accuracy of the statistical learning outcomes, multilevel and multi-fidelity methods provide a viable route for solving these model-based statistical learning tasks. This mini-symposium will bring together researchers working at the forefront of multilevel and multi-fidelity methods (and other relevant methods) intended to accelerate model-based statistical learning tasks.
Uncertainty Quantification techniques are by now mature enough to address realistic, large-scale problems of significant relevance. In this minisymposium, we focus in particular on complex fluid dynamics problems for engineering and environmental applications, fields in which computational science has traditionally played a major role. Several different kinds of UQ analyses naturally arise in these fields: forward UQ, optimization under uncertainty, inverse problems, and data assimilation (e.g. for real-time control). In these scenarios, non-standard randomness might occur, and the complexity of the governing PDEs further introduces significant and fascinating theoretical and computational challenges. In particular, polynomial-based UQ methods like polynomial chaos or sparse grid collocation might not work well, in which case one has to resort to sampling methods, control variates, and, more recently, machine learning techniques. Ad-hoc algorithms for high-performance computing are also relevant in this framework and welcome in this minisymposium.
Computer modeling makes possible the simulation of shoreline hazards such as tsunamis and tropical storms. Making predictions of these hazard events requires many simulations to explore the high-dimensional space of input parameters, and hence massive computational budgets. Data science methods are needed to detect nascent storms and tsunami waves and feed information to simulation models, to monitor the evolving hazard or make long-term predictions. Statistical emulators can estimate the output of simulations and greatly reduce the computational burden. However, the necessary outputs are often spatio-temporal fields, and conventional methods for constructing emulators cannot be applied. This mini-symposium, which emerged from research activity during the 2018-19 SAMSI program on Uncertainty Quantification, will bring together scientists working on computational and statistical methodology to better predict and track storms and tsunamis.
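To illustrate the emulator idea in the simplest possible setting, the sketch below replaces an "expensive" toy simulator with a cheap inverse-distance-weighting interpolant built from a handful of stored runs. The simulator, design points, and weighting scheme are all illustrative assumptions; real emulators for spatio-temporal hazard fields need far richer models.

```python
import math

def simulator(x):
    """Stand-in for an expensive hazard simulation (hypothetical toy)."""
    return math.sin(2.0 * x) + 0.5 * x

def emulate(x, train_x, train_y, p=2):
    """Inverse-distance-weighting emulator: a cheap surrogate that
    interpolates stored simulator runs."""
    num = den = 0.0
    for xi, yi in zip(train_x, train_y):
        if x == xi:                       # exact hit on a stored run
            return yi
        w = 1.0 / abs(x - xi) ** p        # closer runs get larger weight
        num += w * yi
        den += w
    return num / den

# Train on a few "expensive" runs, then predict cheaply in between
train_x = [0.25 * i for i in range(13)]   # 13 runs covering [0, 3]
train_y = [simulator(x) for x in train_x]
prediction = emulate(1.1, train_x, train_y)
```

The emulator reproduces stored runs exactly and approximates the simulator between them at negligible cost.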
Many statistical models of interest in engineering, the sciences, and machine learning define a likelihood function that is computationally prohibitive to evaluate. This may arise from the model only being known through a data-generating process, or from the likelihood function involving a high-dimensional integral (e.g., from a marginalization procedure or the computation of a normalizing constant). In these cases, it is difficult to apply classical inference methods such as maximum likelihood estimation or likelihood-based Bayesian inference algorithms. To enable inference in these settings, several approaches have been developed in the statistics and machine learning communities that avoid direct evaluation of the likelihood function (e.g., approximate Bayesian computation). Despite these successes, efficiently solving such problems remains challenging, especially in high dimensions, or when only limited information or few samples are available. This mini-symposium will explore new algorithms and methodologies for performing likelihood-free inference in these complex models.
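A minimal sketch of one such likelihood-free approach, rejection approximate Bayesian computation (ABC), is shown below. The Gaussian data-generating process, flat prior, summary statistic, and tolerance are all illustrative assumptions.

```python
import random
import statistics

def simulate(theta, n, rng):
    """Data-generating process: the likelihood is treated as intractable
    and only forward simulation is available (illustrative Gaussian toy)."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def rejection_abc(y_obs, n_accept=300, eps=0.1, seed=1):
    """Rejection ABC: draw theta from the prior, simulate a dataset, and
    keep theta when the summary statistic (here the sample mean) lands
    within eps of the observed summary."""
    rng = random.Random(seed)
    s_obs = statistics.mean(y_obs)
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(-5.0, 5.0)                        # flat prior
        s_sim = statistics.mean(simulate(theta, len(y_obs), rng))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(7)
y_obs = [rng.gauss(2.0, 1.0) for _ in range(100)]   # data with true theta = 2
post = rejection_abc(y_obs)                         # approximate posterior draws
```

The accepted thetas concentrate around the value that generated the data; choosing informative summaries and tolerances is the hard part in realistic problems.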
The evaluation of failure probabilities is a fundamental problem in reliability analysis and risk management of systems with uncertain inputs. We consider systems described by PDEs with random coefficients together with efficient approximation schemes. This includes stochastic finite elements, collocation, reduced basis, and advanced Monte Carlo methods. Efficient evaluation and updating of small failure probabilities and rare events remains a significant computational challenge. This mini-symposium brings together tools from applied probability, numerical analysis, and computational science and engineering. We showcase advances in analysis and computational treatment of rare events and failure probabilities, including variance reduction, advanced meta-models, and multilevel Monte Carlo.
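The following sketch contrasts crude Monte Carlo with one of the variance-reduction techniques mentioned above (importance sampling) for a small failure probability. The scalar limit state P(X > 3) with X ~ N(0,1) is an illustrative assumption standing in for an expensive PDE-based model.

```python
import math
import random

def crude_mc(n, seed=0):
    """Crude Monte Carlo estimate of p = P(X > 3), X ~ N(0,1)
    (exact value 1 - Phi(3), roughly 1.35e-3)."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > 3.0 for _ in range(n)) / n

def importance_sampling(n, seed=0):
    """Variance-reduced estimate: sample from the shifted proposal N(3,1)
    and reweight by the likelihood ratio phi(x)/phi(x - 3) = exp(4.5 - 3x),
    so most samples land in the rare failure region."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(3.0, 1.0)
        if x > 3.0:
            total += math.exp(4.5 - 3.0 * x)
    return total / n

p_hat = importance_sampling(10_000)
```

With the same accuracy target, the importance sampler needs orders of magnitude fewer samples than crude Monte Carlo, because nearly half of its draws hit the failure region instead of roughly one in a thousand.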
The understanding and incorporation of data within models has become a vital component of applied mathematics. A fundamental question one can ask is: given noisy measurements of data, how can one recover some unknown quantity of interest? Examples of such fields include inverse problems, which are primarily concerned with parameter estimation, and data assimilation, which focuses on state estimation. Both fields have seen a considerable amount of attention due to recent advancements in both classical and statistical approaches. In particular, this mini-symposium will consider particle methods for solving inverse problems with the help of optimization tools, as well as particle methods aiming to represent the posterior distribution of Bayesian inverse problems.
The motivation behind this mini-symposium is to bring together experts from both schools. This would provide a complementary forum in which the connections between the two areas, currently being developed, can be explored.
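As a minimal sketch of the second, Bayesian viewpoint, the code below represents the posterior of a scalar inverse problem by weighted particles. The forward map, flat prior, and noise level are illustrative assumptions, not a method from this session.

```python
import math
import random

def forward(theta):
    """Hypothetical forward map G of the inverse problem."""
    return theta ** 2

def particle_posterior(y_obs, n_particles=5000, noise_std=0.1, seed=3):
    """Represent the Bayesian posterior by weighted particles: draw theta
    from a flat prior on [0, 3] and weight each particle by the Gaussian
    likelihood of the noisy observation y_obs = G(theta) + noise."""
    rng = random.Random(seed)
    thetas = [rng.uniform(0.0, 3.0) for _ in range(n_particles)]
    weights = [math.exp(-0.5 * ((y_obs - forward(t)) / noise_std) ** 2)
               for t in thetas]
    z = sum(weights)
    # Posterior mean as a weighted average over the particle ensemble
    return sum(w * t for w, t in zip(weights, thetas)) / z

theta_mean = particle_posterior(4.0)   # observation generated with theta = 2
```

Weighted particles like these can also feed resampling or optimization-driven updates, which is where the two schools meet.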
The challenge of acquiring the most valuable data from experiments—for the purpose of inference, prediction, design, or control—has received substantial attention in statistics, applied mathematics, and engineering and science. This task can be formalized through the framework of optimal experimental design (OED). Models describing experimental conditions and processes, both physical and statistical, can be particularly useful for arriving at these optimal designs. However, model-based OED faces many challenges, such as formulational difficulties, choices of optimality criteria, computation of information metrics, handling nonlinear responses and non-Gaussian distributions, and dealing with expensive and dynamically evolving simulations. This minisymposium invites researchers working on model-based optimal experimental design, spanning computational and applications-oriented developments.
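As a minimal illustration of a model-based optimality criterion, the sketch below selects a D-optimal subset of candidate measurement times for a simple linear model by exhaustive search. The model and candidate set are illustrative assumptions.

```python
import itertools

def d_criterion(times):
    """D-optimality criterion for the linear model y = theta0 + theta1 * t
    with unit noise: determinant of the 2x2 Fisher information matrix X^T X."""
    n = len(times)
    s1 = sum(times)
    s2 = sum(t * t for t in times)
    return n * s2 - s1 * s1           # det([[n, s1], [s1, s2]])

# Choose the best 3 of 5 candidate measurement times
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
best = max(itertools.combinations(candidates, 3), key=d_criterion)
# The optimal design includes both endpoints of the interval
```

For nonlinear responses, non-Gaussian noise, or expensive simulators, evaluating and optimizing such criteria is exactly where the computational challenges named above arise.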
OpenTURNS is an open-source library for uncertainty propagation by probabilistic methods. Developed by a partnership of five industrial companies (EDF, Airbus, Phimeca, IMACS and ONERA), it benefits from strong practical feedback. Classical UQ algorithms are available: central dispersion, probability of exceedance, sensitivity analysis, metamodels, and stochastic processes. Developed in C++, OpenTURNS is also available as a Python module and has gained maturity thanks to more than 10 years of development. The goal of this minisymposium is to gather the OpenTURNS community and provide an overview of the trends within the software, the associated research topics, and its industrial uses.