The amount of data in existence is growing exponentially. This growth has led to a strong and unavoidable pull toward data-driven uncertainty quantification (UQ) approaches for large-scale or high-dimensional problems. However, this area is still in its infancy, and new ideas are needed in this core research direction.
The goal of our minisymposium is to provide a forum for this diverse group to discuss and share ideas for developing data-driven UQ approaches. These advanced UQ methods involve (but are not limited to) machine learning, neural networks, and model reduction, as well as advances in Bayesian frameworks. Various applications will be used to demonstrate the performance of these improved UQ approaches.
08:30
- CANCELED - Integral Representations for Neural Networks with Applications to Uncertainty Quantification
Armenak Petrosyan | Oak Ridge National Lab | United States
Author:
Armenak Petrosyan | Oak Ridge National Lab | United States
We present neural network integral representations as a generalization of shallow artificial neural networks. For the ReLU activation function, we derive an explicit reconstruction formula on the unit sphere under a finite $L_1$-norm assumption on the outer weights. Discretizing such integral representations provides constructive initializations for neural network training. In the context of uncertainty quantification, we further investigate the uncertainties resulting from the discretization of neural network integral representations.
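The discretization step mentioned above can be illustrated with a minimal numpy sketch: sampling inner weights uniformly on the unit sphere and folding the quadrature factor into the outer weights yields a constructive initialization for a shallow ReLU network. This is an illustrative sketch only; the talk's explicit reconstruction formula is not reproduced here, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_from_integral_representation(d, n_nodes, rng):
    """Monte Carlo discretization of a shallow-network integral
    representation: inner weights sampled uniformly on the unit
    sphere S^{d-1}; outer weights carry the 1/n quadrature factor.
    (Hypothetical sketch, not the talk's reconstruction formula.)"""
    W = rng.standard_normal((n_nodes, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # project rows onto S^{d-1}
    b = rng.uniform(-1.0, 1.0, size=n_nodes)       # biases
    a = rng.standard_normal(n_nodes) / n_nodes     # outer (quadrature) weights
    return W, b, a

def shallow_relu(x, W, b, a):
    """f(x) = sum_i a_i * ReLU(<w_i, x> + b_i)."""
    return np.maximum(W @ x + b, 0.0) @ a

W, b, a = init_from_integral_representation(d=3, n_nodes=200, rng=rng)
y = shallow_relu(np.array([0.1, -0.2, 0.3]), W, b, a)
```

Starting training from such a discretized representation, rather than from arbitrary random weights, is what makes the initialization "constructive"; the residual quadrature error is one source of the discretization uncertainty the abstract refers to.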
09:00
Physics-Informed Recurrent Neural Networks for Land Model Surrogate Construction for Uncertainty Quantification
Vishagan Ratnaswamy | Sandia National Lab | United States
Authors:
Vishagan Ratnaswamy | Sandia National Lab | United States
Cosmin Safta | Sandia National Lab | United States
Khachik Sargsyan | Sandia National Lab | United States
Uncertainty quantification (UQ) for complex models is often prohibitively expensive. Surrogate construction is a necessary step before major UQ tasks, e.g., global sensitivity analysis or model calibration, can be accomplished. Given an ensemble of model training simulations, constructing a model surrogate is a supervised machine learning (ML) task. For expensive models with a large number of input parameters, the critical challenge is to create high-fidelity surrogates with as few model training evaluations as possible. Our application of interest is a simplified land model (sELM) that mimics the land component of the Energy Exascale Earth System Model (E3SM), simulating the feedback between climate and carbon interactions while accounting for the biochemistry.
We consider supervised ML approaches to construct surrogate models for sELM in order to approximate the input-output relationships between approximately 50 physical input parameters and a set of output quantities of interest (QoIs). Specifically, we build on a long short-term memory (LSTM) recurrent neural network, taking into account the already known interactions between input processes and output QoIs. Such a physics-informed architecture is shown to outperform vanilla implementations of LSTM or feed-forward neural networks. We then employ the resulting tree-LSTM surrogates to carry out global sensitivity analysis and model calibration given observational data on select QoIs.
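The surrogate idea above can be sketched in plain numpy: an LSTM cell is rolled out over time with the (static) physical parameters fed in at each step, and QoI trajectories are read from the hidden state. This is an untrained, generic stand-in for the physics-informed tree-LSTM described in the talk; all names, sizes, and the random parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, params):
    """One standard LSTM cell step; gates are computed from [h, x]."""
    z = np.concatenate([h, x])
    i = sigmoid(params["Wi"] @ z + params["bi"])   # input gate
    f = sigmoid(params["Wf"] @ z + params["bf"])   # forget gate
    o = sigmoid(params["Wo"] @ z + params["bo"])   # output gate
    g = np.tanh(params["Wg"] @ z + params["bg"])   # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def surrogate(theta, n_steps, params, W_out):
    """Map physical parameters theta to a QoI time series by rolling
    the cell n_steps times (generic sketch of an LSTM surrogate)."""
    d_h = W_out.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    qois = []
    for _ in range(n_steps):
        h, c = lstm_step(theta, h, c, params)
        qois.append(W_out @ h)
    return np.array(qois)

# Illustrative sizes: 5 input parameters, 8 hidden units, 2 QoIs.
d_theta, d_h, d_q = 5, 8, 2
params = {
    "Wi": 0.1 * rng.standard_normal((d_h, d_h + d_theta)), "bi": np.zeros(d_h),
    "Wf": 0.1 * rng.standard_normal((d_h, d_h + d_theta)), "bf": np.zeros(d_h),
    "Wo": 0.1 * rng.standard_normal((d_h, d_h + d_theta)), "bo": np.zeros(d_h),
    "Wg": 0.1 * rng.standard_normal((d_h, d_h + d_theta)), "bg": np.zeros(d_h),
}
W_out = 0.1 * rng.standard_normal((d_q, d_h))
theta = rng.uniform(-1, 1, size=d_theta)  # one sample of physical parameters
qois = surrogate(theta, n_steps=12, params=params, W_out=W_out)
```

In the physics-informed variant, the dense gate matrices would be replaced by a connectivity pattern reflecting the known process-to-QoI interactions, which is the structural prior the abstract credits for the improved performance.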
09:30
Capturing Training Uncertainty Using Bayesian Neural Networks in a Rotorcraft Ice Detection Application
Jeremiah Hauth | University of Michigan Ann Arbor | United States
Authors:
Jeremiah Hauth | University of Michigan Ann Arbor | United States
Xun Huan | University of Michigan Ann Arbor | United States
Beckett Y. Zhou | TU Kaiserslautern | Germany
Nicholas R. Gauger | TU Kaiserslautern | Germany
Myles Morelli | Politecnico di Milano | Italy
Alberto Guardone | Politecnico di Milano | Italy
Deep neural networks (DNNs) can achieve accurate and rapid predictions, and are excellent surrogate models within a multi-query context such as UQ and optimization. However, their data-driven nature inherently moves away from physics-based principles, and it is thus crucial to quantify the uncertainty induced in the training of DNNs, especially when used to support high-consequence decision-making. We employ Bayesian neural networks (BNNs) that treat DNN weight parameters as random variables and infer them via Bayes' theorem. Computing the posterior distribution using traditional techniques such as Markov chain Monte Carlo is generally impractical, since DNNs easily contain thousands or millions of parameters. We explore several approximate methods for high-dimensional inference, including mean-field and fully-correlated variational Bayes, and Stein variational inference. We assess their trade-offs in cost and accuracy, and the importance of capturing correlations. In a proof-of-concept application on in-flight detection of rotorcraft blade icing, we build a BNN from a database of computational fluid dynamics simulations to directly map acoustic signals to aerodynamic performance metrics, thus bypassing the expensive inverse problems. The BNN is able to produce a distribution of predictions instead of a single-value output, reflecting the quality and confidence of the data-driven model, and offering valuable information for pilot decision-making in potentially dangerous flight conditions.
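The key output of a BNN, a distribution of predictions rather than a point estimate, can be sketched for the simplest case: a mean-field Gaussian posterior over the weights of a one-layer linear model, sampled via the reparameterization trick. This is a minimal illustration under assumed variational parameters, not the talk's trained model or architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def bnn_predict(x, mu, log_sigma, n_samples, rng):
    """Mean-field Gaussian posterior over the weights of y = <w, x>:
    draw weight samples via the reparameterization w = mu + sigma * eps
    and summarize the induced predictive distribution. (Sketch only;
    mu and log_sigma would come from variational training.)"""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.size))
    w = mu + sigma * eps              # one sampled weight vector per row
    preds = w @ x                     # one prediction per weight sample
    return preds.mean(), preds.std()

# Assumed (untrained) variational parameters, for illustration.
mu = np.array([1.0, -2.0, 0.5])
log_sigma = np.full(3, -1.0)          # posterior std ~= 0.37 per weight
x = np.array([0.2, 0.1, -0.3])
mean, std = bnn_predict(x, mu, log_sigma, n_samples=5000, rng=rng)
```

The returned `std` is what carries the training uncertainty to the decision-maker; a fully-correlated variational posterior would replace the diagonal `sigma` with a full covariance factor, at higher cost, which is the trade-off the abstract assesses.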
10:00
Uncertainty Quantification for Nonlinear PDEs with Noisy Data Using Deep Flow-based Generative Models
Liu Yang | Brown University | United States
Authors:
Liu Yang | Brown University | United States
Xuhui Meng | Brown University | United States
George Em Karniadakis | Brown University | United States
We propose to use deep flow-based generative models to solve forward and inverse partial differential equation (PDE) problems with noisy data. By parametrizing the unknown terms in the PDE with a surrogate model, we formulate the forward/inverse problem as a Bayesian inference problem. Deep flow-based generative models are then used to estimate the posterior statistics. Beyond linear PDEs, the proposed method can also handle nonlinear PDEs. Owing to the Bayesian framework, the proposed method naturally takes data noise into account and endows the solutions with quantified uncertainty.
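The flow-based mechanism underlying the abstract can be illustrated with the simplest possible bijection: an elementwise affine map applied to Gaussian base samples, with the change-of-variables log-determinant giving exact densities. Real flow models stack many more expressive bijections, but the bookkeeping is the same; this sketch and its function name are assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

def affine_flow_sample(n, a, b, rng):
    """Simplest invertible flow: x = a * z + b with z ~ N(0, I).
    By change of variables, log p_x(x) = log p_z(z) - sum(log|a|).
    Stacks of such bijections are how flow models represent
    posterior distributions (e.g., over unknown PDE terms)."""
    z = rng.standard_normal((n, a.size))
    x = a * z + b                                      # transformed samples
    log_det = np.sum(np.log(np.abs(a)))                # log |det Jacobian|
    log_pz = -0.5 * np.sum(z**2, axis=1) - 0.5 * a.size * np.log(2 * np.pi)
    log_px = log_pz - log_det
    return x, log_px

a = np.array([0.5, 2.0])   # assumed scales of the target distribution
b = np.array([1.0, -1.0])  # assumed shifts
x, log_px = affine_flow_sample(20000, a, b, rng)
```

Because the flow gives both samples and exact log-densities, it can be trained against a Bayesian target combining the PDE residual and the noisy-data likelihood, which is what turns the forward/inverse problem into posterior estimation.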