Data collection has always been plagued by acquisition errors and various sources of uncertainty. Given the significant growth of the field of uncertainty quantification (UQ), it is imperative that data analysis provide errors and uncertainties that are interpretable within the relevant field of science. Experimental data provide insight into physical processes beyond what current physics models can capture, and data-driven UQ strengthens predictive power and enables validated simulations. Embedding physical understanding of the system within the UQ analysis is crucial to meaningful interpretation of data in science-based applications. This minisymposium presents several real-world applications that require UQ informed by both physical models and data.
10:30
Uncertainty Quantification Applied to Convolutional Neural Network Analyses
Garry Maskaly | Los Alamos National Laboratory | United States
Authors:
Garry Maskaly | Los Alamos National Laboratory | United States
Kyle Hickmann | Los Alamos National Laboratory | United States
Convolutional neural networks (CNNs) have risen to the forefront of solutions to image classification and computer vision tasks. However, applying theoretically grounded uncertainty quantification to a CNN prediction remains an underdeveloped research area. The majority of UQ methods applied to deep learning focus on training ensembles of networks with similar architectures to estimate uncertainty. In this work we demonstrate a novel approach to UQ in CNN prediction by re-phrasing the question being asked so that it is more amenable to incorporating uncertainty in the training data. We present work on classifying diabetic retinopathy images into categories of severity. By treating this as a regression problem, we show that uncertainty in labeling can be explicitly trained into the network structure. We then demonstrate that, through careful selection of architectures, we are able to ensure the networks attend to features in the data that align with "expert knowledge". Next, we apply a similarity measure to the images during pre-screening of the imagery data. Once the data set is organized by this measure, we show the prediction error on a test set as a function of the test set's dissimilarity with the training ensemble. This highlights the features causing prediction error, which can be used to assess uncertainty in a prediction a priori.
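To make the regression re-phrasing concrete, the following is a minimal sketch (in PyTorch) of the general idea of treating ordinal severity grading as regression with per-label uncertainty folded into the training loss. It is not the authors' network: the tiny architecture, the random stand-in images, and the per-label sigma values are all illustrative assumptions.

# Minimal sketch: severity grading as regression with label uncertainty.
# Architecture, data, and sigma values are illustrative assumptions only.
import torch
import torch.nn as nn

class SeverityRegressor(nn.Module):
    """Small CNN mapping an image to a scalar severity score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

def label_uncertainty_loss(pred, label, label_sigma):
    """Gaussian negative log-likelihood: labels with larger sigma
    (less certain grades) contribute less, so labeling uncertainty
    is trained directly into the network."""
    return (((pred - label) ** 2) / (2 * label_sigma ** 2)
            + torch.log(label_sigma)).mean()

# Illustrative usage with random tensors standing in for graded images.
model = SeverityRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)
labels = torch.tensor([0., 1., 2., 2., 3., 4., 1., 0.])        # severity grades
sigmas = torch.tensor([0.3, 0.5, 0.5, 1.0, 0.5, 0.3, 1.0, 0.3])  # grader uncertainty
opt.zero_grad()
loss = label_uncertainty_loss(model(images), labels, sigmas)
loss.backward()
opt.step()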
11:00
- CANCELED - Uncertainty Quantification is not Enough to Reach Confident Decisions
Francois Hemez | Lawrence Livermore National Laboratory | United States
Author:
Francois Hemez | Lawrence Livermore National Laboratory | United States
High-consequence decisions, such as deciding how to respond to a hurricane, anticipating the consequences of an epidemic, or responding to a terrorist attack, are increasingly informed by predictions from numerical simulations. These problems often involve the analysis of multi-physics and multi-scale models that require tremendous computational resources. The interpretation of results is increasingly augmented by machine learning algorithms. While they help to streamline an analysis, these powerful resources nevertheless tend to increase, rather than decrease, the sources of uncertainty. This renders the need for uncertainty quantification even more acute.
Or does it? We contend that what makes decision-making challenging is how uncertainty is managed and communicated. Three broad classes of uncertainty in science-based simulations are highlighted: variability and randomness, numerical uncertainty, and model-form uncertainty. Each class is different, potentially requiring different strategies for quantification. The principle for managing them, however, is universal and overarching: how do we reach confident decisions? What breeds confidence is the ability to establish how various outcomes might be impacted by potentially erroneous modeling assumptions. The discussion is illustrated with examples in computational physics (e.g., dispersion of dangerous materials in urban environments), engineering mechanics (e.g., design of medical devices), and financial modeling (e.g., forecasting of stock performance).
11:30
A Bayesian formulation for spatially varying multi-regularization image deblurring problems
Jessica Pillow | The University of Arizona | United States
Authors:
Jessica Pillow | The University of Arizona | United States
Matthias Morzfeld | The University of Arizona | United States
Jesse Adams | Nevada National Security Site | United States
Marylesa Howard | Nevada National Security Site | United States
In the presence of noise, image deblurring is an ill-posed inverse problem in which regularization is required to obtain useful reconstructions. Choosing the appropriate strength of the regularization, however, is difficult. Moreover, many images contain sharp edges as well as smooth features, which requires multi-regularization, i.e., allowing the type of regularization (total variation or Tikhonov) to vary across the image. We address these two issues by formulating the image deblurring problem within a hierarchical Bayesian framework, varying both the strength and the type of regularization across the image. In this way, the image and the spatially varying regularization strength are described by a hierarchical posterior distribution, which we sample by Markov chain Monte Carlo (MCMC), in particular Gibbs samplers that exploit conditional distributions for efficient sampling. We compute the means of the image and parameter samples, and we analyze the samples' credibility intervals to assess uncertainty.
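For readers unfamiliar with hierarchical Gibbs sampling for deblurring, the following is a minimal 1-D sketch of the general approach, not the authors' method: a single, global Tikhonov regularization strength with a conjugate Gamma hyperprior is sampled alongside the image (the talk's spatially varying, multi-regularization formulation replaces this global parameter with per-pixel parameters and regularizer types). The Gaussian blur matrix, identity prior matrix, and hyperparameter values are assumptions made for illustration.

# Minimal 1-D hierarchical Bayesian deblurring sketch with a Gibbs sampler.
# Noise precision `lam` and regularization strength `delta` get Gamma
# hyperpriors and are sampled alongside the signal x.
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Forward model: Gaussian blur matrix A plus noise (illustrative).
grid = np.arange(n)
A = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n); x_true[30:60] = 1.0           # piecewise-constant signal
y = A @ x_true + 0.01 * rng.standard_normal(n)      # blurred, noisy data

L = np.eye(n)                                       # Tikhonov prior precision
alpha, beta = 1.0, 1e-4                             # weak Gamma hyperpriors
lam, delta = 1.0, 1.0
samples = []

for _ in range(2000):
    # x | lam, delta  ~  N(mu, P^{-1}),  with  P = lam A'A + delta L
    P = lam * A.T @ A + delta * L
    c = np.linalg.cholesky(P)
    mu = np.linalg.solve(P, lam * A.T @ y)
    x = mu + np.linalg.solve(c.T, rng.standard_normal(n))
    # lam | x  and  delta | x are conjugate Gamma updates
    lam = rng.gamma(alpha + n / 2, 1.0 / (beta + 0.5 * np.sum((A @ x - y) ** 2)))
    delta = rng.gamma(alpha + n / 2, 1.0 / (beta + 0.5 * x @ L @ x))
    samples.append(x)

x_mean = np.mean(samples[500:], axis=0)                       # posterior mean
lo, hi = np.percentile(samples[500:], [2.5, 97.5], axis=0)    # credibility band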
12:00
- CANCELED - Quantifying uncertainty from multiple sources in calibrated models of sparse inertial confinement fusion experiments
Jim Gaffney | Lawrence Livermore National Laboratory | United States
Authors:
Jim Gaffney | Lawrence Livermore National Laboratory | United States
Kelli Humbird | Lawrence Livermore National Laboratory | United States
Brian Spears | Lawrence Livermore National Laboratory | United States
High-throughput simulation studies, combined with experimental data, are enabling the development of statistically calibrated, data-driven models for the performance of indirect-drive inertial confinement fusion (ICF) implosions. These models match a diverse set of experimental observables over a series of experiments and provide predictions, with uncertainties, over a wide range of experimental design parameters. We will describe our statistical model and assess the factors driving prediction uncertainty for current experiments. The balance among uncertainty sources, such as shot-to-shot variation, incomplete experimental constraints on physics parameters, and interpolation to new experimental parameters, suggests new experimental and simulation studies that would reduce prediction uncertainty; we will present some of these and discuss their implications for ICF implosion studies.
LLNL-ABS-780203. Prepared by LLNL under Contract DE-AC52-07NA27344
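As a schematic illustration of this kind of calibrated, data-driven prediction (not the authors' statistical model), the sketch below fits a Gaussian-process surrogate to a handful of made-up "shots" using scikit-learn, with a white-noise kernel standing in for shot-to-shot variation, so that predictions at new design parameters come with uncertainties that grow as one interpolates away from the data. The kernel choice, data, and parameter values are assumptions made for illustration.

# Schematic sketch: GP surrogate over sparse experiments with predictive
# uncertainty; all data and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Hypothetical data: one design parameter vs. one observable, from 6 "shots".
design = np.array([[0.1], [0.25], [0.4], [0.55], [0.7], [0.9]])
observable = np.sin(3 * design).ravel() + 0.05 * rng.standard_normal(6)

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(design, observable)

# Predictions with uncertainty across the design space; the standard
# deviation reflects both shot-to-shot noise and interpolation distance.
new_designs = np.linspace(0.0, 1.0, 50)[:, None]
mean, std = gp.predict(new_designs, return_std=True)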