The United States Department of Energy (DOE) Laboratory System grew out of the federally funded scientific developments of World War II. Today, the national laboratories comprise one of the world’s largest scientific research systems. Tackling areas such as environmental modeling, precision medicine, and global security, the DOE laboratories are at the forefront of scientific innovation and thus have access to unique research problems, data sets, and facilities. This minisymposium will showcase the many applications and innovations in UQ stemming from the challenges of the national lab environment.
LLNL-ABS-791303. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
08:30
Data Compositing: Aligning Signals from Asynchronous Sources
Kathleen Schmidt | Lawrence Livermore National Laboratory | United States
Authors:
Kathleen Schmidt | Lawrence Livermore National Laboratory | United States
Jason Bernstein | Lawrence Livermore National Laboratory | United States
Ana Kupresanin | Lawrence Livermore National Laboratory | United States
Unique problems often arise for data scientists in the national laboratory system. In this talk, we will briefly introduce the diverse problems addressed by the speakers in this session, who hail from five different laboratories, and highlight a problem from Lawrence Livermore National Laboratory: data compositing. For supraexponential signals collected on small timescales, it is generally practical to record the signal using a series of detectors; however, because each detector records the signal on its own time scale rather than a global one, realigning, or compositing, the data in time is a challenging task. We present an optimization algorithm that performs compositing in the presence of both noisy and missing data.
LLNL-ABS-791192. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
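To make the compositing idea concrete, here is a minimal Python sketch of one way such an alignment could be posed: two detectors record overlapping pieces of a fast-growing signal on their own clocks, and the unknown clock offset is estimated by minimizing the squared mismatch on the overlap while masking missing (NaN) samples. The test signal, noise levels, and loss are illustrative assumptions, not the algorithm presented in the talk.

```python
import numpy as np
from scipy.optimize import minimize

def composite_loss(shift, t1, y1, t2, y2):
    """Squared misfit between detector 1 and a time-shifted detector 2."""
    # Interpolate detector 2 onto detector 1's time grid after shifting.
    y2_on_t1 = np.interp(t1, t2 + shift, y2, left=np.nan, right=np.nan)
    mask = ~np.isnan(y1) & ~np.isnan(y2_on_t1)   # overlap with data present
    if mask.sum() == 0:
        return np.inf
    return np.mean((y1[mask] - y2_on_t1[mask]) ** 2)

# Synthetic rapidly growing signal recorded by two asynchronous detectors.
rng = np.random.default_rng(0)
t_true = np.linspace(0.0, 1.0, 500)
signal = np.exp(5 * t_true**2)                   # fast-growing test signal
t1, y1 = t_true[:300], signal[:300] + 0.05 * rng.standard_normal(300)
true_shift = 0.37
t2 = t_true[250:] - true_shift                   # detector 2's local clock
y2 = signal[250:] + 0.05 * rng.standard_normal(250)
y2[rng.random(250) < 0.1] = np.nan               # 10% missing samples

res = minimize(composite_loss, x0=0.0, args=(t1, y1, t2, y2),
               method="Nelder-Mead")
print(f"estimated shift: {res.x[0]:.3f} (true: {true_shift})")
```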
09:00
A Gibbs Blocking Scheme for Large-scale Deconvolution
Jesse Adams | Mission Support and Test Services | United States
Authors:
Aaron Luttman | Pacific Northwest National Laboratory | United States
Jesse Adams | Mission Support and Test Services | United States
Matthias Morzfeld | University of Arizona | United States
Marylesa Howard | Mission Support and Test Services | United States
Among the most significant challenges in using Markov chain Monte Carlo (MCMC) methods to sample from the posterior distributions of Bayesian inverse problems is how quickly the sampling becomes computationally intractable as the number of estimated parameters grows. In image deblurring, for example, many MCMC algorithms appear in the literature, but few attempt reconstructions for images larger than 512 x 512 pixels. In quantitative X-ray radiography, the images tend to be much larger, and routine Bayesian models require estimating millions of parameters. Here we present a Gibbs sampler constructed via a blocking scheme that leads to a sparse and highly structured posterior precision matrix. Exploiting the matrix structure makes the sampler "dimension-robust," i.e., its mixing properties are nearly independent of the image size, which enables efficient sampling even on modest computational platforms. In this work we present the Gibbs sampling scheme and deconvolution results on X-ray radiographs from the US Department of Energy.
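As a rough illustration of why blocking against a sparse, structured precision matrix helps, the following Python sketch runs a two-block (odd/even) Gibbs sampler on a Gaussian target with a tridiagonal precision: each block's conditional factors into independent scalars, so a full sweep is a pair of vectorized updates whatever the dimension. The tridiagonal Q and all constants are toy assumptions; the actual blocking scheme for 2-D radiographs in this work is more involved.

```python
import numpy as np

# Toy blocked Gibbs sampler for a Gaussian posterior N(Q^{-1} b, Q^{-1})
# with a sparse, structured precision Q. Here Q is tridiagonal, so the
# odd- and even-indexed components form two blocks whose conditionals
# factor into independent scalars.
rng = np.random.default_rng(1)
n = 10_000
diag = 4.0 * np.ones(n)          # Q main diagonal
off = -1.0 * np.ones(n - 1)      # Q off-diagonals: off[k] = Q[k, k+1]
b = rng.standard_normal(n)       # Q @ posterior_mean = b

x = np.zeros(n)
for sweep in range(200):
    for start in (0, 1):                         # even block, then odd block
        idx = np.arange(start, n, 2)
        # Conditional mean: (b_i - sum_{j != i} Q_ij x_j) / Q_ii
        neighbor_sum = np.zeros(idx.size)
        left, right = idx - 1, idx + 1
        valid_l, valid_r = left >= 0, right < n
        neighbor_sum[valid_l] += off[left[valid_l]] * x[left[valid_l]]
        neighbor_sum[valid_r] += off[idx[valid_r]] * x[right[valid_r]]
        cond_mean = (b[idx] - neighbor_sum) / diag[idx]
        x[idx] = cond_mean + rng.standard_normal(idx.size) / np.sqrt(diag[idx])

# x is now (approximately) a draw from N(Q^{-1} b, Q^{-1}); per-sweep mixing
# does not degrade as n grows, which is the sense of "dimension-robust".
print(x[:5])
```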
09:30
Quantifying Uncertainty on Mars
Kary Myers | Los Alamos National Laboratory | United States
Author:
Kary Myers | Los Alamos National Laboratory | United States
The Mars rover Curiosity was designed to study whether Mars "ever [had] the right environmental conditions to support small life forms." As part of the mission, Curiosity carries an instrument called ChemCam, developed at Los Alamos National Laboratory with L'Institut de Recherche en Astrophysique et Planétologie, to determine the composition of the soil and rocks. ChemCam uses laser-induced breakdown spectroscopy (LIBS) for this task. In LIBS, a laser is fired at a sample to produce a high-temperature plasma. As the plasma cools, it emits a spectrum of light over a range of wavelengths that is recorded by a CCD camera. To use these measured spectra to estimate the chemical compositions of interest and compute uncertainties, we are leveraging ATOMIC, a Los Alamos physics simulation code for plasmas. We will describe our use of both computer model calibration and emulation to combine simulated and measured spectra and arrive at an estimate of the composition of soil and rocks on Mars.
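For readers unfamiliar with the calibration-plus-emulation workflow, the Python sketch below shows the general pattern: a cheap stand-in plays the role of the expensive plasma code, a Gaussian process emulator is trained on a small simulation design, and a posterior over a composition parameter is computed on a grid against a noisy "measured" spectrum. The toy simulator, kernel, and noise level are hypothetical placeholders, not the actual ATOMIC-based analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
wavelengths = np.linspace(400.0, 700.0, 50)      # nm, toy spectral grid

def toy_simulator(theta):
    """Stand-in for an expensive plasma code: a spectrum as a function of
    a scalar composition parameter theta."""
    return np.exp(-((wavelengths - 500.0 - 100.0 * theta) / 30.0) ** 2)

# Emulate: train a multi-output GP on a small simulation design.
thetas_train = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
spectra_train = np.array([toy_simulator(t[0]) for t in thetas_train])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(thetas_train, spectra_train)

# "Measured" spectrum at an unknown composition, with noise.
theta_true = 0.63
measured = toy_simulator(theta_true) + 0.01 * rng.standard_normal(50)

# Calibrate: posterior over theta on a grid under a Gaussian likelihood.
grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
emulated = gp.predict(grid)
log_like = -0.5 * np.sum((emulated - measured) ** 2, axis=1) / 0.01**2
post = np.exp(log_like - log_like.max())
post /= post.sum()
print(f"posterior mean composition: {float(grid.ravel() @ post):.3f} "
      f"(true: {theta_true})")
```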
10:00
Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors
Jayaraman J. Thiagarajan | Lawrence Livermore National Laboratory | United States
Authors:
Jayaraman J. Thiagarajan | Lawrence Livermore National Laboratory | United States
Bindya Venkatesh | Arizona State University | United States
Prasanna Sattigeri | IBM Research AI | United States
Peer-Timo Bremer | Lawrence Livermore National Laboratory | United States
With the rapid adoption of deep learning in high-regret applications, the question of when and how much to trust these models often arises, driving the need to quantify their inherent uncertainties. While identifying all sources of stochasticity in learned models is challenging, it is common to augment predictions with confidence intervals that convey the expected variation in a model's behavior. In general, we require confidence intervals to be well calibrated, to reflect the true uncertainties, and to be sharp. However, most existing techniques for obtaining confidence intervals are known to produce unsatisfactory results with respect to at least one of these criteria. In this work, we conjecture that one can reliably build calibrated deep models by posing calibration as an auxiliary task and utilizing a novel uncertainty matching strategy. With applications in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods in deep learning.
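A heavily simplified Python (PyTorch) sketch of the auxiliary-task idea follows: a main network predicts both a mean and an interval half-width, an auxiliary network independently predicts a half-width, and the training loss combines a fit term, a hinge-style coverage penalty, a sharpness term, and a penalty matching the two width estimates. The architectures, loss weights, and synthetic data are assumptions for illustration and do not reproduce the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MeanIntervalNet(nn.Module):
    """Main model: predicts a mean and an interval half-width."""
    def __init__(self, d_in):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)
        self.width_head = nn.Linear(64, 1)       # predicts log half-width

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), torch.exp(self.width_head(h))

d_in = 8
main = MeanIntervalNet(d_in)
aux = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(main.parameters()) + list(aux.parameters()),
                       lr=1e-3)

x = torch.randn(256, d_in)
y = x[:, :1] + 0.1 * torch.randn(256, 1)         # synthetic regression data

for step in range(500):
    mu, w = main(x)
    w_aux = torch.exp(aux(x))                    # auxiliary interval width
    resid = (y - mu).abs()
    fit = (resid ** 2).mean()                    # accuracy of the mean
    coverage = torch.relu(resid - w).mean()      # penalize missed coverage
    sharpness = w.mean()                         # keep intervals tight
    matching = ((w - w_aux) ** 2).mean()         # match the two estimates
    loss = fit + coverage + 0.1 * sharpness + matching
    opt.zero_grad()
    loss.backward()
    opt.step()
```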