This minisymposium is devoted to recent developments in methodologies, applications, and lessons learned in estimating physical parameters in complex physical systems. Computer models (or simulators) of real-world processes in science, engineering, medicine, and business typically require a set of inputs (some known and specified, others unknown) to generate predictions for the physical processes of interest. Physical observations, combined with simulator output, allow us to infer both the unknown inputs and the physical process itself.
Inference about the physical process in the presence of high-volume output and model uncertainty is challenging, since appropriate uncertainty assessment is key to understanding the physical process of interest. In the calibration context, the discrepancy between reality and the simulator is difficult to model. In the inverse problem setting, the high-dimensional input space can make Bayesian inversion computationally challenging.
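The discrepancy mentioned above is usually framed via the standard statistical calibration decomposition (due to Kennedy and O'Hagan), in which a field observation is the simulator at its best calibration input, plus a systematic model-reality discrepancy, plus observation error. A minimal synthetic sketch of that decomposition follows; all function names and numerical values here are illustrative, not drawn from any speaker's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x, theta):
    # Toy stand-in for an expensive computer model with input x
    # and calibration parameter theta.
    return theta * np.exp(-x)

def discrepancy(x):
    # Systematic model-reality mismatch. In practice this is unknown
    # and must itself be modelled (e.g. as a Gaussian process), which
    # is precisely what makes calibration difficult.
    return 0.05 * x

def observe(x, theta_true=2.0, noise_sd=0.01):
    """Generate a synthetic field observation following
    z(x) = f(x, theta) + delta(x) + epsilon."""
    return simulator(x, theta_true) + discrepancy(x) + rng.normal(0.0, noise_sd)
```

The calibration problem is then to recover `theta_true` from observations `z(x)` while jointly accounting for `discrepancy`, whose confounding with `theta` is the core difficulty the session addresses.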
Bringing together selected leading researchers, this minisymposium is divided into two sessions: calibration (Part I) and inverse problems (Part II). It includes speakers from Europe and North America who are diverse in experience level, from recent PhD graduates to mid-career researchers, with backgrounds in statistics, applied mathematics, and engineering. We hope this minisymposium will serve as a nexus for exchanging ideas to address these uncertainty quantification (UQ) problems.
14:00
History Matching for Physical and Biological Systems
Peter Challenor | University of Exeter | United Kingdom
History matching is a method for learning about numerical model inputs from observations. Unlike Bayesian methods, history matching does not attempt to find the posterior of the model inputs. Rather, we exclude sets of inputs that are implausible given the observations. This is done by setting up an implausibility measure: the scaled distance between the observations and the expected value of an emulator. The scaling term comprises three variances. The first is the variance of the emulator at this setting of the model inputs. We know this, and with each successive wave of the history match we concentrate new model runs in the Not Ruled Out Yet (NROY) space, building a new emulator with each wave. At each wave the emulator has a reduced variance, resulting in a smaller NROY space. The second term is the discrepancy: the distance between the model at its ‘best’ set of inputs and reality. The final term is the data variance. For a physical system it is reasonable to use the measurement error, but for a biological system this is not the case: this term also includes the variability between cases. We can exploit this by looking at hierarchical error structures; the variability in the population, for example, can be split into within- and between-gender components. We discuss these issues and whether biological models should have point estimates at all.
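The implausibility measure described in the abstract can be sketched in a few lines. The function and argument names below are illustrative only (they do not come from any particular history-matching package), and the cutoff of 3 is a common convention, motivated by Pukelsheim's three-sigma rule, rather than a prescription from the talk:

```python
import numpy as np

def implausibility(x, z, emulator, var_disc, var_obs):
    """Implausibility of input x given observation z.

    emulator(x) is assumed to return (mean, variance) of the emulator
    prediction at x; var_disc and var_obs are the model-discrepancy
    and data variances described in the abstract.
    """
    mean, var_em = emulator(x)
    return abs(z - mean) / np.sqrt(var_em + var_disc + var_obs)

def in_nroy(x, z, emulator, var_disc, var_obs, cutoff=3.0):
    """Keep x in the Not Ruled Out Yet (NROY) space when its
    implausibility falls below the cutoff."""
    return implausibility(x, z, emulator, var_disc, var_obs) < cutoff
```

Each wave of a history match would evaluate `in_nroy` over candidate inputs, run the simulator only in the surviving region, and refit the emulator there, shrinking `var_em` and hence the NROY space.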
14:30
- CANCELED - Multilevel Emulation and History Matching of EAGLE: an expensive stochastic hydrodynamical Galaxy formation simulation
Ian Vernon | Durham University | United Kingdom
EAGLE is one of the most complex models of galaxy formation yet run, with typical runtimes of order 10 million CPU hours. EAGLE has faster but physically different versions available, each of which possesses increasing levels of stochasticity. We describe how to construct multilevel Bayes linear emulators that contain substantial physical structure and can be used to mimic the four levels of the stochastic EAGLE model. The physical structure facilitates the incorporation of meaningful scientific judgements regarding the physical differences between the four EAGLE levels, which is essential for successful emulation. This multilevel emulation structure is then embedded within a history matching analysis to solve the inverse problem. History matching is an appropriate and efficient approach to finding the set of all input parameters that lead to acceptable matches between model output and observed data. It proceeds by iteratively discarding regions of the input space based on carefully chosen subsets of the outputs and observed data, and it incorporates all major sources of uncertainty, including that derived from the difference between the computer model and reality itself.
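To illustrate the general idea of linking cheap and expensive model levels (not the speaker's Bayes linear construction, which is richer), a common simplification relates adjacent fidelity levels by a scaling plus a smooth systematic difference, f_hi(x) ≈ rho·f_lo(x) + delta(x), fitted from a small set of runs at both levels. The sketch below uses toy stand-in functions for the two levels and a least-squares fit; all names and forms are assumptions for illustration:

```python
import numpy as np

def cheap_level(x):
    # Stand-in for a fast, physically simplified model variant.
    return np.sin(x)

def expensive_level(x):
    # Stand-in for the full model: correlated with the cheap level
    # plus a smooth systematic difference.
    return 1.2 * np.sin(x) + 0.1 * x

# A handful of designs where BOTH levels were run.
x_train = np.linspace(0.0, 3.0, 8)
lo = cheap_level(x_train)
hi = expensive_level(x_train)

# Least-squares fit of f_hi(x) ~ rho * f_lo(x) + d0 + d1 * x,
# i.e. a scaling plus a linear trend for the between-level difference.
A = np.column_stack([lo, np.ones_like(x_train), x_train])
(rho, d0, d1), *_ = np.linalg.lstsq(A, hi, rcond=None)

def predict_hi(x):
    """Predict the expensive level from the cheap level plus the
    fitted systematic difference."""
    return rho * cheap_level(x) + d0 + d1 * x
```

In a full multilevel emulation, `delta` would itself be an emulator with quantified uncertainty, and the chain would extend across all four EAGLE levels, but the between-level structure is the same in spirit.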
15:00
Bayesian emulation and calibration of a stochastic computer model of geotechnical asset deterioration
Darren Wilkinson | Newcastle University | United Kingdom
Geotechnical assets such as the cuttings and embankments that form an integral part of rail and road networks deteriorate slowly over a period of many decades. Engineers have developed complex computer codes to model this process, which is driven by seasonal weather patterns. Using stochastic weather generators based on forecasted climate leads to a stochastic simulator containing uncertain parameters which need to be calibrated against available data. This talk will discuss the practical issues that arise in the context of Bayesian emulation and calibration of such models.
15:30
Multifidelity calibration and discrepancy analysis of density functional theory models
Michael Grosskopf | Los Alamos National Laboratory | United States
Nuclear Energy Density Functional (EDF) theory provides a tool for modeling the properties of nuclei directly from quantum energy potentials, offering insight into applications ranging from stellar phenomena to nuclear energy. The models are able to simulate the properties of elements across the nuclide table, drive the search for properties of unobserved isotopes, and aid understanding of reaction processes using the underlying quantum principles. In this work, we use multiple versions of the UNEDF model to compare parameter estimates with uncertainty and to assess the ability of the models to capture observed values. Additionally, the models can be regarded as multiple levels of fidelity and thus combined into one larger calibration model with parameters estimated jointly. We also present analysis of a structured discrepancy model, with the goal of improving the ability to extrapolate to properties of nuclei beyond those observed and of furthering understanding of the ways in which the simulator can be improved.