[ Moved from MW HS 2235 ]
History matching is a form of inverse modelling, or calibration/tuning, of the inputs of a complex numerical model given observations of its outputs. It differs markedly from Bayesian calibration: the result is not a posterior distribution over the model inputs but the set of input points that are not implausible given the data, so the method is not probabilistic. The idea is simple. A series of waves of model runs is carried out. At each wave, the scaled distance (the implausibility measure) between the observations and the expected value of an emulator of the model (either a Gaussian process or a second-order, Bayes linear, emulator) is calculated for all inputs. If this distance is too large, the input is ruled implausible. The scaling combines three components: the emulator variance (known but input dependent), the observation variance (so poor observations are downweighted relative to more accurate ones), and a variance measuring the discrepancy between the model and the real world. After the first wave, a new wave of model runs is carried out in the Not Ruled Out Yet (NROY) space, a new emulator is built, and new implausibilities are calculated. At each wave the emulator becomes a better fit to the model, so the NROY space is progressively reduced. This mini-symposium is concerned with the application and extension of history matching in a variety of fields.
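As a concrete illustration of the implausibility measure described above, the following minimal Python sketch computes the scaled distance and applies the conventional three-sigma cut-off. All values and names are illustrative only, not taken from any of the talks below.

import numpy as np

def implausibility(emulator_mean, emulator_var, obs, obs_var, disc_var):
    # Scaled distance between the observation and the emulator mean.
    # The denominator combines the three variance components named above:
    # emulator variance (input dependent), observation variance, and
    # model discrepancy variance.
    return np.abs(obs - emulator_mean) / np.sqrt(emulator_var + obs_var + disc_var)

# Illustrative values only.
m = np.array([1.8, 2.4, 3.9])     # emulator means at three candidate inputs
v = np.array([0.04, 0.10, 0.02])  # emulator variances at those inputs
z, v_obs, v_disc = 2.0, 0.05, 0.03

I = implausibility(m, v, z, v_obs, v_disc)
nroy = I < 3.0  # inputs with I >= 3 are ruled implausible; the rest stay in NROY
print(I, nroy)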
14:00
Design of Physical Experiments for History Matching
Samuel Jackson | University of Southampton | United Kingdom
Author:
Samuel Jackson | University of Southampton | United Kingdom
History matching aims to find the set of all non-implausible inputs to a computer model, that is, those which are not inconsistent with observed data, given all the sources of uncertainty associated with the model and the measurements. The progress of a history match is often measured in terms of the proportion of the initial input space classed as implausible, although other criteria can be used, such as the reduction in the variance of scientifically important parameters. Analysis of such quantitative features of the non-implausible set is informative for answering current questions about the real-world quantities associated with the input parameters and the links between them. We therefore quantify the expected information gain resulting from performing possible future physical experiments in terms of history matching criteria related to scientific questions of interest, thus allowing the most relevant and informative experiments to be performed. We demonstrate our techniques on an important systems biology model of hormonal crosstalk in the roots of an Arabidopsis plant.
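A minimal sketch of how such an expected-information calculation might look, assuming a single scalar output, a fitted emulator summarised by its mean and variance at a grid of candidate inputs, and a simple pre-posterior Monte Carlo over plausible future observations. All function and variable names here are hypothetical, not the author's code.

import numpy as np

rng = np.random.default_rng(0)

def expected_nroy_proportion(emulator_mean, emulator_var, obs_var, disc_var,
                             current_nroy, n_obs_samples=200):
    # Monte Carlo estimate of the expected NROY proportion after a
    # candidate future experiment, averaging over observations drawn
    # from the current predictive distribution on not-ruled-out inputs.
    total_var = emulator_var + obs_var + disc_var
    idx = rng.choice(np.flatnonzero(current_nroy), size=n_obs_samples)
    z_samples = rng.normal(emulator_mean[idx], np.sqrt(total_var[idx]))
    proportions = []
    for z in z_samples:
        I = np.abs(z - emulator_mean) / np.sqrt(total_var)
        proportions.append(np.mean(current_nroy & (I < 3.0)))
    return float(np.mean(proportions))  # smaller = more informative experiment

# Illustrative call with stand-in arrays for one candidate experiment:
m = rng.normal(size=1000)
v = np.full(1000, 0.05)
nroy0 = np.abs(m) < 1.0  # pretend current NROY mask
print(expected_nroy_proportion(m, v, obs_var=0.02, disc_var=0.01, current_nroy=nroy0))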
14:30
Enhancing Uncertainty Quantification Using Machine Learning Approaches
Wenzhe Xu | University of Exeter | United Kingdom
Authors:
Wenzhe Xu | University of Exeter | United Kingdom
Danny Williamson | University of Exeter | United Kingdom
Uncertainty about model parameters is one of the main sources of uncertainty in complex computer models. Parameter calibration is the process of finding values of the model parameters that allow the model to best represent key observations, so that we can trust its predictions. When calibrating computer models with large spatial output, the model must be able to reproduce the key features in the observations, even if they do not occur in exactly the same place in reality. Existing statistical calibration methods lack this capability: they are only good at finding stronger or weaker signals at fixed locations.
Our idea is based on pre-image reconstruction in kernel PCA. We calibrate in the feature space: when the simulator output matches the observation, the expectation of the squared distance between the mapped observation and the reconstructed simulator output in feature space should be zero. When an emulator is involved, this squared distance has a distribution, and it is analogous to the implausibility function used to rule out space in history matching. We use this distance to perform calibration with the kernel directly, so that we can search the model output space for key features. We apply our method to tuning the time evolution of vertical clouds in the CNRM-ARPEGE climate model.
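A minimal sketch of the kernel-trick distance underlying this idea, assuming a simple RBF kernel on flattened spatial fields. The names are illustrative and this is not the authors' implementation.

import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # RBF kernel between two flattened spatial fields.
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * lengthscale ** 2))

def feature_space_sq_distance(y_obs, y_sim, kernel=rbf_kernel):
    # ||phi(y_obs) - phi(y_sim)||^2 computed via the kernel trick:
    # k(a, a) + k(b, b) - 2 k(a, b), without forming the feature map.
    return kernel(y_obs, y_obs) + kernel(y_sim, y_sim) - 2.0 * kernel(y_obs, y_sim)

rng = np.random.default_rng(1)
y_obs = rng.normal(size=100)  # stand-in for an observed spatial field
y_sim = rng.normal(size=100)  # stand-in for one simulator (or emulator) output
d2 = feature_space_sq_distance(y_obs, y_sim)
# With an emulator, sampling many y_sim draws turns d2 into a distribution,
# which plays the role of the implausibility function described above.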
15:00
Future Proofing a Building Using History Matching Inspired Level Set Techniques
Evan Baker | University of Exeter | United Kingdom
Authors:
Evan Baker | University of Exeter | United Kingdom
Peter Challenor | University of Exeter | United Kingdom
Matt Eames | University of Exeter | United Kingdom
History Matching is a technique used to calibrate complex computer models. It does this by iteratively eliminating regions of input space where observed data is clearly at odds with simulated output. Key to this technique is the construction of emulators, which provide probabilistic predictions of future simulations, avoiding excessive runs of the computationally expensive computer model.
In this work, we adapt the History Matching framework to tackle the problem of level set estimation; that is, finding the region of input space where the output is below (or above) some threshold.
The developed methodology is heavily motivated by a specific case study: how can one design a building that will be sufficiently protected against overheating and sufficiently energy efficient, whilst considering the expected increases in temperature due to climate change?
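A minimal sketch of the history-matching-style classification this suggests, assuming a Gaussian predictive distribution from an emulator at each candidate input. The names and the 99% cut-off are illustrative, not the authors' exact choices.

import numpy as np
from scipy.stats import norm

def classify_level_set(pred_mean, pred_sd, threshold, alpha=0.99):
    # Compare each input to a threshold T rather than to an observation:
    # keep points almost certainly below T, rule out points almost
    # certainly above it, and leave the rest undetermined for the next
    # wave of simulator runs.
    p_below = norm.cdf(threshold, loc=pred_mean, scale=pred_sd)
    below = p_below >= alpha          # e.g. safely below an overheating limit
    above = p_below <= 1.0 - alpha    # ruled out
    undetermined = ~(below | above)   # resolve with further runs
    return below, above, undetermined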
15:30
History Matching a Land Surface Simulator
Doug McNeall | Met Office and University of Exeter | United Kingdom
Authors:
Doug McNeall | Met Office and University of Exeter | United Kingdom
Andy Wiltshire | Met Office | United Kingdom
Richard Betts | Met Office and University of Exeter | United Kingdom
The land surface is a critical component of the climate system. Accurate simulations will help planners develop strategies for mitigation of, and adaptation to, climate change on the land surface, where the majority of its human impacts will be felt. The community-developed land surface model JULES contains a number of parameters whose values are unknown but which materially affect the simulation of the land surface. We use history matching to find good values of these parameters and so explore plausible scenarios for the future of the Earth’s land surface. We use a large perturbed-parameter ensemble of JULES, forced by historical climate observations, to train a set of Gaussian process emulators of important land surface variables. We exclude any parameter set where the historical land surface simulation does not match observations, even after allowing for uncertainties in the observations and in the model. We offer decision makers a set of land surface futures, conditional on anthropogenic emission and land surface change scenarios and constrained by observations of the past. Our analysis shows observational scientists the potential value of an observation for constraining the future of the land surface, helping them target their observational campaigns. We offer model developers detailed information about important differences between JULES and the true system, helping them prioritise targets for model development and minimise model errors.
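When history matching against several observed variables at once, as here, a common choice is to rule a parameter set out if it is implausible for any output. A minimal sketch under that assumption (illustrative only, not the exact JULES workflow):

import numpy as np

def max_implausibility(emulator_means, emulator_vars, obs, obs_vars, disc_vars):
    # emulator arrays have shape (n_inputs, n_outputs); the observation
    # arrays have shape (n_outputs,). A parameter set must be plausible
    # for every land surface variable to stay in NROY.
    I = np.abs(obs - emulator_means) / np.sqrt(emulator_vars + obs_vars + disc_vars)
    # Parameter sets with maximum implausibility >= 3 are excluded.
    return I.max(axis=1)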