[ Moved from MW HS 2235 ]
History matching is an approach to inverse modelling, or calibrating/tuning, the inputs of a complex numerical model given observations of the outputs. It is very different from Bayesian calibration: the result is not a posterior distribution on the model inputs but a set of input points that are not implausible given the data, and it is not probabilistic. The idea is simple. A series of waves of model runs is carried out. At each wave, the scaled distance (the implausibility measure) between the observations and the expected value of an emulator of the model (either a Gaussian process or a second-order process) is calculated for all inputs. If this distance is too large, the set of inputs is ruled implausible. The scaling consists of three components: the emulator variance (known but input dependent), the observation variance (so poor observations are downweighted compared with more accurate ones), and a variance that measures the discrepancy between the model and the real world. After the first wave, a new wave of model runs is carried out in the Not Ruled Out Yet (NROY) space. A new emulator is derived and new implausibilities are calculated. At each wave the emulator becomes a better fit to the model, so the NROY space is progressively reduced. This mini-symposium is concerned with the application and extension of history matching to a variety of applications.
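The implausibility measure described above can be sketched in a few lines. This is a minimal illustration for a scalar output, not any speaker's implementation; all the numbers below are made up for the example, and 3 is used as the common three-sigma cutoff.

```python
import numpy as np

def implausibility(z, em_mean, em_var, obs_var, disc_var):
    """Scaled distance between an observation z and the emulator mean at one input.

    The scaling combines the three variance components named in the text:
    emulator variance, observation variance, and model-discrepancy variance.
    """
    return np.abs(z - em_mean) / np.sqrt(em_var + obs_var + disc_var)

# Hypothetical values: one observation and the emulator output at one input point.
I = implausibility(z=1.2, em_mean=0.9, em_var=0.05, obs_var=0.02, disc_var=0.03)
not_ruled_out = I < 3.0  # inputs with I >= 3 would be ruled implausible
```

An input survives into the NROY space only if its implausibility falls below the cutoff; in a real study this is evaluated over a large design of candidate inputs at each wave.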
08:30
Efficient calibration for spatio-temporal models using basis methods
James Salter | University of Exeter | United Kingdom
Authors:
James Salter | University of Exeter | United Kingdom
Danny Williamson | University of Exeter | United Kingdom
Calibration of expensive computer models can be approached via history matching. Many such models (e.g., in climate, engineering) have high-dimensional spatial and/or temporal outputs, all of which we may wish to predict for unseen regions of parameter space, and compare to real-world observations, rather than only considering summaries of the output.
In this talk, we demonstrate how to history match high-dimensional output fields efficiently and effectively, comparing emulating every output individually with using a basis method, e.g., SVD or basis rotation. We show that the high-dimensional implausibility measure for the full output field can be calculated efficiently for any input setting, given a single inversion of a large variance matrix, emulators for basis coefficients, and an appropriate, consistent choice of projection norm. We show that basis methods offer computational savings at both the emulation and history matching stages, whilst also potentially improving the accuracy of the emulators and providing more physically-coherent predictions.
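The basis-projection step underlying such methods can be sketched as follows. This is a generic SVD illustration with stand-in random data, not the authors' method: the ensemble `F`, the observation field `z`, and the truncation rank are all hypothetical, and the variance-weighted projection norm and coefficient emulators discussed in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: n model runs, each producing an l-dimensional output field.
n, l, q = 20, 500, 3              # runs, field size, retained basis vectors
F = rng.normal(size=(n, l))       # stand-in for simulator output fields

# Centre the ensemble and take the SVD; rows of Vt are orthonormal basis vectors.
mean_field = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mean_field, full_matrices=False)
basis = Vt[:q]                    # (q, l) truncated basis

# Project a field (e.g. the observations) onto the basis and reconstruct it.
z = rng.normal(size=l)            # stand-in observation field
coeffs = basis @ (z - mean_field)
z_recon = mean_field + coeffs @ basis

# The reconstruction error indicates how well the truncated basis can
# represent the observations -- a prerequisite for matching in coefficient space.
recon_err = np.linalg.norm(z - z_recon) / np.linalg.norm(z)
```

History matching then proceeds on the low-dimensional coefficients `coeffs` rather than on all `l` outputs, which is where the computational savings arise.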
09:00
- CANCELED - History matching by implausibility emulation
Michael Goldstein | Durham University | United Kingdom
Authors:
Michael Goldstein | Durham University | United Kingdom
Ben Lopez | Durham University | United Kingdom
Camila Caiado | Durham University | United Kingdom
When history matching for complex output structures, it can be difficult to construct and exploit appropriate implausibility measures. We will discuss a general approach based on the assessment and emulation of implausibility thresholds. The method will be illustrated with a real-time patient monitoring example to support the diagnosis of atrial fibrillation in a cardiac intensive care unit.
09:30
Constructing personalised tissue parameter maps of human left atrium from clinical measurements
Samuel Coveney | University of Sheffield | United Kingdom
Author:
Samuel Coveney | University of Sheffield | United Kingdom
Models of electrical activity in heart tissue are becoming widely used research tools. To be deployed clinically, models need to represent the heart of an individual patient. We focus on personalised models of electrical activity in the left atrium, which have the potential to improve interventional therapies for atrial fibrillation, such as radio-frequency ablation, by predicting outcomes for variations of the therapy.
Model personalisation is based on electrophysiological measurements and imaging data, which must be collected in the clinic and then processed offline to obtain quantities that link to model simulations. Plausible model parameters can then be obtained that are consistent with observations in a specific patient. The data are typically noisy and sparse; to account for these uncertainties, as well as structural discrepancy in the model, a probabilistic approach is required and it is necessary to create a population of personalised models. Using real patient data consisting of left atrial geometry and voltage-time signals collected from inside the left atrium, we calculate probabilistic local activation time maps and conduction velocity maps over the atrium. Since electrical activation is linked to local tissue properties, we compare these maps to model simulations in order to obtain sets of parameters consistent with the observations from the patients. We create a cohort of parameter maps which can be used for personalised simulations of electrical activity.
10:00
Bayesian Optimal Design for iterative refocussing
Victoria Volodina | Alan Turing Institute | United Kingdom
Authors:
Victoria Volodina | Alan Turing Institute | United Kingdom
Danny Williamson | University of Exeter | United Kingdom
History matching is most effective when it is performed in waves, i.e. refocussing steps (Williamson et al., 2017). At each wave, a new ensemble is obtained within the current Not Ruled Out Yet (NROY) space, the emulator is updated, and the procedure of cutting down the input space is repeated.
Generating a design for each wave is a challenging problem due to the unusual shapes of NROY space. A number of approaches (Williamson and Vernon, 2013; Gong et al., 2016; Andrianakis et al., 2017) focus on obtaining a space-filling design over the NROY space. In this talk we present a new decision-theoretic method for the design problem in iterative refocussing. We employ Bayesian experimental design and specify a loss function that compares the volume of NROY space obtained with an updated emulator to the volume of the 'true' NROY space obtained using a 'perfect' emulator. The derived expected loss function contains three independent and interpretable terms. We compare the effect of the proposed Bayesian optimal design with that of space-filling design approaches on iterative refocussing performed in simulation studies.
We recognise that the adopted Bayesian experimental design involves an expensive optimisation problem. Our proposed criterion could also be used to investigate and rank a range of candidate designs for iterative refocussing. In this talk we demonstrate the mathematical justification provided by our Bayesian design criterion for each design candidate.
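A basic ingredient of any loss that compares NROY volumes is an estimate of the volume itself, which is commonly obtained by Monte Carlo sampling over the input space. The sketch below is a generic illustration of that single step, not the authors' criterion; the implausibility surface `toy_implausibility` is entirely made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def nroy_volume(implausibility_fn, dim, n_samples=10_000, cutoff=3.0):
    """Monte Carlo estimate of the NROY fraction of a [0, 1]^dim input space."""
    X = rng.uniform(size=(n_samples, dim))
    I = implausibility_fn(X)
    return np.mean(I < cutoff)

# Hypothetical implausibility surface: inputs near the centre of the unit
# square are deemed close to the observations, everything else is ruled out.
def toy_implausibility(X):
    return 10.0 * np.linalg.norm(X - 0.5, axis=1)

frac = nroy_volume(toy_implausibility, dim=2)  # fraction of inputs not ruled out
```

For this toy surface the NROY region is a disc of radius 0.3 (where 10 * distance < 3), so the estimate should sit near its area of about 0.28.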