The understanding and incorporation of data within models has become a vital component of applied mathematics. A fundamental question one can ask is: given noisy measurements, how does one recover some unknown quantity of interest? Examples of such fields include inverse problems, which are primarily concerned with parameter estimation, and data assimilation, which focuses on state estimation. Both fields have seen a considerable amount of attention due to recent advances in both classical and statistical approaches. In particular, this mini-symposium will consider particle methods for solving inverse problems with the help of optimization tools, as well as particle methods that aim to represent the posterior distribution of a Bayesian inverse problem.
The motivation behind this mini-symposium is to bring together experts from both schools. This would provide a complementary setting in which the connections currently being developed between the two areas can be discussed.
14:00
Ensemble-subspace formulation of an iterative smoother for solving inverse problems
Geir Evensen | NORCE and Nansen Environmental and Remote Sensing Center | Norway
Given a prior x, a nonlinear model y = g(x), and measurements d of y, we define the inverse problem as solving for x given the measurements d, and we note that this problem is normally highly underdetermined. Assume that we can represent prior information about x by a Gaussian pdf and that the measurements have Gaussian errors. It is then possible to sample approximately from the Bayesian posterior pdf of the inverse problem by minimizing an ensemble of cost functions using Iterative Ensemble Smoothers (IES). IES methods work well in the weakly nonlinear case, even for very high-dimensional problems. We will discuss the properties of IES methods and present a new ensemble-subspace formulation of the IES. We provide a simple derivation of the new algorithm and demonstrate its practical implementation and use. The ensemble-subspace algorithm is suitable for big-data assimilation of measurements with correlated errors. The computational cost is linear in both the dimension of the estimated parameters, x, and the number of measurements in d, even when the measurements have correlated errors. We will discuss and illustrate the properties, limitations, and practical use of the subspace IES algorithm in several examples of varying dimensionality and degree of nonlinearity.
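As an illustration of the ensemble-update idea that underlies IES methods (a single plain ensemble-smoother step on a toy linear problem, not the speaker's subspace algorithm), a minimal sketch might look as follows; all function and variable names are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_smoother_update(X, g, d, Cdd):
    """One ensemble-smoother update: move the prior ensemble X toward
    measurements d of y = g(x), assuming Gaussian measurement errors Cdd."""
    N = X.shape[1]                                    # ensemble size
    Y = np.column_stack([g(X[:, j]) for j in range(N)])
    # Perturbed observations, one realization per ensemble member
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), Cdd, size=N).T
    Xc = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xc @ Yc.T / (N - 1)                         # cross-covariance
    Cyy = Yc @ Yc.T / (N - 1)                         # predicted-data covariance
    return X + Cxy @ np.linalg.solve(Cyy + Cdd, D - Y)

# Toy linear problem y = A x, where one update suffices
A = np.array([[1.0, 0.5]])
g = lambda x: A @ x
x_true = np.array([1.0, -1.0])
Cdd = np.array([[0.01]])
d = g(x_true) + rng.multivariate_normal([0.0], Cdd)
X = rng.standard_normal((2, 200))                     # prior ensemble ~ N(0, I)
Xa = ensemble_smoother_update(X, g, d, Cdd)
```

In the linear-Gaussian setting this single update samples (up to ensemble noise) from the Bayesian posterior; iterative schemes such as IES repeat a damped version of this step to handle nonlinear g.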
14:30
Optimize, Learn, Sample
Alfredo Garbuno Inigo | California Institute of Technology | United States
The calibration of complex models to data is both a challenge and an opportunity; it can be posed as an inverse problem. This work focuses on the interface of Ensemble Kalman Inversion (EKI), Gaussian process emulation (GPE), and Markov chain Monte Carlo (MCMC) for the calibration of, and quantification of uncertainty in, parameters learned from data. The goal is to perform uncertainty quantification in predictions made from complex models, reflecting uncertainty in these parameters, with relatively few forward-model evaluations. This is achieved by propagating approximate posterior samples obtained by a judicious combination of ideas from EKI, GPE, and MCMC. The strategy will be illustrated with idealized models related to climate modeling.
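A heavily simplified sketch of the emulate-then-sample part of such a pipeline, with a hand-rolled GP mean standing in for a full GPE trained on ensemble evaluations, and random-walk Metropolis standing in for a production MCMC; the toy forward model and all names below are illustrative assumptions, not the speaker's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive forward model, plus one noisy datum
G = lambda u: np.sin(u) + 0.5 * u
u_true, noise_sd = 0.8, 0.1
y = G(u_true) + noise_sd * rng.standard_normal()

# Emulation step: GP posterior mean fitted to a few forward runs
U = np.linspace(-2.0, 2.0, 15)                 # design points (training inputs)
def gp_mean(u, ell=0.5, jitter=1e-6):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(U, U) + jitter * np.eye(len(U))
    return k(np.atleast_1d(u), U) @ np.linalg.solve(K, G(U))

# Sampling step: random-walk Metropolis on the emulated posterior
def log_post(u):                               # Gaussian likelihood + N(0,1) prior
    return -0.5 * (y - gp_mean(u)[0])**2 / noise_sd**2 - 0.5 * u**2

u, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = u + 0.5 * rng.standard_normal()     # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        u, lp = prop, lp_prop
    chain.append(u)
```

The point of the combination is that every MCMC step queries only the cheap emulator, so the expensive model is evaluated just at the design points (which, in the EKI-based approach, come from the inversion ensemble).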
15:00
Linking data assimilation with reinforcement learning for individualized dosing policies
Corinna Maier | University of Potsdam | Germany
In clinical practice, therapy individualization based on therapeutic drug/biomarker monitoring offers the opportunity to significantly improve the efficacy and safety of drug treatments. Adaptive dosing strategies, however, require treatment decisions that are associated with uncertainties, high risks, and sometimes time constraints. Mathematical models describing the pharmacokinetics and pharmacodynamics of the drug can be leveraged to support decision-making by predicting therapy outcomes. We present a continuous learning strategy that allows us to improve and individualize both the considered model and the dosing policy. To this end, the model parameters and states are sequentially updated via a particle-based Bayesian data assimilation (DA) scheme whenever new patient-specific data are observed. The updated model can then be used in a model-based reinforcement learning (RL) step (planning) to tailor the dosing policy to the specific patient. An additional direct RL step allows us to correct for potential structural model bias. We present this interwoven learning and individualization framework to control chemotherapy-induced neutropenia, the most frequent dose-limiting side effect of cytotoxic anticancer drugs.
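The particle-based DA update at the heart of such a scheme (reweight by the likelihood of the new observation, resample on degeneracy) can be sketched in a few lines. The toy one-compartment elimination model and all names below are illustrative assumptions, not the pharmacokinetic model from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_update(particles, weights, obs, h, obs_sd):
    """Reweight particles by the Gaussian likelihood of a new observation,
    and resample when the effective sample size degenerates."""
    lik = np.exp(-0.5 * (obs - h(particles))**2 / obs_sd**2)
    weights = weights * lik
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights**2)              # effective sample size
    if ess < 0.5 * len(particles):              # multinomial resampling
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1 / len(particles))
    return particles, weights

# Toy example: infer a patient-specific elimination rate k from one
# concentration measurement c(t) = c0 * exp(-k t)
c0, t, obs_sd, k_true = 10.0, 2.0, 0.3, 0.4
obs = c0 * np.exp(-k_true * t) + obs_sd * rng.standard_normal()
h = lambda k: c0 * np.exp(-k * t)
particles = rng.uniform(0.05, 1.0, size=2000)   # prior over k
weights = np.full(len(particles), 1 / len(particles))
particles, weights = particle_update(particles, weights, obs, h, obs_sd)
k_est = np.sum(weights * particles)             # posterior mean estimate
```

In the framework described above, each new monitoring measurement triggers such an update of parameters and states, and the resulting particle approximation of the patient-specific posterior feeds the planning (RL) step.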
15:30
Multilevel Monte Carlo methods for Bayesian inverse problems
Kody Law | University of Manchester | United Kingdom
This talk will concern the problem of inference when the posterior involves continuous models that require approximation before inference can be performed. For Monte Carlo inference approaches, the multilevel Monte Carlo (MLMC) method provides a way of optimally balancing discretization and sampling error across a hierarchy of approximation levels, so that the overall cost is optimized. This talk will review three primary strategies that have been successfully employed to achieve optimal (or canonical) convergence rates in an inference context, in other words, faster convergence than i.i.d. sampling at the finest discretization level. Some of the specific resulting algorithms, and applications, will also be presented.
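The telescoping identity behind MLMC, E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], can be sketched on a toy forward-simulation example (geometric Brownian motion under Euler–Maruyama discretization; an illustrative assumption, not drawn from the talk). The key point is that fine and coarse paths on each correction level share the same Brownian increments, so the corrections have small variance and need few samples:

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_gbm(T, mu, sigma, x0, dW):
    """Euler–Maruyama endpoint for dX = mu*X dt + sigma*X dW."""
    dt, x = T / len(dW), x0
    for w in dW:
        x = x + mu * x * dt + sigma * x * w
    return x

def mlmc_estimate(L, N, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Telescoping MLMC estimate of E[X_T], with N[l] samples on level l
    (2**l time steps); coarse paths reuse the fine Brownian increments."""
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dt = T / nf
        s = 0.0
        for _ in range(N[l]):
            dW = np.sqrt(dt) * rng.standard_normal(nf)    # fine increments
            Pf = euler_gbm(T, mu, sigma, x0, dW)
            if l == 0:
                s += Pf                                   # base level
            else:
                dWc = dW.reshape(-1, 2).sum(axis=1)       # coupled coarse path
                s += Pf - euler_gbm(T, mu, sigma, x0, dWc)
        est += s / N[l]
    return est

# Many samples on cheap coarse levels, few on expensive fine ones
est = mlmc_estimate(L=4, N=[20000, 10000, 5000, 2500, 1250])
# Exact value for comparison: E[X_T] = x0 * exp(mu * T)
```

Optimizing the per-level sample counts N[l] against the decaying correction variances is what yields the cost savings over i.i.d. sampling at the finest level; the inference strategies surveyed in the talk extend this idea to posterior expectations.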