In many important inverse problems and engineering computations (e.g. numerical weather prediction, medical tomography, reliability analysis), data are related to the parameters of interest through the solution of an ordinary or partial differential equation (DE). To proceed with computation, the DE must be discretised and solved through linear algebra methods. However, such discretisation introduces bias into parameter estimates and can in turn cause conclusions to be over-confident. Probabilistic numerical methods for DEs and linear algebra aim to provide uncertainty quantification in the solution space of the DE, to properly account for the fact that the governing equations have been altered through discretisation. In contrast to the worst-case error bounds of classical numerical analysis, the stochasticity in probabilistic DE and linear solvers serves as the carrier of uncertainty about discretisation error and its impact. This statistical notion of discretisation uncertainty can then be more easily propagated to later inferences, e.g. in a Bayesian inverse problem. Several such probabilistic numerical methods have been developed in recent years, and the connections and distinctions between them are beginning to be understood. In particular, an important challenge is to ensure that the resulting uncertainty estimates are well-calibrated. This minisymposium will examine recent advances in both the development and implementation of probabilistic numerical methods.
16:30
On the role of exponential integrability of probabilistic integrators for approximate Bayesian inference
Han Cheng Lie | Universität Potsdam | Germany
In designing numerical methods for applications in the sciences and in engineering, randomisation may be used to account for uncertainty or to reduce complexity, e.g. due to the high dimensionality of data. For example, Gaussian processes have been used extensively as computationally efficient models for complex physical systems. In the field of probabilistic numerics, Gaussian random variables have been used in probabilistic integrators for ordinary differential equations, in order to account for discretisation error (Conrad et al., Stat. Comput., 2017). However, the consequences of randomisation for Bayesian inference were only recently studied, for the Gaussian process case, by Stuart and Teckentrup (Math. Comp., 2017). The purpose of this talk is to highlight the role of exponential integrability as a key property of probabilistic integrators, in particular when such integrators are used for Bayesian inference.
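The class of probabilistic integrators referred to above (Conrad et al., 2017) perturbs each deterministic step with a Gaussian draw whose scale shrinks with the step size. A minimal sketch, assuming a forward-Euler base method and an illustrative noise scale `sigma * h**1.5`; the precise scaling required by the theory depends on the order of the base method:

```python
import numpy as np

def randomised_euler(f, x0, t_span, h, sigma=0.1, rng=None):
    """Illustrative probabilistic Euler integrator: each deterministic
    step is perturbed by a Gaussian draw whose scale shrinks with h,
    so an ensemble of trajectories represents uncertainty about the
    discretisation error.  (Sketch only; sigma and the h**1.5 scaling
    are illustrative choices, not the construction from the talk.)"""
    rng = np.random.default_rng(rng)
    t0, t1 = t_span
    n = int(round((t1 - t0) / h))
    xs = [np.atleast_1d(np.asarray(x0, dtype=float))]
    for k in range(n):
        x = xs[-1]
        step = x + h * f(t0 + k * h, x)                 # deterministic Euler step
        noise = sigma * h**1.5 * rng.standard_normal(x.shape)
        xs.append(step + noise)
    return np.array(xs)

# Ensemble of endpoints for dx/dt = -x on [0, 1]: the spread across
# draws quantifies discretisation uncertainty.
paths = np.array([randomised_euler(lambda t, x: -x, 1.0, (0.0, 1.0),
                                   0.01, rng=seed)[-1]
                  for seed in range(50)])
```

Taking `sigma` to zero recovers the deterministic Euler method, while the ensemble spread for positive `sigma` is what must remain exponentially integrable for the Bayesian-inference guarantees discussed in the talk.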
17:00
BVPs, computational pipelines and a probabilistic numerics GOODE
Michael Schober | Max Planck Institute for Intelligent Systems | Germany
In the first part of this talk, we argue for the study of composite probabilistic numerical methods in the sense of Cockayne et al.'s computational pipelines. These are of particular importance because many traditional numerical algorithms reduce their subproblems to calls to nested numerical algorithms. We demonstrate this for boundary value problems (BVPs), which can be formulated as a root-finding problem whose iterations require the repeated solution of related linear systems. In the second part of this talk, we present GOODE, a probabilistic numerical algorithm for the solution of nonlinear BVPs. GOODE extends earlier work on linear problems by reformulating the iterative quasilinearisation process in the context of GP regression. GOODE (as a proof-of-concept version) performs on par with non-probabilistic state-of-the-art codes on an established benchmark of test problems and is available as an open-source Matlab implementation. As GOODE makes no use of the probabilistic description of its subtasks, we discuss how some ideas from the computational pipelines might be used in future work.
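The composite structure mentioned above can be illustrated with single shooting: the BVP becomes a root-finding problem for the unknown initial slope, and each Newton iteration calls a nested IVP solver and solves a (here scalar) linear system. This is a generic textbook construction under simplifying assumptions (scalar second-order problem, RK4 inner solver, finite-difference Jacobian), not the GOODE algorithm itself:

```python
import numpy as np

def shooting_bvp(f, a, b, n_steps=200, s0=0.0, tol=1e-10, max_iter=50):
    """Solve u'' = f(t, u) on [0, 1] with u(0) = a, u(1) = b by single
    shooting: Newton iteration on the unknown initial slope s, where each
    residual evaluation calls a nested IVP solver and the Newton step is
    a (1x1) linear solve -- the composite structure discussed above."""
    h = 1.0 / n_steps

    def ivp_endpoint(s):
        # Nested numerical algorithm: integrate the IVP u(0)=a, u'(0)=s
        # with classical RK4 and return u(1).
        y = np.array([a, s])
        def g(t, y):
            return np.array([y[1], f(t, y[0])])
        for k in range(n_steps):
            t = k * h
            k1 = g(t, y); k2 = g(t + h/2, y + h/2*k1)
            k3 = g(t + h/2, y + h/2*k2); k4 = g(t + h, y + h*k3)
            y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        return y[0]

    s = s0
    for _ in range(max_iter):
        r = ivp_endpoint(s) - b                       # root-finding residual
        if abs(r) < tol:
            break
        eps = 1e-6
        J = (ivp_endpoint(s + eps) - r - b) / eps     # FD Jacobian (1x1)
        s = s - r / J                                 # linear solve of Newton step
    return s

# u'' = -u with u(0) = 0, u(1) = sin(1): the exact initial slope is 1.
slope = shooting_bvp(lambda t, u: -u, 0.0, np.sin(1.0))
```

Each outer iteration is oblivious to the inner solver's error, which is precisely the kind of hidden subtask uncertainty that a probabilistic computational pipeline would aim to expose.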
17:30
Probabilistic solutions to ordinary differential equations as non-linear Bayesian filtering and smoothing: Gaussian approximations
Filip Tronarp | EEA Aalto University | Finland
Numerical solution of ordinary differential equations can be posed as a non-linear Bayesian filtering problem by defining a Gaussian state-space prior, with a measurement model given by the difference between the derivative and the vector field evaluated at the candidate solution. This problem can be solved by fairly standard approaches in signal processing, such as Gaussian filtering and smoothing, and sequential Monte Carlo. In this talk we discuss some of the Gaussian filtering and smoothing approaches, leading to explicit, semi-implicit, and implicit schemes. Some connections to squared-norm minimisation in reproducing kernel Hilbert spaces subject to non-linear functional constraints are also given.
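The filtering formulation described above can be sketched with a small extended Kalman filter: a once-integrated Wiener process prior on (x, dx/dt), and a pseudo-observation at each grid point that the residual dx/dt - f(x) is zero. This is an illustrative scalar sketch with an EKF1-style linearisation via the Jacobian `df`, not the exact schemes discussed in the talk:

```python
import numpy as np

def ekf_ode_solver(f, df, x0, t1, h, q=1.0):
    """Solve dx/dt = f(x) by Gaussian filtering: a once-integrated
    Wiener process prior on (x, dx/dt), assimilating the pseudo-
    observation 0 = dx/dt - f(x) with an extended Kalman update at
    each grid point.  (Scalar illustrative sketch.)"""
    A = np.array([[1.0, h], [0.0, 1.0]])               # IWP(1) transition
    Q = q * np.array([[h**3/3, h**2/2], [h**2/2, h]])  # process noise
    m = np.array([x0, f(x0)])                          # initialise at exact slope
    P = np.zeros((2, 2))
    for _ in range(int(round(t1 / h))):
        # Predict under the Gauss-Markov prior.
        m, P = A @ m, A @ P @ A.T + Q
        # Observe the ODE residual m[1] - f(m[0]) as zero.
        H = np.array([-df(m[0]), 1.0])                 # EKF1 linearisation
        S = H @ P @ H + 1e-12                          # innovation variance
        K = P @ H / S                                  # Kalman gain
        m = m - K * (m[1] - f(m[0]))
        P = P - np.outer(K, H @ P)
    return m, P   # posterior mean/covariance at t1; m[0] estimates x(t1)

# dx/dt = -x, x(0) = 1: the filter mean tracks exp(-t) and P carries
# a statistical notion of discretisation uncertainty.
mean, cov = ekf_ode_solver(lambda x: -x, lambda x: -1.0, 1.0, 1.0, 0.01)
```

Swapping the linearisation point or iterating the update yields the explicit, semi-implicit, and implicit flavours mentioned in the abstract; a backward smoothing pass would condition the whole trajectory on all residual observations.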
18:00
Approximate Bayesian solutions to nonlinear differential equations
Junyang Wang | Newcastle University | United Kingdom
The interpretation of numerical algorithms as statistical estimation methods has led to the development of ‘probabilistic numerical methods’ (PNMs), which take a probability distribution over their inputs and return a probability distribution over their outputs. PNMs are therefore Bayesian analogues of traditional numerical algorithms that produce point estimates of some desired quantity. It is then natural to ask whether genuinely Bayesian PNMs can be developed for the numerical solution of differential equations. In recent work it was argued that exact Bayesian PNMs can be developed for a limited class of differential equations that exhibit certain Lie symmetries. To achieve greater generality, we propose approximately Bayesian PNMs that are applicable to a much wider range of differential equations, including nonlinear PDEs, and that remain, in a rigorous sense, close to exact Bayesian PNMs.