In many important inverse problems and engineering computations (e.g. numerical weather prediction, medical tomography, reliability analysis), data are related to parameters of interest through the solution of an ordinary or partial differential equation (DE). To proceed with computation, the DE must be discretised and solved using linear algebra methods. However, such discretisation introduces bias into parameter estimates, which can in turn cause conclusions to be over-confident. Probabilistic numerical methods for DEs and linear algebra aim to provide uncertainty quantification in the solution space of the DE, to properly account for the fact that the governing equations have been altered through discretisation. In contrast to the worst-case error bounds of classical numerical analysis, the stochasticity in probabilistic DE and linear solvers serves as the carrier of uncertainty about discretisation error and its impact. This statistical notion of discretisation uncertainty can then be propagated more easily to later inferences, e.g. in a Bayesian inverse problem. Several such probabilistic numerical methods have been developed in recent years, and the connections and distinctions between them are beginning to be understood. An important outstanding challenge is to ensure that the resulting uncertainty estimates are well-calibrated. This minisymposium will examine recent advances in both the development and implementation of probabilistic numerical methods.
Prior Distributions and Test Statistics for the Bayesian Conjugate Gradient Method
Tim Reid | North Carolina State University | United States
The Bayesian Conjugate Gradient method (BayesCG) is a probabilistic numerical method for solving real symmetric positive definite systems of linear equations, with the goal of modeling the epistemic uncertainty in the solution. BayesCG takes as input a prior probability distribution over the solution, and outputs posterior distributions for the iterates, based on the search directions computed so far.
We investigate informative prior distributions and present analytical and empirical results showing that these priors produce well-calibrated uncertainty quantification while maintaining the convergence rate of CG. We also present test statistics for computing error estimates from the posterior distributions.
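To illustrate the mechanics behind the abstract, the following is a schematic dense sketch of the BayesCG recursions (not Reid's implementation; the test matrix, the identity prior covariance, and the breakdown tolerance are all illustrative choices). Starting from a prior N(x0, Sigma0) on the solution, each search direction updates the posterior mean and shrinks the posterior covariance:

```python
import numpy as np

def bayescg(A, b, x0, Sigma0, m):
    """Schematic BayesCG: posterior mean and covariance after m iterations."""
    x, Sigma = x0.astype(float).copy(), Sigma0.astype(float).copy()
    r = b - A @ x
    directions = []
    for _ in range(m):
        s = r.copy()
        # orthogonalise against earlier directions in the A*Sigma0*A inner product
        for sj in directions:
            s = s - sj * (sj @ (A @ (Sigma0 @ (A @ s))))
        E = s @ (A @ (Sigma0 @ (A @ s)))
        if E < 1e-28:              # breakdown guard: residual already resolved
            break
        s = s / np.sqrt(E)         # normalise so that s^T A Sigma0 A s = 1
        directions.append(s)
        w = Sigma0 @ (A @ s)
        x = x + w * (s @ r)            # posterior mean update
        Sigma = Sigma - np.outer(w, w) # posterior covariance shrinks
        r = b - A @ x
    return x, Sigma
```

With the "natural" prior Sigma0 = A^{-1} the posterior mean reproduces the classical CG iterates; the priors studied in the talk aim to be cheaper to work with while keeping the uncertainty calibrated.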
A probabilistic view on sparse Cholesky factorization
Florian Schäfer | California Institute of Technology | United States
In this talk we will explore how probabilistic interpretations of the well-known Cholesky factorization motivate novel, fast, and simple algorithms for classical problems in numerical analysis. In the first part of the talk we show that many dense kernel matrices arising in statistics, machine learning, and scientific computing have almost sparse Cholesky factors under a suitable elimination ordering, motivated by the conditional near-independence of the associated Gaussian process. In the second part of the talk we show that the problem of finding the best sparse approximate inverse-Cholesky factor (in KL-divergence) of a given positive definite matrix has a simple closed-form solution, recovering algorithms that were used previously (without knowledge of their KL-optimality) in the fields of spatial statistics and sparse approximate inverses. To the best of our knowledge, the resulting method attains the best known asymptotic complexity for this problem. We conclude with a discussion of its practical advantages, including its embarrassingly parallel structure.
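The closed-form column-by-column solution alluded to above can be sketched as follows (an illustrative dense implementation; the function name and the `pattern` argument are mine, and a serious implementation would exploit sparsity rather than store dense matrices). Each column of the factor only requires solving a small linear system on the submatrix indexed by that column's sparsity set:

```python
import numpy as np

def kl_optimal_factor(Theta, pattern):
    """KL-optimal sparse approximate inverse-Cholesky factor L, L @ L.T ~ inv(Theta).

    pattern[i] lists the row indices allowed to be nonzero in column i,
    with i itself listed first.
    """
    n = Theta.shape[0]
    L = np.zeros((n, n))
    for i in range(n):          # columns are independent: embarrassingly parallel
        s = pattern[i]
        K = Theta[np.ix_(s, s)]             # small dense submatrix
        e1 = np.zeros(len(s)); e1[0] = 1.0
        col = np.linalg.solve(K, e1)        # K^{-1} e_1
        L[s, i] = col / np.sqrt(col[0])     # normalise: col[0] = e_1^T K^{-1} e_1
    return L
```

With the full lower-triangular pattern this recovers the exact Cholesky factor of inv(Theta); restricting the pattern yields the KL-optimal sparse approximation described in the abstract.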
Estimation of ordinary differential equation models with discretization error quantification
Takeru Matsuda | Department of Mathematical Informatics, The University of Tokyo | Japan
We consider estimation of ordinary differential equation (ODE) models from noisy observations. For this problem, one conventional approach is to fit numerical solutions (e.g., Euler, Runge–Kutta) of ODEs to data. However, such a method does not account for the discretization error in numerical solutions and has limited estimation accuracy. In this study, we develop an estimation method that quantifies the discretization error based on data. The key idea is to model the discretization error as random variables and estimate their variance simultaneously with the ODE parameter. The proposed method has the form of iteratively reweighted least squares, where the discretization error variance is updated with the isotonic regression algorithm and the ODE parameter is updated by solving a weighted least squares problem using the adjoint system. Experimental results demonstrate that the proposed method improves estimation accuracy by accounting for the discretization error in a data-driven manner.
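The iteratively reweighted least squares structure can be sketched on a toy problem (an illustrative reconstruction, not the authors' code: the scalar ODE dx/dt = -theta*x, the grid search in place of the adjoint-based weighted least squares solve, and all tolerances are my simplifications; the isotonic constraint encodes the assumption that discretisation error accumulates along the trajectory):

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit (unit weights)."""
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop(); m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([[m] * c for m, c in blocks])

def euler(theta, x0, ts):
    """Forward-Euler solution of dx/dt = -theta * x on the observation grid."""
    xs = [x0]
    for k in range(len(ts) - 1):
        xs.append(xs[-1] + (ts[k + 1] - ts[k]) * (-theta * xs[-1]))
    return np.array(xs)

def fit_ode(ts, ys, x0, sigma2, thetas, iters=5):
    v = np.zeros(len(ts))                  # discretisation-error variances
    theta = thetas[0]
    for _ in range(iters):
        w = 1.0 / (sigma2 + v)             # IRLS weights
        # weighted least squares step (grid search standing in for the
        # adjoint-based solve used in the actual method)
        sse = [np.sum(w * (ys - euler(t, x0, ts)) ** 2) for t in thetas]
        theta = thetas[int(np.argmin(sse))]
        r2 = (ys - euler(theta, x0, ts)) ** 2
        # squared residuals estimate sigma2 + v_i; constrain v nondecreasing
        v = np.maximum(pava(r2) - sigma2, 0.0)
    return theta, v
```

Because larger estimated discretisation-error variances downweight the corresponding observations, parameter estimates are less biased by the inaccuracy of the coarse numerical solution.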
Probabilistic rare-event simulation
Alejandro Diaz | University College London | United Kingdom
Estimating the probability of failure of a large physical system can be achieved through subset simulation, an MCMC algorithm for rare-event simulation that is robust to high dimensionality. A crucial step involves evaluating a computer model of the physical system and ranking the model evaluations; input configurations whose output falls below a prescribed threshold defining failure are sequentially discarded. However, in practice the ranked output often comes from a computational pipeline, in which case a probabilistic numerical method can be used to quantify uncertainty due to computation. The problem thus becomes inference based on ranking distributional output, rather than numerical output. In this talk, we explore probabilistic rare-event simulation by ranking the output of a finite element solver that uses the Bayesian conjugate gradient method. We discuss how this impacts the robustness of subset simulation and its implications for model updating, robust design and Bayesian calibration.
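For context, a minimal sketch of classical subset simulation with a deterministic model output is given below (the performance function `g`, the proposal scale, and the level probability `p0` are all illustrative choices; in the probabilistic setting of this talk, each `g` evaluation would instead return a distribution, and the ranking step itself becomes uncertain):

```python
import numpy as np

def subset_simulation(g, dim, b_fail, n=1000, p0=0.1, seed=0, max_levels=20):
    """Estimate P(g(X) >= b_fail) for X ~ N(0, I) via subset simulation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    n_seed = int(p0 * n)
    p = 1.0
    for _ in range(max_levels):
        order = np.argsort(gx)[::-1]           # rank the model evaluations
        b = gx[order[n_seed - 1]]              # intermediate threshold
        if b >= b_fail:
            return p * np.mean(gx >= b_fail)   # final Monte Carlo factor
        p *= p0                                # one more conditional level
        samples, values = [], []
        for idx in order[:n_seed]:             # seeds surviving this level
            cur, gcur = x[idx].copy(), gx[idx]
            for _ in range(n // n_seed):
                cand = cur + 0.5 * rng.standard_normal(dim)
                # componentwise Metropolis step for the standard-normal target
                keep = rng.random(dim) < np.exp(0.5 * (cur ** 2 - cand ** 2))
                prop = np.where(keep, cand, cur)
                gp = g(prop)
                if gp >= b:                    # stay inside the conditional level
                    cur, gcur = prop, gp
                samples.append(cur.copy()); values.append(gcur)
        x, gx = np.array(samples), np.array(values)
    return p * np.mean(gx >= b_fail)
```

The failure probability is assembled as a product of level probabilities around p0, which is what makes the method efficient for rare events; replacing the scalar `g` outputs with posterior distributions from a probabilistic solver is exactly the modification the talk investigates.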