The central tenet of probabilistic numerics is that uncertainty due to discretisation can be formally modelled. For numerical cubature under a limited computational budget, it is natural to seek to exploit any contextual information that may be available on the integrand. Classical cubatures, such as spline-based or Gaussian cubatures, can exploit abstract mathematical information, such as the number of continuous derivatives of the integrand. However, when information of a more contextual and perhaps speculative nature is available to the analyst, generic classical cubatures can be sub-optimal, since they fail to take this information into account. The language of probability provides one mechanism through which diverse contextual information about the integrand can be captured. Through the formalism of a stochastic process model, the analyst can encode both abstract mathematical information, such as the number of continuous derivatives of the integrand, and specific contextual information, such as the possibility of a local trend or a periodic component. This minisymposium focuses on the development of probabilistic methods for numerical cubature, showcasing state-of-the-art approaches in this nascent research field.
The Successes and Challenges of Automatic Bayesian Cubature
Fred J. Hickernell | Illinois Institute of Technology | United States
The promise of Bayesian cubature is that, given reasonable prior information (or assumptions) about the integrand, one can construct an optimal approximation to the corresponding (multidimensional) integral and, simultaneously, a credible interval. Automatic Bayesian cubature increases the sample size until the half-width of the credible interval is small enough. We discuss how to choose appropriate covariance kernels, estimate their hyper-parameters, and construct the credible intervals, all with reasonable computational effort. We also evaluate the performance of automatic Bayesian cubature on a variety of examples.
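The stopping rule described above can be sketched in a few lines. This is an illustrative toy, not the speaker's algorithm: it assumes a Brownian-motion covariance kernel min(x, y) on [0, 1] (whose kernel integrals are available in closed form), a single amplitude hyper-parameter estimated by maximum likelihood, and doubling of the sample size until the 95% credible half-width meets the tolerance. The names `bayes_cubature` and `auto_bayes_cubature` are invented for this sketch.

```python
import numpy as np

def bayes_cubature(f, n):
    """Bayesian cubature on [0, 1] under a Brownian-motion prior, k(x, y) = min(x, y)."""
    x = np.arange(1, n + 1) / n          # nodes in (0, 1]
    K = np.minimum.outer(x, x)           # kernel Gram matrix
    fx = f(x)
    z = x - x**2 / 2                     # z_i = int_0^1 min(x_i, t) dt
    w = np.linalg.solve(K, z)            # cubature weights K^{-1} z
    c = np.linalg.solve(K, fx)
    sigma2 = fx @ c / n                  # ML estimate of the kernel amplitude
    mean = w @ fx                        # posterior mean of the integral
    var = sigma2 * (1/3 - w @ z)         # int_0^1 int_0^1 min(s, t) ds dt = 1/3
    return mean, var

def auto_bayes_cubature(f, tol=1e-3, n=8, n_max=4096):
    """Double the sample size until the 95% credible half-width is below tol."""
    while True:
        mean, var = bayes_cubature(f, n)
        half = 1.96 * np.sqrt(max(var, 0.0))
        if half <= tol or n >= n_max:
            return mean, half, n
        n *= 2
```

For this prior the posterior mean of the integral coincides with a trapezoid-type rule through the observed values, while the credible half-width supplies the data-driven stopping criterion.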
Gaussian processes and uncertainty quantification
Toni Karvonen | Aalto University | Finland
Gaussian processes (GPs) are among the most prevalent classes of priors used in probabilistic numerical algorithms. However, little is known about the quality of the uncertainty quantification such an algorithm provides, for example in terms of the frequentist coverage of credible intervals. Intuitively, uncertainty quantification ought to be meaningful if the function of interest is a sample path of the GP prior. This talk discusses and provides new results on (a) sample path spaces of GPs and (b) asymptotic rates of decay of credible intervals for different function classes when the scale parameter of the covariance kernel is estimated from data. Our particular focus is on Bayesian cubature and on kernels that induce Sobolev spaces.
Integrals of linearly constrained Gaussians
Alexandra Gessner | University of Tuebingen | Germany
Integrals of linearly constrained Gaussian densities are ubiquitous in machine learning and statistics, yet they are notoriously hard to compute. To further complicate matters, the numerical values of such integrals are typically very small. We introduce an efficient black-box algorithm for the computation of such integrals. It is a tailored combination of the Holmes-Diaconis-Ross (HDR) method and elliptical slice sampling (ESS). Remarkably, this algorithm enables the direct computation of the logarithm of the integral value.
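One ingredient of the HDR-plus-ESS combination can be sketched on its own: elliptical slice sampling restricted to a linear constraint set, here implemented with the standard bracket-shrinking acceptance loop. This is not the full algorithm of the talk (which, among other things, exploits the geometry of the constrained ellipse and nests the sampler inside the HDR sequence of relaxed constraints); it is a minimal sketch assuming a standard Gaussian target and a polytope {x : A x <= b}, with function names invented for the example.

```python
import numpy as np

def ess_constrained_step(x, rng, feasible):
    """One elliptical-slice step targeting N(0, I) restricted to feasible()."""
    nu = rng.standard_normal(x.shape)          # auxiliary draw defining the ellipse
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        proposal = x * np.cos(theta) + nu * np.sin(theta)
        if feasible(proposal):
            return proposal
        # shrink the bracket towards theta = 0, i.e. the current (feasible) state
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

def sample_constrained_gaussian(A, b, x0, n_samples, seed=0):
    """Markov chain targeting N(0, I) restricted to the polytope {x : A x <= b}."""
    rng = np.random.default_rng(seed)
    feasible = lambda x: np.all(A @ x <= b)
    assert feasible(x0), "the initial state must satisfy the constraints"
    x, out = x0, []
    for _ in range(n_samples):
        x = ess_constrained_step(x, rng, feasible)
        out.append(x)
    return np.array(out)
```

Because every proposal lies on an ellipse through the current state, the step has no tunable step size and remains rejection-free in the slice-sampling sense, which is what makes it attractive inside a nested scheme for very small probabilities.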
Adaptivity in Bayesian Cubature
Matthew Fisher | Newcastle University | United Kingdom
Bayesian cubature is a popular probabilistic approach to numerical integration in which a stochastic process model is posited for the integrand. After conditioning on data, the integral of the process provides a representation of our uncertainty about the numerical integral. Several approaches have been put forward to encode sequential adaptivity (i.e. dependence on previous integrand evaluations) into this method. However, existing proposals have been limited to either estimating the parameters of a stationary covariance model or focusing computational resources in spatial regions where the integrand takes large values. In contrast, classical adaptive methods are more direct and focus computational resources on regions where local error estimates are largest, thus potentially reducing the total number of integrand evaluations required to obtain a prescribed error tolerance. In this talk we will demonstrate that, unlike the case for classical non-adaptive cubature methods, there are in general no direct Bayesian analogues of classical adaptive cubature methods. Motivated by this result, we develop a novel adaptive Bayesian cubature method that empirically exhibits behaviour similar to that of classical adaptive methods.
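The gap the talk addresses can be seen in the standard sequential-design baseline: greedily placing each new node to minimise the posterior variance of the integral. The sketch below (a toy under a Brownian-motion kernel min(x, y) on [0, 1], with invented function names, and not the novel method of the talk) makes the limitation explicit: for a fixed kernel the posterior variance does not depend on the integrand values at all, so this rule cannot react to local error estimates the way classical adaptive cubature does.

```python
import numpy as np

def integral_variance(x):
    """Posterior variance of int_0^1 f under a Brownian-motion prior, given nodes x."""
    K = np.minimum.outer(x, x)           # kernel Gram matrix, k(s, t) = min(s, t)
    z = x - x**2 / 2                     # kernel mean embedding at the nodes
    return 1/3 - z @ np.linalg.solve(K, z)

def greedy_design(n, candidates=None):
    """Greedily add nodes minimising the posterior integral variance.

    Note: no integrand evaluations appear anywhere in this criterion.
    """
    if candidates is None:
        candidates = np.linspace(0.01, 1.0, 100)
    x = np.array([1.0])                  # start from a single node
    for _ in range(n - 1):
        best = min((c for c in candidates if c not in x),
                   key=lambda c: integral_variance(np.append(x, c)))
        x = np.append(x, best)
    return np.sort(x)
```

The resulting designs spread nodes over the whole domain regardless of where the integrand is difficult, which is precisely the non-adaptive behaviour that motivates seeking genuinely adaptive Bayesian schemes.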