This mini-symposium aims to bring together researchers working on kernel and other sampling-based approximation methods for high-dimensional problems, in particular, but not restricted to, quasi-Monte Carlo (QMC) methods and sparse grid methods. Kernel methods and the related Gaussian process surrogate models are a powerful class of numerical methods, and they are often employed in problems arising in uncertainty quantification (UQ). Nonetheless, much remains to be explored in their theoretical analysis for UQ applications, which are often formulated as high-dimensional approximation or integration problems.
At the same time, the theory and applicability of QMC and sparse grid approximation and integration techniques for high- or infinite-dimensional problems have advanced considerably in recent years, yet they are far from addressing all problems of interest in UQ.
The objective of this mini-symposium is to showcase the latest theoretical results and to exchange ideas on sampling-based high-dimensional integration and approximation methods targeting UQ applications.
14:00
Accelerating least-squares by chained approximations
Bastian Bohn | University of Bonn | Germany
Authors:
Bastian Bohn | University of Bonn | Germany
Michael Griebel | University of Bonn | Germany
Jens Oettershagen | Deutsche Post DHL | Germany
Christian Rieger | University of Bonn | Germany
In this talk, we consider approximation systems based on concatenated functions. In contrast to general deep neural networks, we consider special function systems that serve to accelerate the convergence of scattered-data least-squares approximations for sparse grids and in reproducing kernel Hilbert spaces. For the first case, we introduce an automated optimal linear domain transformation, specifically tailored to the ANOVA structure of the underlying function from which the samples have been drawn. For the second case, we show how a generalized representer theorem for kernel learning serves as a first theoretical cornerstone for optimizing deep kernel networks. Finally, we provide several numerical examples to illustrate how these approximation systems can be used to tackle function reconstruction and machine learning tasks.
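As a rough, hypothetical sketch of the "chained" idea (not the authors' code; the transformation matrix, target function, and polynomial basis below are illustrative assumptions), the following fits a least-squares approximation after first passing the samples through a linear domain transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # toy function with one dominant direction, so that a linear
    # transformation can align the basis with its ANOVA structure
    return np.sin(x[:, 0] + 0.5 * x[:, 1])

n, d = 200, 2
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = target(X)

# assumed transformation: hand-picked here; in the chained setting it
# would be optimized automatically from the data
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
Z = X @ A.T  # transformed samples

def design(Z, degree=3):
    # small tensor-product polynomial basis on the transformed samples
    cols = [Z[:, 0] ** i * Z[:, 1] ** j
            for i in range(degree + 1) for j in range(degree + 1)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(Z), y, rcond=None)
resid = np.linalg.norm(design(Z) @ coef - y) / np.linalg.norm(y)
print(f"relative residual: {resid:.2e}")
```

In the talk's setting the transformation is derived automatically from the ANOVA structure of the underlying function; here it is simply fixed by hand to show the effect of fitting in transformed coordinates.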
14:30
- CANCELED - Kernel-based lattice point interpolation for uncertainty quantification using periodic random variables
Ian Sloan | University of New South Wales | Australia
Authors:
Ian Sloan | University of New South Wales | Australia
Vesa Kaarnioja | University of New South Wales | Australia
Yoshihito Kazashi | École polytechnique fédérale de Lausanne | Switzerland
Frances Kuo | University of New South Wales | Australia
Fabio Nobile | École polytechnique fédérale de Lausanne | Switzerland
In this talk I describe joint work with Vesa Kaarnioja and Frances Kuo at UNSW, and Fabio Nobile and Yoshihito Kazashi at EPF Lausanne, in which we use kernel approximation based on periodic random variables and lattice rules to approximate the solution of an elliptic PDE with a random field as input. The method is shown to give good (even if not optimal) rates of convergence, independently of the truncation dimension, while often being simple and cheap to implement. In more detail, the kernel in question is the reproducing kernel of an unanchored weighted Hilbert space, in which the weight for a given subset of the variables measures the importance of that subset. The method is easily implemented for product weights, but because product weights are not always justifiable, implementation with POD or SPOD weights is also discussed.
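As a toy illustration of lattice-point kernel interpolation (a sketch, not the authors' implementation: the generating vector, kernel weight, and test function below are arbitrary assumptions), one can interpolate at rank-1 lattice points with a shift-invariant periodic product kernel built from the degree-2 Bernoulli polynomial:

```python
import numpy as np

def lattice_points(n, z):
    # rank-1 lattice: x_k = frac(k * z / n), k = 0, ..., n-1
    k = np.arange(n)[:, None]
    return np.mod(k * np.asarray(z)[None, :] / n, 1.0)

def kernel(X, Y, eta=1.0):
    # product over dimensions of 1 + eta * B_2(frac(x - y)),
    # with B_2(t) = t^2 - t + 1/6 the Bernoulli polynomial of degree 2
    diff = np.mod(X[:, None, :] - Y[None, :, :], 1.0)
    b2 = diff ** 2 - diff + 1.0 / 6.0
    return np.prod(1.0 + eta * b2, axis=2)

n, z = 64, [1, 19]            # toy generating vector, not an optimized choice
X = lattice_points(n, z)

def f(X):                     # smooth periodic test function
    return np.prod(1.0 + 0.3 * np.sin(2 * np.pi * X), axis=1)

# solve the interpolation system K c = f(X)
coeffs = np.linalg.solve(kernel(X, X), f(X))

# evaluate the interpolant at a new point
Xtest = np.array([[0.3, 0.7]])
approx = kernel(Xtest, X) @ coeffs
print(approx, f(Xtest))
```

Because the lattice points form a group under addition modulo 1 and the kernel is shift-invariant, the interpolation matrix is circulant, so in practice the linear solve can be done with the FFT; the dense solve above is only for clarity.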
15:00
Kernel methods for parametric PDEs
Christian Rieger | University of Bonn | Germany
Author:
Christian Rieger | University of Bonn | Germany
In this talk, I will discuss recent progress on kernel-based methods for parametric PDEs. I will mostly focus on the task of numerically approximating a quantity of interest as a function of the parameters, and will report on recent progress in the error analysis of mesh-free methods in a high-dimensional context. Moreover, I will present how the techniques for deriving error bounds can be employed in both the forward and the inverse reconstruction problems for parametric PDEs. Finally, I will make some remarks on possible improvements in the numerical efficiency of such kernel-based methods.
15:30
The kernel tensor-product multi-level method
Rüdiger Kempf | University of Bayreuth | Germany
Authors:
Rüdiger Kempf | University of Bayreuth | Germany
Holger Wendland | University of Bayreuth | Germany
In applications such as machine learning and uncertainty quantification, one of the main tasks is to reconstruct an unknown function from given data, with data sites lying in a high-dimensional domain. Even for relatively small domain dimensions, this task is usually numerically difficult. We propose a new reconstruction scheme that combines the well-known kernel multi-level technique in low-dimensional domains with the anisotropic Smolyak algorithm, which allows us to derive a high-dimensional interpolation scheme. This new method has significantly lower complexity than traditional high-dimensional interpolation schemes.
In this talk, I will give an introduction to kernel multi-level methods and anisotropic Smolyak algorithms before providing a convergence result for the new kernel tensor-product multi-level method. If time permits, I will also give numerical examples.
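The Smolyak construction underlying such tensor-product methods can be illustrated in a simplified, isotropic form by the classical combination technique (a toy sketch for quadrature with trapezoidal rules, not the kernel multi-level method itself; the integrand and level choices are illustrative assumptions):

```python
import numpy as np
from itertools import product
from math import comb

def trapezoid_1d(l):
    """Nodes and weights of the composite trapezoidal rule with 2**l + 1 points."""
    m = 2 ** l + 1
    x = np.linspace(0.0, 1.0, m)
    w = np.full(m, 1.0 / (m - 1))
    w[0] = w[-1] = 0.5 / (m - 1)
    return x, w

def smolyak_quadrature(f, d, q):
    # combination technique: only levels with q-d+1 <= |l|_1 <= q contribute,
    # weighted by (-1)**(q-|l|_1) * C(d-1, q-|l|_1)
    total = 0.0
    for lev in product(range(1, q + 1), repeat=d):
        s = sum(lev)
        if q - d + 1 <= s <= q:
            c = (-1) ** (q - s) * comb(d - 1, q - s)
            rules = [trapezoid_1d(l) for l in lev]
            nodes = np.array(list(product(*[r[0] for r in rules])))
            weights = np.prod(
                np.array(list(product(*[r[1] for r in rules]))), axis=1)
            total += c * np.dot(weights, f(nodes))
    return total

f = lambda X: np.prod(np.exp(X), axis=1)   # separable test integrand
exact = (np.e - 1.0) ** 2                  # integral of exp(x)exp(y) over [0,1]^2
for q in (3, 5, 7):
    err = abs(smolyak_quadrature(f, 2, q) - exact)
    print(q, err)
```

The sparse sum over level multi-indices, rather than one full tensor grid, is what keeps the point count low; the anisotropic variant in the talk additionally weights the directions by importance.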