The overwhelming majority of modern applications in the natural sciences, engineering, and beyond require both statistical estimation, to accurately quantify the behavior of unknown distributed parameters in complex systems, and a means of making optimal decisions that are resilient to this uncertainty. In this minisymposium, we aim to connect researchers working in the optimization of complex systems under uncertainty, such as equilibrium problems, differential algebraic equations, and partial differential equations, with statisticians working in variational statistics, infinite-dimensional statistical estimation, and optimum experimental design.
14:00
Variance Reduction Schemes for Stochastic Variational Inequalities
Mathias Staudigl | Maastricht University | Netherlands
We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by pseudo-monotone, Lipschitz continuous operators. The main computational advantage of Tseng's algorithm is that it relies only on a single projection step and two independent queries of a stochastic oracle. Our algorithm incorporates a variance reduction mechanism and leads to almost sure (a.s.) convergence to an optimal solution. To the best of our knowledge, this is the first stochastic look-ahead algorithm achieving this by using only a single projection at each iteration. In this talk we will also discuss extensions to the distributed computation of generalized Nash equilibria subject to stochastic uncertainty.
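The abstract does not include code, but the deterministic skeleton of Tseng's forward-backward-forward iteration it builds on can be sketched as follows; the box constraint, the toy operator, and all names are illustrative, and the stochastic variant of the talk would replace the two evaluations of F by independent oracle queries.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def tseng_fbf(F, project, x0, step, iters=200):
    # Tseng's forward-backward-forward method: per iteration,
    # a single projection and two evaluations of the operator F.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - step * F(x))    # forward-backward step (the single projection)
        x = y - step * (F(y) - F(x))    # second forward (correction) step
    return x
```

For a toy strongly monotone operator F(x) = x - a with a inside the box, the iterates converge to a for any step size below 1/L, where L = 1 is the Lipschitz constant of F.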
14:30
Performance of Multilevel Monte Carlo Techniques for Robust PDE-Constrained Optimization
Andreas Van Barel | KU Leuven | Belgium
Gradients (or Hessian-vector products) for robust PDE-constrained optimization problems can be evaluated using multilevel Monte Carlo (MLMC) methods. Such Monte Carlo-type methods are of particular interest for dealing with high-dimensional uncertainties. An existing optimization algorithm, ideally extended to adapt the requested precision of the MLMC evaluations intelligently, can then be used to obtain the desired solution. In this talk, we test the performance of such a method for several (model) problems, all of tracking type. In particular, we consider problems constrained by the Laplace equation, the heat equation, and the viscous Burgers' equation. Additionally, we investigate the behavior of an MG/OPT algorithm for these problems.
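As background, the telescoping structure that MLMC exploits can be illustrated on a scalar toy quantity; the "simulator" below is a placeholder standing in for the PDE discretization levels of the talk, and all names are illustrative.

```python
import numpy as np

def sample_level(l, n, rng):
    # Placeholder level-l "discretization": P_l(z) = z^2 * (1 + 2^{-l}),
    # so E[P_l] = 1 + 2^{-l} -> 1 as the level is refined.
    z = rng.standard_normal(n)
    fine = z**2 * (1.0 + 2.0**(-l))
    coarse = z**2 * (1.0 + 2.0**(-(l - 1))) if l > 0 else np.zeros(n)
    return fine, coarse

def mlmc_estimate(L, n_per_level, rng):
    # Telescoping sum: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    # where each fine/coarse pair is driven by the same random input z,
    # making the correction terms cheap to estimate accurately.
    est = 0.0
    for l in range(L + 1):
        fine, coarse = sample_level(l, n_per_level[l], rng)
        est += float(np.mean(fine - coarse))
    return est
```

The key point is that the variance of the level-l correction shrinks with l, so most samples can be taken on coarse, cheap levels.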
15:00
Stochastic Proximal Gradient Method in Hilbert Spaces
Caroline Geiersbach | University of Vienna | Austria
For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. These are a class of iterative methods, dating back to Robbins and Monro (1951), that use noisy estimates of function values in place of the true (deterministic) function values, which may be expensive or even intractable to compute. The strengths of these methods are their low computational complexity, low memory requirements, and ease of implementation, but their performance depends strongly on the choice of step size.
Motivated by applications to partial differential equation (PDE) constrained optimization under uncertainty, we examine convergence of the stochastic proximal gradient method for general problems in Hilbert spaces. We assume that the problem can be formulated as the sum of a nonconvex expectation function and a nonsmooth convex function. We show convergence in probability of the random sequence generated by the algorithm to stationary points of the objective function. We demonstrate the algorithm on a model problem constrained by a random PDE, where input terms and coefficients are subject to uncertainty. We additionally show an application to a structural topology optimization problem, where shapes are represented by phase fields and the shape is subjected to random forcing.
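A minimal finite-dimensional sketch of the stochastic proximal gradient step described above, with the l1 norm as the nonsmooth convex part (so the prox is soft thresholding) and Robbins-Monro step sizes; the objective and all names are illustrative, not the Hilbert-space setting of the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_gradient(grad_sample, x0, lam, n_iters, rng):
    # x_{k+1} = prox_{t_k * lam ||.||_1}( x_k - t_k * G(x_k, xi_k) ),
    # with decreasing Robbins-Monro steps t_k = 1 / (k + 1).
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        t = 1.0 / (k + 1)
        g = grad_sample(x, rng)   # noisy gradient of the smooth part
        x = soft_threshold(x - t * g, t * lam)
    return x
```

As a sanity check: for the objective 0.5 E[(x - xi)^2] + lam|x| with xi ~ N(2, 1) and lam = 0.5, the minimizer is soft_threshold(2, 0.5) = 1.5, which the iterates approach.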
15:30
A Quasi-Monte Carlo Method for PDE-Constrained Optimization under Uncertainty
Philipp Guth | University of Mannheim | Germany
Authors:
Philipp Guth | University of Mannheim | Germany
Vesa Kaarnioja | UNSW Sydney | Australia
Claudia Schillings | University of Mannheim | Germany
Frances Kuo | UNSW Sydney | Australia
Ian Sloan | UNSW Sydney | Australia
We consider an optimization problem constrained by a partial differential equation (PDE) equipped with uncertain input coefficients. Based on a risk measure, such as the expected value, a deterministic reformulation of the problem can be stated. We parametrize the uncertain coefficient in the PDE by a countably infinite number of terms via a Karhunen-Loève expansion and consider the expected value as an infinite-dimensional integral over the corresponding parameter space. For the approximation of the expected value, we present a quasi-Monte Carlo rule with error bounds independent of the number of uncertain variables, which achieves faster convergence rates than Monte Carlo methods for integrands that are smooth with respect to the uncertain variables. In addition, quasi-Monte Carlo methods preserve the convexity structure of the optimization problem due to their nonnegative (equal) quadrature weights. Numerical experiments confirming our theoretical convergence results will be presented.
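A tiny illustration of the equal-weight quadrature underlying such rules: a rank-1 (Fibonacci) lattice rule in two dimensions. The generating vector and the test integrand are chosen for illustration only; the talk's rules are constructed for the high-dimensional PDE setting.

```python
import numpy as np

def rank1_lattice(n, z):
    # Rank-1 lattice rule points x_k = frac(k * z / n), k = 0..n-1,
    # with equal quadrature weights 1/n.
    k = np.arange(n)[:, None]
    return np.mod(k * np.asarray(z, dtype=float)[None, :] / n, 1.0)

def qmc_expectation(f, n, z):
    # Equal, nonnegative weights 1/n -- this is the property that
    # preserves convexity of the reformulated optimization problem.
    pts = rank1_lattice(n, z)
    return float(np.mean(f(pts)))
```

For the smooth integrand f(x) = (1 + (x_1 - 0.5)) * (1 + (x_2 - 0.5)) on the unit square, whose exact integral is 1, the Fibonacci lattice with n = 233 and z = (1, 144) already gives a close approximation.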