14:00

A sparse control approach to Optimal Design of Experiments for PDEs

Daniel Walter | Technische Universität München | Germany
**Author:**

Daniel Walter | Technische Universität München | Germany

We address the problem of identifying parameters of a physical process, described by a system of elliptic partial differential equations, from previously collected experimental data. In many applications, data is scarce and polluted by measurement errors. To obtain reliable estimates of the parameters, this uncertainty in the measurements has to be taken into account in the design of the underlying experiment. To this purpose, we formulate an optimal design problem based on the asymptotic covariance matrix of a least-squares estimator for the parameters, which depends on the number, position, and quality of the measurement sensors. The measurement setup is modeled by a sum of Dirac delta functions on the spatial experimental domain, which corresponds to a finite number of pointwise measurements of the PDE solutions. We discuss optimality conditions, the discrete approximation, and the efficient algorithmic solution of the optimal design problem.
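As a rough illustration of the kind of criterion involved (a sketch, not the speaker's formulation), the following computes an A-type design criterion, the trace of the inverse Fisher information, for a weighted set of candidate pointwise sensors. The sensitivity matrix `J`, noise level, and weights are hypothetical:

```python
import numpy as np

def a_criterion(J, w, sigma=0.1):
    """A-optimal design criterion: trace of the inverse Fisher
    information for weighted pointwise sensors. J[i] is the
    sensitivity of the i-th candidate measurement w.r.t. the
    parameters; w[i] >= 0 is the weight of sensor i."""
    M = (J.T * (w / sigma**2)) @ J      # Fisher information matrix
    return np.trace(np.linalg.inv(M))   # trace of asymptotic covariance

rng = np.random.default_rng(0)
J = rng.normal(size=(50, 3))            # 50 candidate locations, 3 parameters
w_all = np.ones(50)                     # all candidate sensors active
w_few = np.zeros(50)
w_few[:5] = 1.0                         # only 5 sensors active
```

Adding sensors enlarges the Fisher information in the Loewner order, so the criterion (the estimator's asymptotic variance) can only decrease.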

14:20

Generalized Information Reuse for Optimization Under Uncertainty of Non-Sample Average Metrics

Laurence Cook | University of Cambridge | United Kingdom
**Authors:**

Laurence Cook | University of Cambridge | United Kingdom

PhD Jerome Jarrett | University of Cambridge | United Kingdom

Prof. Karen Willcox | Massachusetts Institute of Technology | United States

The importance of accounting for uncertainties in the design of engineering systems is becoming increasingly recognized. This requires the behavior of quantities of interest under uncertainty to be quantified at each optimization iteration, a computationally expensive endeavor. When the number of uncertainties is high, Monte Carlo sampling-based uncertainty quantification becomes the most feasible approach, and multi-fidelity methods can significantly reduce its computational cost. Existing multi-fidelity methods in this context rely on being able to express the optimization objective function as a sample average. This is appropriate in many cases (for example, if the objective is based on statistical moments); however, there are metrics for optimization under uncertainty that are desirable to use but cannot be expressed as sample averages, and so cannot directly make use of existing multi-fidelity methods. In this work we consider an "information reuse" multi-fidelity method that treats information computed at previous optimization iterations as lower-fidelity models. We generalize the method to be applicable to non-sample-average metrics. The extension makes use of bootstrapping to estimate not only the error of estimators of the metrics but also the correlation between estimators at different fidelities. In this work we outline this generalized information reuse method, propose solutions to challenges in implementing the approach numerically, and apply it to algebraic test problems and an acoustic horn design problem, where its performance is compared with naive Monte Carlo sampling. We consider the horsetail matching metric and the quantile function as the non-sample-average metrics, and compare optimizations using these with generalized information reuse against optimizations using statistical moments with standard information reuse.
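The bootstrapping idea can be illustrated in isolation (a minimal sketch, not the authors' implementation): resample paired high- and low-fidelity outputs jointly and correlate a non-sample-average statistic such as a quantile. The data and the 0.9-quantile statistic below are hypothetical:

```python
import numpy as np

def bootstrap_corr(hi, lo, stat, n_boot=500, seed=0):
    """Bootstrap estimate of the correlation between a statistic
    (e.g. a quantile) evaluated on paired high- and low-fidelity
    samples. `stat` maps a 1-D sample array to a scalar."""
    rng = np.random.default_rng(seed)
    n = len(hi)
    s_hi, s_lo = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample the pairs jointly
        s_hi[b] = stat(hi[idx])
        s_lo[b] = stat(lo[idx])
    return np.corrcoef(s_hi, s_lo)[0, 1]

rng = np.random.default_rng(1)
x = rng.normal(size=200)
hi = x + 0.1 * rng.normal(size=200)        # "high-fidelity" outputs
lo = x + 0.5 * rng.normal(size=200)        # correlated "low-fidelity" outputs
rho = bootstrap_corr(hi, lo, lambda s: np.quantile(s, 0.9))
```

Because the pairs are resampled with the same indices, the bootstrap replicates capture the between-fidelity correlation that a sample-average formula would otherwise provide in closed form.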

14:40

Robust optimization of PDE constrained systems using a multilevel Monte Carlo method

Andreas Van Barel | KU Leuven | Belgium
**Authors:**

Andreas Van Barel | KU Leuven | Belgium

Prof. Stefan Vandewalle | KU Leuven | Belgium

We consider PDE constrained optimization problems where the partial differential equation has uncertain coefficients modeled by means of random variables or random fields. The goal of the optimization is to determine a robust optimum, i.e., an optimum that is satisfactory in a broad parameter range and as insensitive as possible to parameter uncertainties. Many goal functions can be defined that attempt to solve this problem; they vary in computational cost and in the robustness of the solution. In this talk, we focus on optimizing the expected value of a tracking-type objective with an additional penalty on the variance of the state. The gradient and Hessian corresponding to such a cost functional also contain expected value operators. Since the stochastic space is often high dimensional, a multilevel (quasi-) Monte Carlo method is presented to efficiently calculate the gradient and the Hessian. With a careful construction, the resulting estimated quantities are the exact gradient and Hessian of the estimated cost functional, which is important in practice for some optimization algorithms.

The convergence behavior is illustrated using a gradient and a Hessian based optimization method for a model elliptic diffusion problem with lognormal diffusion coefficient and optionally an additional nonlinear reaction term. The evolution of the variances on each of the levels during the optimization procedure leads to a practical strategy for determining how many and which samples to use. We also investigate the necessary tolerances on the mean squared error of the estimated quantities. Finally, a practical algorithm is presented and tested on a problem with a large number of optimization variables and a large number of uncertainties.
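The telescoping structure underlying a multilevel Monte Carlo estimate of an expected value can be sketched as follows; the level hierarchy below is a hypothetical toy with an explicit discretization bias, not the PDE setting of the talk:

```python
import numpy as np

def mlmc_mean(sampler, n_per_level):
    """Multilevel Monte Carlo estimate of E[Q] via the telescoping sum
    E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]. `sampler(l, n)` returns
    n paired samples (Q_l, Q_{l-1}) driven by the SAME random inputs
    on both levels; on level 0 the coarse term is zero."""
    est = 0.0
    for l, n in enumerate(n_per_level):
        fine, coarse = sampler(l, n)
        est += np.mean(fine - coarse)   # correction term for level l
    return est

# toy hierarchy: Q_l approximates Z**2 with a bias 2**(-l) that
# vanishes as the level grows (hypothetical stand-in for a PDE solve)
rng = np.random.default_rng(2)
def sampler(l, n):
    z = rng.normal(size=n)
    q = lambda lev: z**2 + 2.0**(-lev)
    return q(l), (q(l - 1) if l > 0 else np.zeros(n))

estimate = mlmc_mean(sampler, [4000, 1000, 250])  # many coarse, few fine samples
```

The coupling of fine and coarse samples makes the correction terms low-variance, which is what lets most samples be taken on the cheap coarse levels.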

15:00

Derivative-Free Stochastic Constrained Optimization using Gaussian Processes, with Application to a Scramjet

Friedrich Menhorn | Technical University of Munich | Germany
**Authors:**

Friedrich Menhorn | Technical University of Munich | Germany

Prof. Youssef Marzouk | Massachusetts Institute of Technology | United States

We present recent additions to SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) [Augustin, Marzouk, 2017], a method for stochastic nonlinear constrained derivative-free optimization using a trust region approach and Gaussian processes for noise reduction. SNOWPAC operates on robust optimization problems where sample estimates are used to evaluate the robustness measure. Since the black-box model evaluations underlying the sample estimates are assumed to be expensive, SNOWPAC employs Gaussian process regression to mitigate the impact of noise resulting from small sample sizes. The cost of Gaussian process regression, however, grows cubically with the number of training points. Thus, we propose approximate Gaussian process methods in SNOWPAC to handle large numbers of optimization iterations. Three different methods are implemented and rigorously tested: Subset of Regressors, Deterministic Training Conditional, and Fully Independent Training Conditional. For this, we also propose a new type of regression benchmark derived from the Hock-Schittkowski benchmark [Schittkowski, 2008]. Additionally, we directly compare the methods in SNOWPAC. Finally, we apply SNOWPAC to a complex and expensive scramjet simulation to show its performance and viability.
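As a minimal illustration of the first of these approximations (a sketch with a hypothetical kernel and data, not SNOWPAC's implementation), Subset of Regressors replaces the full n x n kernel matrix by quantities built on m << n inducing points, reducing the O(n^3) cost to O(n m^2):

```python
import numpy as np

def sor_predict(X, y, Xs, Xtest, noise=1e-2, ell=1.0):
    """Subset of Regressors posterior mean for 1-D GP regression:
    mu(x*) = K_{*m} (noise * K_mm + K_mn K_nm)^{-1} K_mn y,
    using m inducing points Xs instead of all n training points X."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell)**2)
    Kmm = k(Xs, Xs)                     # m x m kernel on inducing points
    Kmn = k(Xs, X)                      # m x n cross-kernel
    Ktm = k(Xtest, Xs)                  # test-to-inducing kernel
    A = Kmn @ Kmn.T + noise * Kmm       # m x m system instead of n x n
    return Ktm @ np.linalg.solve(A, Kmn @ y)

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(-3, 3, size=200))
y = np.sin(X) + 0.05 * rng.normal(size=200)
Xs = np.linspace(-3, 3, 15)             # 15 inducing points
mean = sor_predict(X, y, Xs, np.array([0.0, 1.5]))
```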

15:20

Evaluation of Different Robustness Measures for Crashworthiness Problems

Sebastian Thelemann | TU München | Germany
**Authors:**

Sebastian Thelemann | TU München | Germany

Pablo Lozano | TU München | Germany

Michael Pabst | TU München | Germany

Prof. Fabian Duddeck | TU München | Germany

Crashworthiness aims to reduce the number of fatalities and serious injuries in traffic by focusing on occupant protection via new and improved vehicle design and safety countermeasures. Due to uncertainties, there is often a trade-off between improving the performance under nominal conditions and robustness. Uncertainties can be categorized into two groups: aleatoric and epistemic. Aleatoric uncertainties describe random variations of physical properties of a system or product, e.g. manufacturing-based tolerances of material and geometry parameters. Epistemic uncertainties arise from a lack of knowledge or from incomplete and limited valid information, e.g. higher degrees of freedom in the concept phase of products, missing constraints, or simplification of models.

In general, robustness is a measure of a system's insensitivity to input variation and should therefore be integrated into the optimization process. One way of evaluating the robustness of crash-loaded structures, as a basis for more detailed investigations, is to apply the "robustness index" described by Andricevic. This index is a scalar quantity derived by weighting four factors: the degree of achievement of the objectives, the objective's variation around its mean value, the range of that variation, and the distance of unacceptable single results from the design and dimensioning limits.
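The aggregation pattern of such a scalar index can be sketched generically; the factor scores and weights below are hypothetical and do not reproduce Andricevic's exact definition:

```python
import numpy as np

def robustness_index(factors, weights):
    """Generic scalar robustness index: weighted aggregation of
    normalized factor scores (each in [0, 1], where 1 means fully
    robust). Illustrates only the weighting pattern, not the exact
    factors or weights of any published index."""
    factors = np.asarray(factors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(factors @ (weights / weights.sum()))

# hypothetical scores for: goal achievement, variation around the
# mean, range of variation, distance to unacceptable single results
idx = robustness_index([0.9, 0.7, 0.8, 1.0], weights=[2, 1, 1, 2])
```

Normalizing the weights keeps the index in [0, 1] whenever the factor scores are, so designs remain comparable across different weightings.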

A more complex and time-consuming alternative is to conduct a robustness analysis by examining all scatter diagrams and histograms of the derived samples (e.g. via the coefficient of determination, principal component analysis, correlation matrices and correlation coefficients), together with the characteristics of the deformation behavior (e.g. system instabilities such as buckling) and regulatory limits. To reduce complexity, i.e. the number of relevant variables, a suitable methodical ansatz will be presented to evaluate sensitivities and therefore robustness, interpret sensitivity curves, and thus derive a basis for describing robust system behavior.