Linus Seelinger | Ohio State University | United States
One of the central tasks in scientific computing is to accurately approximate unknown target functions. This is typically done with the help of data — samples of the unknown functions. The emergence of Big Data presents both opportunities and challenges. On one hand, big data introduces more information about the unknowns and, in principle, allows us to create more accurate models. On the other hand, data storage and processing become highly challenging. In this talk, we present a set of sequential algorithms for function approximation with extraordinarily large data sets. The algorithms are iterative in nature and involve only vector operations. They use one data sample at each step and can handle dynamic/stream data. We present both the numerical algorithms, which are easy to implement, and a rigorous analysis of their theoretical foundation.
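As a rough illustration of the kind of one-sample-at-a-time update described here, the sketch below fits polynomial coefficients to streamed data with a Kaczmarz-type correction; the monomial basis, step rule, and target function are illustrative assumptions, not the authors' actual algorithms.

```python
import numpy as np

def basis(x, degree=5):
    """Monomial basis evaluated at a scalar x (illustrative choice)."""
    return np.array([x**k for k in range(degree + 1)])

def sequential_fit(data_stream, degree=5):
    """Process one (x, y) sample per step using only vector operations:
    a Kaczmarz-type rank-one correction of the coefficient vector c."""
    c = np.zeros(degree + 1)
    for x, y in data_stream:
        phi = basis(x, degree)
        residual = y - phi @ c              # misfit on the current sample
        c += residual * phi / (phi @ phi)   # correction using this sample only
    return c

# Streamed samples of an "unknown" target function (here sin(pi x) plus noise).
rng = np.random.default_rng(0)
stream = ((x, np.sin(np.pi * x) + 1e-3 * rng.standard_normal())
          for x in rng.uniform(-1, 1, 20000))
print(sequential_fit(stream))
```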
The Role of Stochastic Simulation in Mechanics of Materials at Multiple Scales
Prof. Lori Graham-Brady | Johns Hopkins University | United States
Design of materials requires a common understanding between those who make materials (processors), those who test and characterize materials (experimentalists) and those who analyze and predict material behavior (modelers). The intrinsic requirement is for the team to understand how smaller-scale features, or actors, lead to specific failure and deformation mechanisms, and how these mechanisms compete in determining larger-scale failure. This entire process is further challenged by the many uncertainties that pervade materials by design, from the randomly occurring microstructural actors that drive localization of failure, to the characterization and testing errors introduced by inexact measurements, to the environmental uncertainties that affect the formation of the material during processing. All of these challenges present significant opportunities for the stochastic mechanics community. Probabilistic evaluation of materials characterization data highlights microstructural features that may or may not be properly introduced into models based on that characterization. Assessment of the degree to which localized material response varies within a given structure requires novel stochastic simulation tools. Surrogate models offer a potential alternative to efficiently upscale micro-scale features into macro-scale structural models, based on spatial variations of physical parameters that are directly related to materials processing.
This talk will discuss these tools, highlighting a particular example in brittle materials under high-rate compression, but with some reference to other material classes and loading conditions.
A novel approach for risk-averse structural topology optimization under uncertainties is presented, which takes into account random material properties and random forces. For the distribution of material, a phase field approach is employed which allows for arbitrary topological changes during optimization. The state equation is assumed to be a high-dimensional PDE parametrized in a (finite) set of random variables. For the examined case, linearized elasticity with a parametric elasticity tensor is assumed. For practical purposes it is important to design structures which are also robust to infrequent events. Hence, instead of an optimization with respect to the expectation of the involved random fields, we employ the Conditional Value at Risk (CVaR) in the cost functional during the minimization procedure. Since the treatment of such high-dimensional problems is numerically challenging, a representation in the modern hierarchical tensor train format is proposed. In order to obtain such an efficient representation of the solution of the random state equation, a tensor completion algorithm is employed, which only requires the pointwise evaluation of solution realizations and can thus be considered nonintrusive. The new method is illustrated with numerical examples and compared with a classical Monte Carlo sampling approach.
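For readers unfamiliar with the risk measure, the following sketch shows how CVaR is typically estimated from samples of a cost functional (Rockafellar-Uryasev plug-in form); the sample distribution is purely illustrative and unrelated to the phase-field or tensor-train machinery of the talk.

```python
import numpy as np

def cvar(samples, beta=0.95):
    """Plug-in estimate of the Conditional Value at Risk at level beta:
    CVaR = VaR + E[(X - VaR)_+] / (1 - beta)."""
    q = np.quantile(samples, beta)                    # Value at Risk (VaR)
    return q + np.mean(np.maximum(samples - q, 0.0)) / (1.0 - beta)

# Illustrative cost samples of a structure under random loads/material.
rng = np.random.default_rng(1)
compliance = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)
print("E[J]    =", compliance.mean())
print("CVaR_95 =", cvar(compliance, beta=0.95))
```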
Computationally efficient handling of successive analyses required in structural design optimization under uncertainty
Prof. Dimos Charmpis | University of Cyprus | Cyprus
Reliability-Based Design Optimization (RBDO) and Robust Design Optimization (RDO) are the two most common approaches applied to structural design optimization problems under uncertainty. RBDO is a single-objective optimization procedure that minimizes the weight or cost of the structure under a pre-specified constraint on the failure probability of the final design. RDO, on the other hand, is a multi-objective optimization approach, which focuses on minimizing both the weight/cost of the structure and the sensitivity/variability introduced in the structural response of the final design due to input uncertainties.
The optimization algorithms (mathematical programming, genetic algorithms, etc.) employed nowadays to implement RBDO and RDO approaches, as well as the procedures utilized for reliability/variability estimations (e.g. various Monte Carlo simulation-based methods), involve expensive computations due to the successive linear/nonlinear/eigenvalue/dynamic analyses required. The enormous computational burden associated with such multiple large-scale analyses may be substantially reduced by appropriately handling the tasks that are most demanding in terms of processing power and storage space: the successive systems of linear equations with multiple left- and/or right-hand sides that need to be solved. For this purpose, customized versions of iterative solution methods are presented, which are equipped with appropriate preconditioning techniques to accelerate convergence during successive solutions.
Large-scale RBDO and RDO examples involving multiple linear/nonlinear/eigenvalue/dynamic analyses are used to show that such demanding problems can be handled in a computationally efficient manner. It is demonstrated that, with the appropriate know-how, the use of RBDO and RDO in real-world cases is computationally feasible despite the enormous computing requirements involved.
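A minimal sketch of the underlying idea of reusing one preconditioner across successive solves is given below; the Jacobi preconditioner and random SPD system are illustrative stand-ins, not the customized solvers presented in the talk.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients; M_inv applies a fixed
    preconditioner that is reused across all successive solves."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD "stiffness" matrix and a diagonal (Jacobi) preconditioner
# built once and reused for every right-hand side of the successive analyses.
rng = np.random.default_rng(2)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
M_inv = lambda r: r / np.diag(A)            # Jacobi preconditioner
for rhs in rng.standard_normal((5, n)):     # successive load cases / samples
    x = pcg(A, rhs, M_inv)
    print(np.linalg.norm(A @ x - rhs))
```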
Risk-averse optimal control problem for elliptic PDEs with uncertain coefficients
We consider a risk-averse optimal control problem for an elliptic PDE with uncertain coefficients. The control is a deterministic distributed forcing term and is determined by minimizing the expected L2-distance between the state (the solution of the PDE) and a deterministic target function. An L2-regularization term is added to the cost functional.
We consider a finite element discretization of the underlying PDE and derive an error estimate on the optimal control.
Concerning the approximation of the expectation in the cost functional and the practical computation of the optimal control, we analyze and compare two strategies.
In the first one, the expectation is approximated by either a Monte Carlo estimator or a deterministic quadrature on Gauss points, assuming that the randomness is effectively parametrized by a small number of random variables. Then, a steepest descent algorithm is used to find the discrete optimal control.
The second strategy, named Monte Carlo Stochastic Approximation, is again based on a steepest-descent type algorithm. However, the expectation in the computation of the steepest descent direction is approximated with independent Monte Carlo estimators at each iteration, using possibly a very small sample size. The sample size, and possibly the mesh size in the finite element approximation, may vary during the iterations. We present error estimates and a complexity analysis for both strategies and compare them on a few numerical test cases.
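The sketch below illustrates the stochastic approximation idea on a scalar toy problem: a steepest-descent iteration in which the gradient expectation is re-estimated at every step from a small, independent Monte Carlo sample; the linear "state" model, step-size rule, and sample size are illustrative assumptions rather than the setting of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1e-2          # L2 regularization weight
u_target = 1.0        # deterministic target state

def grad_sample(z, a):
    """Gradient of 0.5*(a*z - u_target)^2 + 0.5*alpha*z^2 for one realization
    of the random coefficient a (toy stand-in for the PDE solve)."""
    return a * (a * z - u_target) + alpha * z

z = 0.0
for k in range(1, 2001):
    a = rng.lognormal(mean=0.0, sigma=0.2, size=4)    # fresh, tiny MC sample
    g = np.mean([grad_sample(z, ai) for ai in a])     # MC gradient estimate
    z -= (1.0 / k) * g                                # Robbins-Monro step size
print("approximate optimal control z =", z)
```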
Two Step Uncertainty Quantification Using Gradient Enhanced Stochastic Collocation for Geometric Uncertainties
Uncertainty quantification (UQ) for fluid-structure interaction problems is challenging in terms of computational cost, time and efficiency. Sampling algorithms like Monte Carlo (MC) are not feasible for these problems due to constraints in computational time and cost. Random geometry variation arises due to manufacturing tolerances, icing, wear and tear during operation, etc. Quantification of these geometric uncertainties is challenging due to the large number of geometric parameters, resulting in a high-dimensional stochastic problem.
A two-step UQ approach using the gradient information obtained by solving the adjoint equation is employed in the current study. A gradient-enhanced stochastic collocation (GESC) approach based on polynomial chaos (PC) is used. The uncertainty in geometry is represented by a Karhunen-Loève expansion with a predefined covariance function. In the first step, using the sensitivity information, the parameters that do not have a major influence on the output quantity of interest (QoI) are identified. A reduction in the dimension of the random space is achieved by setting these parameters as deterministic. The QoI is represented by a PC expansion. A set of collocation points is selected in the stochastic domain, and the FEM model along with the adjoint equation is solved at each of these points to obtain the QoI and the gradient of the QoI with respect to each of the uncertain input parameters. The stochastic collocation (SC) strategy is modified to incorporate the additional gradient information. The deterministic coefficients in the PC expansion of the QoI are determined by a least-squares regression of the system of equations containing the QoI and its gradients. The method is tested for a cylinder in flow with uncertain geometric perturbations, and the uncertainty in the drag (QoI) is evaluated. The accuracy is compared with SC and MC. The method is computationally more efficient than SC and MC, and can be extended to shape optimization problems considering the uncertainty in geometry.
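As a one-dimensional illustration of the gradient-enhanced regression step, the sketch below augments the collocation system with gradient equations and solves for Legendre PC coefficients by least squares; the basis choice, node set, and analytic QoI stand in for the KL-parametrized geometry and the FEM/adjoint solves of the study.

```python
import numpy as np
from numpy.polynomial import legendre as L

def gradient_enhanced_pc(xi, q, dq, degree=4):
    """Fit Legendre polynomial-chaos coefficients of a QoI by least squares,
    using both values q(xi) and gradients dq/dxi at the collocation points."""
    V = L.legvander(xi, degree)                        # rows: Psi_k(xi_i)
    dV = np.column_stack([
        L.legval(xi, L.legder(np.eye(degree + 1)[k]))  # rows: dPsi_k/dxi(xi_i)
        for k in range(degree + 1)
    ])
    A = np.vstack([V, dV])                             # value + gradient equations
    b = np.concatenate([q, dq])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Illustrative QoI with known gradient (stand-in for the FEM/adjoint solves).
xi = np.linspace(-1.0, 1.0, 6)     # collocation points
q = np.exp(xi)                      # QoI samples
dq = np.exp(xi)                     # adjoint-based gradients
print(gradient_enhanced_pc(xi, q, dq))
```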
A clustering method for uncertainty propagation with dependent inputs
Dr. Robin Richardson | CWI Amsterdam | Netherlands
For uncertainty propagation with multivariate inputs, a basic assumption underlying many methods is that the elements of the input vector are mutually independent. We propose a new method, based on clustering, for cases where the inputs are dependent. In this method, the cluster centers and associated cluster sizes are used as nodes and weights for a quadrature rule by which moments of the model output can be efficiently estimated. The computational cost of determining the centers and weights is small. The clustering approach can be used for non-Gaussian inputs as well as for situations where the distribution of the inputs is unknown and only a sample of inputs is available. No fitting of the input distribution is needed.
We demonstrate the performance of the clustering method using test functions and a CFD benchmark case (lid-driven cavity flow). Tests with input dimension up to 16 are included, showing strong performance in tests with high correlation between inputs.
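A minimal sketch of the clustering idea follows: a k-means clustering of the (dependent) input sample supplies quadrature nodes (cluster centers) and weights (relative cluster sizes) for estimating output moments; the correlated Gaussian sample and test function are illustrative, and the authors' actual clustering algorithm may differ.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def clustered_moments(model, input_sample, n_clusters=20):
    """Approximate moments of model(X) for dependent inputs X:
    cluster the input sample, then use the cluster centers as quadrature
    nodes and the relative cluster sizes as weights."""
    centers, labels = kmeans2(input_sample, n_clusters, minit='++')
    weights = np.bincount(labels, minlength=n_clusters) / len(input_sample)
    values = np.array([model(c) for c in centers])
    mean = weights @ values
    second = weights @ values**2
    return mean, second - mean**2        # estimated mean and variance

# Illustrative correlated (dependent) inputs and a test function.
rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
f = lambda x: np.sin(x[0]) + x[1] ** 2
print(clustered_moments(f, X))
```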
Daniel Walter | Technische Universität München | Germany
We address the problem of identifying parameters of a physical process, described by a system of elliptic partial differential equations, from previously collected experimental data. In many applications, data is scarcely available and polluted by measurement errors. To obtain reliable estimates of the parameters, this uncertainty in the measurements has to be taken into account in the design of the underlying experiment. To this end, we formulate an optimal design problem based on the asymptotic covariance matrix of a least-squares estimator for the parameters, which depends on the number, positions, and quality of the measurement sensors. The measurement setup is modeled by a sum of Dirac delta functions on the spatial experimental domain, which corresponds to a finite number of pointwise measurements of the PDE solutions. We discuss optimality conditions, the discrete approximation as well as the efficient algorithmic solution of the optimal design problem.
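As an illustration of the design criterion involved, the sketch below evaluates the trace of the asymptotic least-squares covariance (an A-optimality-type criterion) for candidate sensor weights; the sensitivity matrix and weight vectors are random placeholders for the quantities that would come from the elliptic PDE.

```python
import numpy as np

def a_criterion(weights, J, sigma2=1.0):
    """A-optimality-type criterion: trace of the asymptotic covariance
    sigma^2 * (J^T W J)^{-1} of the least-squares parameter estimator,
    where W = diag(weights) encodes number/quality of pointwise sensors."""
    info = J.T @ np.diag(weights) @ J          # Fisher-type information matrix
    return sigma2 * np.trace(np.linalg.inv(info))

# Illustrative sensitivities dy_i/dq_j of candidate point measurements
# (in the talk these derive from the elliptic PDE solution's parameter derivatives).
rng = np.random.default_rng(5)
J = rng.standard_normal((30, 3))               # 30 candidate sensors, 3 parameters
uniform = np.full(30, 1.0 / 30)                # spread the measurement budget evenly
greedy = np.zeros(30); greedy[:3] = 1.0 / 3    # concentrate it on three locations
print(a_criterion(uniform, J), a_criterion(greedy, J))
```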
Generalized Information Reuse for Optimization Under Uncertainty of Non-Sample Average Metrics
Laurence Cook | University of Cambridge | United Kingdom
PhD Jerome Jarrett | University of Cambridge | United Kingdom
Prof. Karen Willcox | Massachusetts Institute of Technology | United States
The importance of accounting for uncertainties in the design of engineering systems is becoming increasingly recognized. This requires the behavior of quantities of interest under uncertainty to be quantified at each optimization iteration - a computationally expensive endeavor. When the number of uncertainties is high, Monte Carlo sampling-based uncertainty quantification becomes the most feasible approach, and multi-fidelity methods can significantly reduce the computational cost of such an approach. Existing multi-fidelity methods in this context rely on being able to express the optimization objective function as a sample average. This is appropriate in many cases (for example, if the objective is based on statistical moments); however, there are metrics for optimization under uncertainty that are desirable to use but that cannot be expressed as sample averages, and so cannot directly make use of existing multi-fidelity methods. In this work we consider an "information reuse" multi-fidelity method that treats information computed at previous optimization iterations as lower-fidelity models. We generalize the method to be applicable to non-sample-average metrics. The extension makes use of bootstrapping to estimate not only the error of estimators of the metrics but also the correlation between estimators at different fidelities. In this work we outline this generalized information reuse method, propose solutions to challenges in implementing the approach numerically, and apply it to algebraic test problems and an acoustic horn design problem where its performance is compared with naive Monte Carlo sampling. We consider the horsetail matching metric and the quantile function as the non-sample-average metrics, and compare optimizations using these and generalized information reuse with optimizations using statistical moments and standard information reuse.
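The sketch below illustrates the bootstrapping ingredient: estimating the correlation between estimators of a non-sample-average metric (here a quantile) computed from paired samples at two fidelities; the synthetic "fidelities" and the 90% quantile are illustrative assumptions.

```python
import numpy as np

def bootstrap_corr(hi, lo, metric=lambda s: np.quantile(s, 0.9), n_boot=500):
    """Bootstrap the correlation between a metric (e.g. a quantile, which is
    not a sample average) computed on paired high- and low-fidelity samples."""
    rng = np.random.default_rng(6)
    n = len(hi)
    m_hi, m_lo = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)     # resample the same indices for both
        m_hi.append(metric(hi[idx]))
        m_lo.append(metric(lo[idx]))
    return np.corrcoef(m_hi, m_lo)[0, 1]

# Illustrative correlated model outputs (stand-ins for two fidelities).
rng = np.random.default_rng(7)
x = rng.standard_normal(200)
hi_fid = x + 0.1 * rng.standard_normal(200)
lo_fid = x + 0.4 * rng.standard_normal(200)
print(bootstrap_corr(hi_fid, lo_fid))
```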
Robust optimization of PDE constrained systems using a multilevel Monte Carlo method
We consider PDE constrained optimization problems where the partial differential equation has uncertain coefficients modeled by means of random variables or random fields. The goal of the optimization is to determine a robust optimum, i.e., an optimum that is satisfactory in a broad parameter range, and as insensitive as possible to parameter uncertainties. Many goal functions can be defined that attempt to solve this problem. They vary in computational cost and in the robustness of the solution. In this talk, we focus on optimizing the expected value of a tracking type objective with an additional penalty on the variance of the state. The gradient and Hessian corresponding to such a cost functional also contain expected value operators. Since the stochastic space is often high dimensional, a multilevel (quasi-) Monte Carlo method is presented to efficiently calculate the gradient and the Hessian. If one is careful, the resulting estimated quantities are the exact gradient and Hessian of the estimated cost functional, which is important in practice for some optimization algorithms.
The convergence behavior is illustrated using a gradient and a Hessian based optimization method for a model elliptic diffusion problem with lognormal diffusion coefficient and optionally an additional nonlinear reaction term. The evolution of the variances on each of the levels during the optimization procedure leads to a practical strategy for determining how many and which samples to use. We also investigate the necessary tolerances on the mean squared error of the estimated quantities. Finally, a practical algorithm is presented and tested on a problem with a large number of optimization variables and a large number of uncertainties.
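A minimal sketch of the multilevel Monte Carlo telescoping estimator applied to a gradient is shown below; the level-dependent toy gradient, the sample allocation, and the lognormal input are illustrative stand-ins for the PDE-constrained setting of the talk.

```python
import numpy as np

rng = np.random.default_rng(8)

def grad_on_level(level, omega, z):
    """Toy stand-in for the gradient of a tracking-type objective computed
    with a level-dependent discretization (bias decays with the level)."""
    exact = omega * (omega * z - 1.0)                  # "true" per-sample gradient
    return exact + 2.0 ** (-level) * np.sin(omega)     # discretization bias

def mlmc_gradient(z, n_samples=(4000, 1000, 250)):
    """Multilevel Monte Carlo estimate of E[grad J(z, omega)] via the
    telescoping sum of level differences."""
    est = 0.0
    for level, n in enumerate(n_samples):
        omega = rng.lognormal(0.0, 0.3, size=n)        # same samples on both levels
        fine = grad_on_level(level, omega, z)
        coarse = grad_on_level(level - 1, omega, z) if level > 0 else 0.0
        est += np.mean(fine - coarse)
    return est

print(mlmc_gradient(z=0.5))
```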
Derivative-Free Stochastic Constrained Optimization using Gaussian Processes, with Application to a Scramjet
Friedrich Menhorn | Technical University of Munich | Germany
Prof. Youssef Marzouk | Massachusetts Institute of Technology | United States
We present recent additions to SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) [Augustin, Marzouk, 2017], a method for stochastic nonlinear constrained derivative-free optimization using a trust region approach and Gaussian processes for noise reduction. SNOWPAC operates on robust optimization problems where sample estimates are used to evaluate the robustness measure. Since the black-box model evaluations underlying the sample estimates are assumed to be expensive, SNOWPAC employs Gaussian process regression to mitigate the impact of noise resulting from small sample sizes. The cost of Gaussian process regression, however, grows cubically with the number of training points. Thus, we propose approximate Gaussian process methods in SNOWPAC to handle large numbers of optimization iterations. Three different methods are implemented and rigorously tested: Subset of Regressors, Deterministic Training Conditional and Fully Independent Training Conditional. For this, we also propose a new type of regression benchmark derived from the Hock-Schittkowski benchmark [Schittkowski, 2008]. Additionally, we directly compare the methods within SNOWPAC. Finally, we apply SNOWPAC to a complex and expensive scramjet simulation to show its performance and viability.
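For orientation, the sketch below implements the Subset of Regressors predictive mean, one of the three sparse approximations mentioned: an inducing-point approximation whose cost no longer grows cubically with the training set; the kernel, noise level, and data are illustrative, and SNOWPAC's actual implementation is not reproduced here.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between 1D point sets A and B."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

def sor_predict(x_train, y_train, x_induce, x_test, noise=1e-2):
    """Subset of Regressors approximation of the GP predictive mean:
    work with a small set of inducing points instead of the full training set."""
    Kuf = rbf(x_induce, x_train)
    Kuu = rbf(x_induce, x_induce)
    Ksu = rbf(x_test, x_induce)
    A = noise * Kuu + Kuf @ Kuf.T
    return Ksu @ np.linalg.solve(A, Kuf @ y_train)

# Illustrative noisy data (stand-in for noisy robustness-measure evaluations).
rng = np.random.default_rng(9)
x = np.sort(rng.uniform(0, 1, 2000))
y = np.sin(6 * x) + 0.1 * rng.standard_normal(x.size)
x_u = np.linspace(0, 1, 15)        # inducing points
x_star = np.linspace(0, 1, 5)      # prediction points
print(sor_predict(x, y, x_u, x_star))
```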
Evaluation of Different Robustness Measures for Crashworthiness Problems
Crashworthiness aims to reduce the number of fatalities and serious injuries in traffic by focusing on occupant protection via new and improved vehicle design and safety countermeasures. Due to uncertainties, there is often a trade-off between improvement of the performance under nominal conditions and robustness. Uncertainties can be categorized into two groups: aleatoric and epistemic uncertainties. Aleatoric uncertainties describe random variations of physical properties of a system or a product, e.g. manufacturing-based tolerances of material and geometry parameters. The other group arises due to lack of knowledge or incomplete and limited valid information, e.g. higher degrees of freedom in the concept phase of products, missing constraints, or simplification of models.
In general, robustness is a measure of a system's insensitivity to input variation and should therefore be integrated into the optimization process. One way of evaluating the robustness of crash-loaded structures, as a basis for more detailed investigations, is the application of the “robustness index” described by Andricevic. This index is a scalar quantity derived by weighting four factors: the degree of achievement of objectives, the objective's variation around the mean value, the range of variation, and the distance of unacceptable individual results from design and dimensioning limits.
A more complex and time-consuming process is to conduct a robustness analysis by analyzing all scatter diagrams and histograms of the derived samples (e.g. via the coefficient of determination, principal component analysis, correlation matrices and correlation coefficients) and the characteristics of the deformation behavior (e.g. system instabilities such as buckling), as well as using regulatory limits. To reduce complexity, i.e. the number of relevant variables, a suitable methodical ansatz will be presented to evaluate sensitivities and therefore robustness, interpret sensitivity curves, and thus derive a basis for describing robust system behavior.
Prof. Bozidar Stojadinovic | ETH Zurich | Switzerland
In the last decade, we have observed a significant increase in seismicity caused by anthropogenic activities, such as oil and gas extraction, geothermal projects, and CO2 sequestration. Induced seismicity has characteristics that distinguish it from natural seismicity. In particular, the seismic rate is a consequence of an interaction between time-variant anthropogenic activities (e.g. fluid injection) and natural characteristics (e.g. presence of active faults) that are location-dependent. One peculiar difference between natural and induced seismicity is that both hazard and risk are time-dependent. In this talk, we present a non-uniform Poisson process to model induced seismicity associated with deep underground fluid injection. We describe the time-variant rate of the Poisson process as a function of the fluid-injection rate and a set of physical parameters describing the ground characteristics. We treat this set of parameters as random variables, and their uncertainty reflects the source-to-source variability. The model, which has two layers of uncertainties, can therefore be classified as a hierarchical Bayesian model. A major strength of the Bayesian approach is that it allows uncertainties and expert judgments about the ground parameters to be encoded into a joint prior distribution of the model parameters. Moreover, as soon as the project starts and physical information becomes available, the Bayesian framework allows for the computation of the posterior distribution of the ground parameters, the formulation of predictive models for the Poisson process, and borrowing statistical strength from data. After presenting the updating rules and strategies for the proposed Bayesian model, we conclude our talk by presenting a forecast model for predicting the number and the magnitude of induced events for a given future time frame.
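As a schematic of such a hierarchical forecast, the sketch below integrates a time-variant Poisson rate tied to an injection schedule and propagates parameter uncertainty into a predictive event count; the proportional rate model, the injection schedule, and the lognormal parameter distribution are illustrative assumptions, not the model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(10)

def seismicity_rate(t, injection_rate, productivity):
    """Illustrative time-variant rate of the non-uniform Poisson process:
    proportional to the fluid-injection rate through an uncertain
    ground-productivity parameter."""
    return productivity * injection_rate(t)

def forecast_event_count(injection_rate, t_grid, productivity_samples):
    """Hierarchical forecast: for each draw of the ground parameter,
    integrate the rate over the forecast window and draw a Poisson count."""
    counts = []
    for a in productivity_samples:
        rate = seismicity_rate(t_grid, injection_rate, a)
        lam = np.sum(0.5 * (rate[:-1] + rate[1:]) * np.diff(t_grid))  # trapezoid rule
        counts.append(rng.poisson(lam))
    return np.array(counts)

# Illustrative injection schedule (m^3/day) and lognormal parameter uncertainty.
injection = lambda t: np.where(t < 30.0, 2000.0, 0.0)       # 30-day injection
t = np.linspace(0.0, 60.0, 601)
a_post = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=2000)
counts = forecast_event_count(injection, t, a_post)
print("mean forecast events:", counts.mean(),
      "95% interval:", np.percentile(counts, [2.5, 97.5]))
```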
Sampling-free Bayesian inversion with adaptive hierarchical tensor representation
The statistical Bayesian approach is a natural setting to alleviate the inherent ill-posedness of inverse problems by assigning probability densities to the considered calibration parameters.
A sampling-free approach to Bayesian inversion with an explicit representation of the parameter densities is developed.
The delicate task of choosing a suitable prior density is examined in terms of a coordinate transformation based on likelihood information.
The proposed sampling-free approach is discussed in the context of hierarchical tensor representations, which are employed for the adaptive evaluation of a random PDE (the forward problem) in orthogonal chaos polynomials and the subsequent high-dimensional quadrature of the log-likelihood.
This modern compression technique alleviates the curse of dimensionality by hierarchical subspace approximations of the respective low-rank (solution) manifolds.
All required computations can then be carried out efficiently in the low-rank format.
An interesting aspect is the evaluation of the exponential of the Bayesian potential by means of an adaptive Runge-Kutta method with tensors.
All discretization parameters are adjusted adaptively based on a posteriori error estimators or indicators.
Numerical experiments, involving affine and log-normal diffusion, demonstrate the performance and confirm the theoretical results.
Measure-Theoretic Stochastic Inversion of Groundwater Problems
In this work, we consider a recently developed measure-theoretic approach for solving stochastic inverse problems. We show that a sample-based, non-intrusive, computational algorithm produces exact solutions to the stochastic inverse problem using a certain class of surrogate response surfaces. We use adjoint-based techniques to estimate and correct for numerical error in the surrogate while simultaneously increasing the local order of the surrogate response surface. The benefits of the resulting enhanced surrogates are two-fold: we observe an increase in accuracy and a decrease in computational complexity in the computation of probabilities of specified events. The methodology may also be utilized in adaptive error control.
Impacts of forcing due to turbulent boundary layer uncertainty on modal response functions in structural acoustics
Dr. Sheri Martinelli | The Pennsylvania State University | United States
PhD Andrew Wixom | The Pennsylvania State University | United States
PhD Micah Shepherd | The Pennsylvania State University | United States
PhD Stephen Hambric | The Pennsylvania State University | United States
PhD Robert Campbell | The Pennsylvania State University | United States
An understanding of the influence of the fluctuating wall-pressure field beneath a turbulent boundary layer (TBL) is critical in the design of structures subject to fatigue loading or noise radiation (e.g., aircraft, watercraft or automobiles). Many models have been developed to provide a statistical description of this forcing, based mainly on flat plate assumptions augmented by fits to empirical data. One such model, by Corcos [G.M. Corcos / J. Sound. Vib. 6: 59–70, 1967], poses an exponential correlation model that incorporates empirically-derived parameters treated as universal constants. In this work, we expand the process of Corcos in terms of its parameters, which we treat as uncertain, using the technique of Karhunen-Loève. We then incorporate the truncated result into a generalized polynomial chaos (gPC) expansion of the modal response – the solution to the forward model. Given the gPC coefficients for a structure of interest, the resulting representation of the forward model can be used as an efficient means to generate data [Y. Marzouk and D. Xiu / Commun. Comput. Phys. 6(4): 826-847, 2009] in order to estimate the posterior distribution of the unknown parameters. The approach also produces expressions for the structure’s modal response, both eigenvalues and eigenfunctions, in terms of the random variables, permitting the study of the impacts of uncertain forcing on the mode shapes and natural frequencies. Results will be compared to standard estimates which typically assume Gaussian densities on system output.
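For reference, the sketch below evaluates a Corcos-type cross-spectral density with its exponential streamwise/spanwise coherence decay and convective phase; the decay constants shown are commonly quoted nominal values, and it is exactly such empirical parameters that the study treats as uncertain.

```python
import numpy as np

def corcos_csd(phi_pp, omega, xi_stream, xi_span, Uc, alpha1=0.1, alpha3=0.77):
    """Corcos-type cross-spectral density of the TBL wall pressure between two
    points separated by (xi_stream, xi_span): exponential coherence decay in
    each direction plus a streamwise convective phase. alpha1 and alpha3 are
    the empirical decay constants treated as uncertain in the study."""
    decay = np.exp(-alpha1 * np.abs(omega * xi_stream / Uc)
                   - alpha3 * np.abs(omega * xi_span / Uc))
    phase = np.exp(1j * omega * xi_stream / Uc)
    return phi_pp * decay * phase

# Illustrative numbers: single-point spectrum level, frequency, separations,
# and a convection velocity of about 70% of the free-stream speed.
print(corcos_csd(phi_pp=1.0, omega=2 * np.pi * 500.0,
                 xi_stream=0.05, xi_span=0.02, Uc=0.7 * 10.0))
```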
Fast Bayesian model calibration by using non-intrusive interpolating surrogate methods
Dr.ir. Benjamin Sanderse | CWI Amsterdam | Netherlands
A popular method in UQ used to assess epistemic uncertainties of a model is Bayesian model calibration, in which the model discrepancy is calibrated via the parameters of the model. This yields a full probability density function on the parameters, called the posterior. Typically Monte Carlo methods are used to sample from the posterior, but this is computationally intractable for many numerical models.
Therefore, we employ a surrogate model, which is constructed by interpolation. Normally the nodes are chosen with respect to an input pdf. In our calibration framework this approach cannot be applied since the posterior is not known explicitly. We propose a new technique: adaptive construction of the surrogate model with weighted Leja nodes. These nodes are by construction nested, stable, and refine in the region of high posterior density.
The application that we consider is wind turbine wake simulation, where many uncertainties influence the efficiency and lifetime.
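A minimal sketch of the weighted Leja construction follows: each new node greedily maximizes the weight (here an unnormalized posterior stand-in) times the product of distances to previously selected nodes, which yields nested nodes that accumulate in regions of high posterior density; the candidate grid and density are illustrative.

```python
import numpy as np

def weighted_leja(weight, candidates, n_nodes):
    """Greedy weighted Leja sequence: each new node maximizes the weight times
    the product of distances to the nodes already chosen, so the nodes are
    nested and refine where the weight is large."""
    nodes = [candidates[np.argmax(weight(candidates))]]
    for _ in range(n_nodes - 1):
        score = weight(candidates)
        for x in nodes:
            score = score * np.abs(candidates - x)
        nodes.append(candidates[np.argmax(score)])
    return np.array(nodes)

# Illustrative unnormalized posterior density on [0, 1].
posterior = lambda x: np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)
grid = np.linspace(0.0, 1.0, 2001)
print(weighted_leja(posterior, grid, 8))
```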