The evaluation of failure probabilities is a fundamental problem in reliability analysis and risk management of systems with uncertain inputs. We consider systems described by PDEs with random coefficients, together with efficient approximation schemes such as stochastic finite elements, collocation, reduced basis, and advanced Monte Carlo methods. The efficient evaluation and updating of small failure probabilities and rare events remains a significant computational challenge. This mini-symposium brings together tools from applied probability, numerical analysis, and computational science and engineering. We showcase advances in the analysis and computational treatment of rare events and failure probabilities, including variance reduction, advanced meta-models, and multilevel Monte Carlo.
14:00
Cross entropy-based importance sampling for first passage probability estimation of linear structures with parameter uncertainties
Oindrila Kanjilal | Technical University of Munich | Germany
Authors:
Oindrila Kanjilal | Technical University of Munich | Germany
Iason Papaioannou | Technical University of Munich | Germany
Daniel Straub | Technical University of Munich | Germany
Reliability assessment of structures with randomness in both the system parameter specifications and external excitations remains one of the most challenging problems in structural reliability. This contribution presents an efficient importance sampling strategy for estimating the first passage probability in linear systems subjected to Gaussian process excitations and parametric uncertainties. Therein, the failure probability is expressed as an expectation of conditional first passage probabilities, where the conditioning is with respect to the random system parameters. An effective importance sampling (IS) density over the random system parameters is constructed using the cross entropy (CE) method. The CE method optimizes over a parametric family of distributions to determine an IS density that has the minimum Kullback-Leibler divergence from the theoretically optimal sampling density. The CE optimization problem is solved for a series of target densities that gradually approach the optimal IS density. To define these intermediate densities, a smoothing of the conditional first passage probabilities is introduced, wherein the conditional probabilities are evaluated using analytical approximations. Once the IS density associated with the uncertain parameters is obtained, a suitable IS density of the random excitations is introduced to estimate the probability of failure.
14:30
- CANCELED - Kernel-based adaptive models with tuned regularity parameters for rare-event probability estimation
Jean-Marc Bourinet | SIGMA Clermont | France
Author:
Jean-Marc Bourinet | SIGMA Clermont | France
Estimating the probabilities of rare failure events often involves computationally expensive numerical models. Some of these problems can be solved by constructing approximate models trained on sets of data pairs (x_i, y_i), where y_i are outputs of the true numerical model evaluated at points x_i. The talk addresses the adaptive construction of such approximations, known as surrogate models, with sequentially enriched training sets. Kernel-based approximations are investigated in the form of support vector regression based on the ε-insensitive loss function. We specifically consider a Matérn kernel with a tuned regularity parameter and a tuned length scale for each input dimension, in order to address potentially non-smooth models. Several application examples from the field of reliability analysis are presented, which clearly show the benefits of using such a kernel.
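As a minimal sketch of the kernel choice (not the talk's tuned setup), ε-insensitive support vector regression with a Matérn kernel can be assembled in scikit-learn by passing a precomputed Gram matrix to `SVR`; the toy model y = |x| is non-smooth, and all hyperparameter values below are placeholder assumptions rather than tuned ones.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# toy non-smooth model: |x| is only C^0 at the origin
X_train = rng.uniform(-1, 1, size=(60, 1))
y_train = np.abs(X_train).ravel()

# Matérn kernel with a regularity parameter nu and a length scale;
# nu = 1.5 gives once-differentiable sample paths (placeholder values)
kernel = Matern(length_scale=0.3, nu=1.5)

# epsilon-insensitive SVR trained on the precomputed Matérn Gram matrix
model = SVR(kernel="precomputed", C=100.0, epsilon=0.01)
model.fit(kernel(X_train), y_train)

# predict via cross-kernel evaluations against the training points
X_test = np.linspace(-1, 1, 11).reshape(-1, 1)
y_pred = model.predict(kernel(X_test, X_train))
```

In the talk's setting, nu and the per-dimension length scales are themselves tuned, which is what allows the surrogate to adapt its smoothness to the model at hand.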
15:00
Towards a global framework for reliability analysis based on active learning
Maliki Moustapha | ETH Zurich | Switzerland
Authors:
Maliki Moustapha | ETH Zurich | Switzerland
Stefano Marelli | ETH Zurich | Switzerland
Bruno Sudret | ETH Zurich | Switzerland
Since its introduction in the field of reliability analysis, active learning has been increasingly used to solve complex reliability problems at a considerably smaller computational cost. The basic idea is to adaptively build an accurate approximation of the limit-state surface while only sparsely covering the input space. In early contributions, a surrogate model, typically Kriging, was updated through a so-called learning function and then used to estimate the failure probability by Monte Carlo simulation. Since then, a considerable number of methods drawing on this idea have been proposed, often by modifying only one or more elements of this framework.
In this talk, we first conduct a survey of active learning reliability methods. We then identify the basic ingredients that form the backbone of these approaches. Drawing on their similarity, we propose a global framework for active learning reliability that non-intrusively combines three different blocks: surrogate modelling, reliability analysis and a learning function. By choosing each element of the framework judiciously, a solution scheme tailored to a specific class of problems can be devised, e.g. high-dimensional inputs or rare events. Furthermore, we analyze various strategies for multi-constraint problems (system reliability), multi-point enrichment and stopping criteria. These findings are used to implement a global active learning reliability framework in UQLab.
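The three-block backbone described above can be sketched as an AK-MCS-style loop: a Kriging (Gaussian process) surrogate, the classical U learning function, and a Monte Carlo failure-probability estimator. The linear limit state below and all settings are assumptions for illustration, not the talk's framework or UQLab code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# hypothetical limit state: failure when g(x) <= 0
def g(x):
    return 3.0 - x[:, 0] - x[:, 1]

X_mc = rng.normal(size=(10000, 2))   # block 2: Monte Carlo population
X_doe = rng.normal(size=(8, 2))      # initial design of experiments
y_doe = g(X_doe)

for _ in range(30):                  # adaptive enrichment loop
    # block 1: Kriging surrogate of the limit-state function
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(X_doe, y_doe)
    mu, sd = gp.predict(X_mc, return_std=True)
    # block 3: U learning function — small U means the sign of g is uncertain
    U = np.abs(mu) / np.maximum(sd, 1e-12)
    if U.min() >= 2.0:               # classical stopping criterion
        break
    i = np.argmin(U)                 # enrich at the most ambiguous point
    X_doe = np.vstack([X_doe, X_mc[i]])
    y_doe = np.append(y_doe, g(X_mc[i:i + 1]))

pf_hat = np.mean(mu <= 0)            # surrogate-based failure probability
```

For this limit state the reference value is Phi(-3/sqrt(2)), about 0.017, reached here with only a few dozen limit-state evaluations instead of 10,000.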
15:30
- CANCELED - An active learning-based Gaussian process metamodelling strategy for estimating the probability distribution in forward UQ analysis
Ziqi Wang | Guangzhou University | China
Authors:
Ziqi Wang | Guangzhou University | China
Marco Broccardo | ETH Zurich | Switzerland
This talk presents an active learning-based Gaussian process (AL-GP) metamodelling method to estimate the cumulative as well as complementary cumulative distribution function (CDF/CCDF) for forward uncertainty quantification (UQ) problems. Within the field of UQ, previous studies focused on developing AL-GP approaches for rare event probability analysis of expensive black-box solvers. A naive iteration of these algorithms over different CDF/CCDF threshold values would yield a discretized CDF/CCDF. However, this approach inevitably leads to a trade-off between accuracy and computational efficiency, since the two depend in opposite ways on the selected discretization. In this study, a specialized error measure and a learning function are developed such that the resulting AL-GP method can efficiently estimate the CDF/CCDF over a specified range of interest without an explicit dependency on the discretization. In particular, the proposed AL-GP method simultaneously provides accurate CDF and CCDF estimates in their median-to-low probability regions. Three numerical examples are introduced to test and verify the proposed method.
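The discretized baseline that the proposed method improves upon can be sketched as follows: a single GP surrogate, trained once, is thresholded over a fixed grid to produce a discretized CDF/CCDF, whose resolution is tied to the grid spacing. The toy model and all settings below are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# hypothetical expensive forward model Y = h(X); a cheap stand-in here
def h(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# non-adaptive design of experiments (the AL-GP method enriches this
# adaptively via its specialized error measure and learning function)
X_doe = rng.normal(size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
gp.fit(X_doe, h(X_doe))

# discretized CDF/CCDF: one indicator estimate per grid threshold,
# so accuracy and cost both hinge on the chosen grid resolution
X_mc = rng.normal(size=(20000, 2))
y_hat = gp.predict(X_mc)
thresholds = np.linspace(-1.5, 3.0, 10)
cdf = np.array([np.mean(y_hat <= t) for t in thresholds])
ccdf = 1.0 - cdf
```

Refining the grid sharpens the estimated curves but, in the rare-event setting where each threshold demands its own AL-GP run, multiplies the cost — the trade-off the talk's discretization-free formulation removes.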