16:10
Inspection and repair of deteriorating structural systems: policy optimization with a heuristic approach
Elizabeth Bismut | TUM | Germany
Authors:
Elizabeth Bismut | TUM | Germany
Prof. Daniel Straub | TUM
Deterioration affects all infrastructure that supports transport networks and energy supply, and can lead to damage and failure. The prediction of deterioration effects in structural systems is associated with high uncertainty. Regular inspections and monitoring systems can reduce this uncertainty, at a cost. There is therefore an economic case for optimizing the monitoring and maintenance plans. For multi-component systems, the number of inspection and maintenance strategies increases exponentially with the number of components, and identifying the strategy with the lowest total life-cycle cost is a PSPACE-complete problem. For realistic structures, the optimization problem can only be solved approximately. We propose to approximate the solution with a heuristic approach, in which decision policies are defined by a set of parameters, both discrete – such as the number of components inspected – and continuous. The reliability of the system is computed by modeling the deterioration process of each component of the system with a hierarchical dynamic Bayesian network, where the states of the random variables are discretized. The probability of failure of the system is updated with the inspection and repair history by Bayesian inference, here reduced to linear matrix operations. The expected life-cycle cost of a decision policy is estimated with Monte Carlo simulation over the inspection and repair history. The fitness of the approximate solution to the optimization problem depends on the choice of heuristic. In particular, the heuristic should take into account a proxy for the value of information of inspecting a particular component at a point in time, given the past inspection and repair history of the structure. To reduce the number of function evaluations, we also hypothesize that the objective function is convex.
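The policy-evaluation step described in the abstract (Monte Carlo estimation of the expected life-cycle cost of a parametric heuristic policy over simulated inspection and repair histories) can be sketched in a few lines. This is a deliberately toy model: the deterioration process, cost figures, thresholds, and the ranking rule for choosing which components to inspect are all illustrative assumptions, not the model of the paper.

```python
import random

def simulate_life_cycle_cost(interval, n_inspect, n_components=5,
                             horizon=50, c_insp=1.0, c_rep=10.0,
                             c_fail=1000.0, seed=None):
    """One Monte Carlo history of a heuristic policy: inspect n_inspect
    components every `interval` years, repair any found above a damage
    threshold.  All numbers are illustrative, not from the paper."""
    rng = random.Random(seed)
    damage = [0.0] * n_components
    cost = 0.0
    for t in range(1, horizon + 1):
        # stochastic deterioration increment for each component
        for i in range(n_components):
            damage[i] += rng.expovariate(5.0)
        if t % interval == 0:
            # rank components by current damage (a crude stand-in for a
            # value-of-information ranking; real damage is unobserved)
            order = sorted(range(n_components), key=lambda i: -damage[i])
            for i in order[:n_inspect]:
                cost += c_insp
                if damage[i] > 0.5:   # repair threshold (assumed)
                    cost += c_rep
                    damage[i] = 0.0
        # system failure if any component exceeds its failure level
        if any(d > 1.0 for d in damage):
            cost += c_fail
            break
    return cost

def expected_cost(interval, n_inspect, n_samples=200, seed=0):
    """Monte Carlo estimate of the expected life-cycle cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += simulate_life_cycle_cost(interval, n_inspect,
                                          seed=rng.random())
    return total / n_samples
```

An outer optimizer would then search over the policy parameters (here `interval` and `n_inspect`) for the lowest estimated expected cost.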
16:12
Bayesian Updating of Rare Events with Meta Model-Based Reliability Methods
Max Ehre | TU München | Germany
Authors:
Max Ehre | TU München | Germany
Dr. Iason Papaioannou
Prof. Daniel Straub
Data on the performance of engineering systems can be used to update system parameters and predictions of the system reliability through a Bayesian analysis. The recently introduced BUS (Bayesian Updating with Structural reliability methods) framework allows for the solution of Bayesian inference-type problems by means of structural reliability methods. In brief, it recasts acceptance-rejection sampling as a reliability problem. This has two advantages. First, reliability methods such as subset simulation or importance sampling can be incorporated in order to maintain computational efficiency even for problems with peaky likelihood functions. Second, it allows data assimilation routines to be natively integrated in the reliability analysis workflow by using the same method for both tasks. Since most reliability methods require a considerable number of typically costly evaluations of the numerical model, it is desirable to employ meta-model formulations that reproduce model outcomes as accurately as possible at a small fraction of the computational expense. Meta model-based reliability methods use a limited number of model evaluations to fit/learn the parameters of a mathematical model and then employ the latter to perform the reliability assessment. We employ spectral decomposition-based meta models, which - up to slight deviations in formalism and construction - are based on the common concept of constructing functional bases with respect to the input random variable space and projecting the numerical model onto them using an experimental design. Combining reliability methods with spectral meta models within the BUS framework, we aim at obtaining accurate and efficient estimates of rare event probabilities conditional on system performance data.
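The core BUS idea, recasting acceptance-rejection sampling as the "failure" event {p <= L(theta)/c} with an auxiliary uniform variable p, can be sketched in a few lines. The prior, likelihood, and constant c below are illustrative assumptions chosen to form a 1-D conjugate toy problem whose posterior is known analytically.

```python
import math
import random

def bus_rejection_posterior(n=20000, seed=1):
    """Minimal sketch of BUS: Bayesian updating recast as the rare
    event {p <= L(theta)/c}, p ~ U(0,1).  Toy setup: standard-normal
    prior, Gaussian likelihood around an observed value y (assumed
    numbers, for illustration only)."""
    rng = random.Random(seed)
    y, sigma = 1.0, 0.5
    def likelihood(theta):
        return math.exp(-0.5 * ((y - theta) / sigma) ** 2)
    c = 1.0  # any c >= max likelihood works, so that L/c <= 1
    accepted = []
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)      # prior sample
        p = rng.random()                 # auxiliary uniform variable
        if p <= likelihood(theta) / c:   # the 'failure' event of BUS
            accepted.append(theta)       # posterior sample
    return accepted
```

In the actual framework, plain rejection sampling is replaced by subset simulation or importance sampling on the same limit-state function, which is what keeps the approach efficient for peaky likelihoods.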
16:14
Uncertainty reduction for complex systems with higher dimensional decomposed, optimal solution spaces
Marco Daub | TU München | Germany
Author:
Marco Daub | TU München | Germany
Complex engineering systems must meet several requirements. These requirements are often unknown in early development phases and can only be estimated. Instead of searching for a single optimal and feasible design, the aim of the approach presented here is to find a set of feasible designs by identifying a so-called solution space. Recent related studies have focused on seeking optimal box-shaped solution spaces, which are maximal with respect to their volume and allow a decoupled consideration of all design variables. However, such boxes typically cover only a small portion of the complete solution space. Accepting a certain coupling of the corresponding design variables can therefore reduce this loss of solution space. Mathematically speaking, a specific subset of the complete solution space is sought that can be decomposed into higher-dimensional sets whose Cartesian product has maximal volume. For that purpose, an algorithm to solve this optimization problem is presented after a short theoretical introduction to the background.
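The box-shaped special case mentioned above can be illustrated with a brute-force search: find the axis-aligned box of maximal volume inside a feasible domain. The 2-D constraint, the grid resolution, and the search itself are illustrative assumptions, not the algorithm of the paper.

```python
import itertools

def maximal_feasible_box(constraint, lo=0.0, hi=1.0, steps=20):
    """Brute-force sketch of seeking a maximal-volume box-shaped
    solution space [a1,b1] x [a2,b2] inside the feasible domain of a
    2-D design problem.  Valid for constraints that are monotone in
    both variables, so feasibility of the 'worst' corner (b1, b2)
    implies feasibility of the whole box."""
    grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    best, best_vol = None, -1.0
    for a1, b1 in itertools.combinations(grid, 2):
        for a2, b2 in itertools.combinations(grid, 2):
            if constraint(b1, b2):           # check the worst corner
                vol = (b1 - a1) * (b2 - a2)
                if vol > best_vol:
                    best_vol, best = vol, (a1, b1, a2, b2)
    return best, best_vol

# illustrative feasible set: x1 + x2 <= 1 on [0,1]^2
box, vol = maximal_feasible_box(lambda x1, x2: x1 + x2 <= 1.0)
```

For this constraint the optimum is the box [0, 0.5] x [0, 0.5] with volume 0.25; generalizing the box to a Cartesian product of higher-dimensional sets is exactly the relaxation the talk addresses.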
16:16
Cross entropy-based importance sampling in low and high dimensions with a new mixture model
Sebastian Geyer | TU München | Germany
Authors:
Sebastian Geyer | TU München | Germany
Dr. Iason Papaioannou | TU München | Germany
Prof. Daniel Straub | TU München | Germany
In structural reliability analysis, the failure event of an engineering structure is defined through a potentially high-dimensional probability integral. In most cases, this integral cannot be solved analytically. The probability of failure is thus often estimated with Monte Carlo-based sampling approaches due to their ability to cope with complex numerical models. Alternatively, the importance sampling (IS) method significantly improves the efficiency of crude Monte Carlo, provided that an effective IS proposal density is chosen. The efficiency of IS with Gaussian proposal densities decreases with an increasing number of random variables; this is due to the fact that the probability mass in an equivalent high dimensional standard normal space concentrates around the surface of a hypersphere, known as the important ring. To account for this behavior, the von Mises-Fisher mixture model has been proposed as parametric density in IS for high-dimensional problems. The parameters of this distribution model can be estimated through application of the Cross Entropy (CE) method. The CE method is an adaptive sampling approach that determines the sampling density through minimizing the Kullback-Leibler divergence between the theoretically optimal IS density and a chosen parametric family of distributions. In this contribution, we combine the von Mises-Fisher distribution with the Nakagami distribution, which acts as the IS density for the radius of the hypersphere. For the parameter updating of the new mixture model within the CE method, we propose a modified version of the Expectation-Maximization (EM) algorithm and introduce the corresponding updating rules. Our study shows that the proposed mixture model enables efficient IS in both low- and high-dimensional component and system reliability problems.
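The cross-entropy recursion described above can be illustrated on a 1-D toy problem. Note that this sketch uses a single Gaussian as the parametric family instead of the von Mises-Fisher/Nakagami mixture of the paper (whose benefit appears only in high dimensions); the target event, sample sizes, and quantile level are illustrative assumptions.

```python
import math
import random

def ce_importance_sampling(threshold=3.0, n=5000, rho=0.1, seed=2):
    """Sketch of the cross-entropy (CE) method with a Gaussian IS
    family.  Target: P(X > threshold) for X ~ N(0,1), with true value
    about 1.35e-3 for threshold = 3."""
    rng = random.Random(seed)
    mu, sig = 0.0, 1.0
    def phi_ratio(x, mu, sig):
        # density ratio N(0,1)/N(mu, sig^2), i.e. the IS weight
        return sig * math.exp(-0.5 * x * x
                              + 0.5 * ((x - mu) / sig) ** 2)
    for _ in range(10):                      # adaptive CE levels
        xs = [rng.gauss(mu, sig) for _ in range(n)]
        xs.sort(reverse=True)
        gamma = min(threshold, xs[int(rho * n)])   # intermediate level
        elite = [x for x in xs if x >= gamma]
        w = [phi_ratio(x, mu, sig) for x in elite]
        # weighted-moment updates: the analytic minimizer of the
        # Kullback-Leibler divergence for a Gaussian family
        mu = sum(wi * x for wi, x in zip(w, elite)) / sum(w)
        sig = math.sqrt(sum(wi * (x - mu) ** 2
                            for wi, x in zip(w, elite)) / sum(w))
        if gamma >= threshold:
            break
    # final IS estimate with the fitted proposal
    xs = [rng.gauss(mu, sig) for _ in range(n)]
    return sum(phi_ratio(x, mu, sig) for x in xs if x > threshold) / n
```

In high dimensions the Gaussian family degrades for the geometric reason given in the abstract (probability mass concentrating on the important ring), which is what motivates the directional von Mises-Fisher component plus a radial distribution such as the Nakagami.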
16:18
Risk assessment with enhanced Bayesian network: application to hydropower station
Hector Diego Estrada-Lugo | University of Liverpool | United Kingdom
Authors:
Hector Diego Estrada-Lugo | University of Liverpool | United Kingdom
Prof. Edoardo Patelli | Institute for Risk and Uncertainty | United Kingdom
Technological facilities can be seriously affected by extreme weather conditions, among other causes, resulting in NaTech events (technological disasters triggered by natural hazards). The uncertainty associated with the effects of global warming also needs to be taken into account in the vulnerability quantification, since it is not negligible. For these reasons, risk assessments should be carried out to evaluate the causes of past accidents or to provide information to prevent them from happening. The Bayesian Network (BN) method has increasingly been used to perform such risk assessments, proving to be a reliable and powerful tool. However, traditional BNs are restricted to discrete and Gaussian distributions, which leads to discretization of continuous data and a loss of information. Furthermore, the high uncertainty of the variables involved leads to imprecise predictions and analyses with low-veracity results. In response, BNs enhanced with reliability methods (EBN) have been found to overcome the limitations of traditional BNs and to quantify the uncertainty affecting the outputs that results from the imprecision and random nature of the available data. Both discrete and continuous probability distributions can be implemented in the method, so that the veracity of the risk assessment is as high as possible. The EBN method has been implemented in the general-purpose software OpenCossan. In this project, climate factors such as heavy rain, wind speed and flooding are considered as conditions that can trigger severe accidents. Data from the Lianghekou hydropower station in southwest China are used for this case study; where information is missing, suitable data have been selected from additional literature. Based on the results obtained, future research will address the limitations of the presented method.
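The plain discrete-BN machinery underlying the approach (before the reliability-method enhancement) can be sketched with a minimal network. The structure Rain -> Flood -> Accident and every conditional probability below are invented for illustration and are not from the case study.

```python
from itertools import product

# Minimal discrete Bayesian network: Rain -> Flood -> Accident.
# All probabilities are purely illustrative, not case-study data.
p_rain = {True: 0.3, False: 0.7}
# p_flood[rain][flood]: conditional probability table for Flood
p_flood = {True: {True: 0.6, False: 0.4},
           False: {True: 0.05, False: 0.95}}
# p_acc[flood][accident]: conditional probability table for Accident
p_acc = {True: {True: 0.2, False: 0.8},
         False: {True: 0.01, False: 0.99}}

def joint(rain, flood, acc):
    """Joint probability via the chain rule on the network structure."""
    return p_rain[rain] * p_flood[rain][flood] * p_acc[flood][acc]

def posterior_rain_given_accident():
    """P(Rain | Accident) by exhaustive enumeration (marginalizing
    out Flood) -- feasible because the toy network is tiny."""
    num = sum(joint(True, f, True) for f in (True, False))
    den = sum(joint(r, f, True)
              for r, f in product((True, False), repeat=2))
    return num / den
```

The EBN extension replaces the discrete conditional probability tables with reliability analyses over continuous (and imprecise) variables, which is what exhaustive enumeration of this kind cannot handle.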
16:20
Probabilistic reduced-order modeling for stochastic partial differential equations
Constantin Grigo | TU München | Germany
Authors:
Constantin Grigo | TU München | Germany
Prof. Phaedon-Stelios Koutsourelakis | TU München | Germany
We discuss a Bayesian formulation of coarse-graining (CG) for PDEs whose coefficients (e.g. material parameters) exhibit random, fine-scale variability. The direct solution of such problems requires grids fine enough to resolve this variability, which unavoidably entails the repeated solution of very large systems of algebraic equations. We establish a physically inspired, data-driven coarse-grained model that learns an effective, low-dimensional representation of the underlying random field that is predictive of the fine-grained (FG) model response. This ultimately makes it possible to replace the computationally expensive FG model with a generative probabilistic model based on evaluating the much cheaper CG model several times. Moreover, the model yields probabilistic rather than single-point predictions, which enables the quantification of the unavoidable epistemic uncertainty present due to the information loss that occurs during coarse-graining. This in turn allows for an adaptive refinement of the CG model discretization until a user-specified model accuracy is reached.
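A minimal stand-in for such a probabilistic surrogate is Bayesian linear regression, which likewise returns a predictive variance alongside the mean; the 1-D model and all prior/noise settings below are illustrative assumptions, not the paper's CG construction.

```python
def bayesian_linear_surrogate(xs, ys, tau2=100.0, sigma2=0.01):
    """1-D Bayesian linear regression (y = w*x + eps) as a toy
    probabilistic surrogate: predictions carry a variance that plays
    the role of the epistemic uncertainty from information loss.
    Conjugate update for w with prior N(0, tau2) and noise var sigma2;
    all hyperparameters are illustrative."""
    sx2 = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    post_var = 1.0 / (sx2 / sigma2 + 1.0 / tau2)   # posterior var of w
    post_mean = post_var * sxy / sigma2            # posterior mean of w
    def predict(x):
        # predictive mean, and variance = epistemic + noise part
        return post_mean * x, post_var * x * x + sigma2
    return predict
```

The adaptive-refinement loop sketched in the abstract would monitor this predictive variance and refine the CG discretization wherever it exceeds the user-specified tolerance.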
16:22
Interpolative Approach to UQ of Non-Smooth Random Quantities in the Nonlinear Schrodinger Equation
Amir Sagiv | Tel Aviv University | Israel
Authors:
Amir Sagiv | Tel Aviv University | Israel
Prof. Gadi Fibich | Tel Aviv University | Israel
Prof. Adi Ditkowski | Tel Aviv University | Israel
We present a new interpolative, spectral approach for the numerical uncertainty quantification (UQ) of non-smooth properties of a random PDE. This approach is applied to the study of the Nonlinear Schrodinger equation (NLS), yielding new results in the field of nonlinear optics [1].
Given a nonlinear PDE with low-dimensional, smooth, random terms, the standard approach for computing the solution’s mean and moments is the polynomial chaos expansion (PCE). Due to its spectral nature, the PCE converges poorly for discontinuous functions. Our research suggests, however, that the interpolant obtained by the PCE algorithm can be the departure point for a UQ method for non-smooth functions, when these depend on the smoothly varying random solution.
The PCE interpolant of the NLS solution allows us to compute the statistical parameters of non-smooth global functionals of the solution, e.g., a coherent laser beam’s complex phase. We sample the PCE interpolant arbitrarily many times at a low computational cost. Exploiting the PCE’s pointwise convergence, we obtain “Monte-Carlo”-like results at the computational cost of a spectral method!
The PCE interpolant itself can be more informative than the computed moments. For example, we study the random collision of solitons in the NLS. In this case, interpolating the solution in the random parameter space reveals that there is a finite number of possible outcomes for these collisions. A standard UQ analysis, based on moments and distributions, is therefore inadequate for this study.
For the computation of the probability distribution function (PDF) of a non-smooth variable, we show that local interpolants of the random solution, such as hp-pc gPC and splines, are preferable. This is due to their less oscillatory behavior, and to the PDF's sensitivity to small errors in close-to-zero derivatives.
[1] https://arxiv.org/abs/1705.01137
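The central trick of the abstract above, sampling a cheap interpolant of the smooth solution map to obtain Monte-Carlo-like statistics of a non-smooth functional, can be sketched with a Chebyshev interpolant standing in for the PCE interpolant. The toy "solution map" sin(2a) and the non-smooth functional sign(u) are illustrative assumptions.

```python
import math
import random

def chebyshev_interpolant(f, n=12):
    """Polynomial interpolant on Chebyshev nodes over [-1, 1], a
    stand-in for the PCE interpolant of a smooth solution map.
    Plain Lagrange evaluation; fine for small n."""
    nodes = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    vals = [f(x) for x in nodes]
    def p(x):
        total = 0.0
        for j in range(n):
            lj = 1.0
            for m in range(n):
                if m != j:
                    lj *= (x - nodes[m]) / (nodes[j] - nodes[m])
            total += vals[j] * lj
        return total
    return p

# smooth solution map in the random parameter a; the quantity of
# interest sign(u) is non-smooth in a
u = chebyshev_interpolant(lambda a: math.sin(2.0 * a))
rng = random.Random(3)
# 'Monte-Carlo'-like sampling of the cheap interpolant
samples = [u(rng.uniform(-1.0, 1.0)) for _ in range(20000)]
p_positive = sum(1 for s in samples if s > 0.0) / len(samples)
```

The 20 000 evaluations hit only the interpolant, never the (expensive) PDE solver, which is the source of the "Monte-Carlo results at spectral cost" claim; here P(u > 0) should come out near 0.5.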
16:24
Transdimensional MCMC algorithms for Bayesian inference of random fields
Felipe Uribe | TU München | Germany
Authors:
Felipe Uribe | TU München | Germany
Dr. Iason Papaioannou | TU München | Germany
Wolfgang Betz | TU München | Germany
Prof. Daniel Straub | TU München | Germany
The reduction of uncertainty in model properties is of primary concern in the scientific and engineering community. Observed data can be used in combination with mathematical models to obtain information about these properties. System properties often vary in space, and random field discretization techniques are typically required for their representation. A major challenge lies in selecting the number of terms in the representation that provides an accurate approximation. In this context, hierarchical Bayesian inference provides a general framework that specifies a prior distribution for both the number of terms in the random field discretization (the dimension) and the uncertain parameters. This type of inference can be handled using Markov chain Monte Carlo (MCMC) algorithms that can operate in spaces of varying dimension and simultaneously update the dimension and the parameters.
In this study, the Karhunen-Loève expansion is applied for the representation of the target random field, with the truncation order of the expansion treated as a hyperparameter. A two-stage MCMC algorithm and a Metropolis-within-Gibbs algorithm, both able to explore the space by jumping between different dimensions, are used for the Bayesian analysis. Furthermore, a strategy to improve the efficiency of the algorithms is discussed. The proposed approach is tested on a cantilever beam with spatially variable flexibility in order to demonstrate its accuracy and performance.
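The truncated Karhunen-Loève representation whose order is treated as a hyperparameter can be illustrated with Brownian motion on [0, 1], one of the few random fields with closed-form KL eigenpairs. Both this choice of field and the sketch itself are illustrative, not the field or algorithm of the study.

```python
import math
import random

def kl_truncated_brownian(n_terms, ts, rng):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0,1]:
    W_t ~ sum_k xi_k * sqrt(2)/((k-1/2)*pi) * sin((k-1/2)*pi*t),
    with i.i.d. standard-normal coefficients xi_k.  n_terms is the
    truncation order treated as a hyperparameter."""
    xis = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    path = []
    for t in ts:
        u = 0.0
        for k in range(1, n_terms + 1):
            freq = (k - 0.5) * math.pi
            u += xis[k - 1] * (math.sqrt(2.0) / freq) * math.sin(freq * t)
        path.append(u)
    return path

def captured_variance(n_terms, t):
    """Variance at time t captured by the first n_terms KL modes;
    converges monotonically to Var[W_t] = t as n_terms grows."""
    return sum(2.0 / (((k - 0.5) * math.pi) ** 2)
               * math.sin((k - 0.5) * math.pi * t) ** 2
               for k in range(1, n_terms + 1))
```

The monotone growth of the captured variance with the truncation order is exactly the accuracy/dimension trade-off that the hierarchical prior over the dimension, explored by the transdimensional samplers, is meant to resolve.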