The foundations of predictive computational science involve the development of methods that address all of the sources of uncertainty in the prediction of events in the physical universe: 1) the system of logic that is of sufficient depth to rigorously manage and
quantify uncertainty; 2) the uncertainty in observational data; 3) the uncertainty in model selection and how to cope with model inadequacy; 4) uncertainty in model parameters; and 5) uncertainty due to discretizations of proposed mathematical models. To these issues one must add the development of efficient computational methods to handle uncertainties and to solve the often large stochastic systems that arise, including parallel sampling methods, stochastic solvers, and methods for statistical inverse analysis. This presentation describes general Bayesian approaches that address all of these sources of uncertainty through use of the Occam Plausibility Algorithm, OPAL, an adaptive process
that enables selection of plausible models among sets of model classes while also addressing model inadequacies through model validation processes. Applications of these methods to models of tumor growth in living tissue, coarse-graining of molecular models, and other areas are discussed.
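As a concrete illustration of the model-plausibility computation on which such Bayesian model selection rests (a minimal sketch only, not OPAL's adaptive iteration over model classes; the one-parameter models, data, prior and noise level below are hypothetical), the posterior plausibility of each model class is proportional to its evidence:

import numpy as np
from scipy import stats, integrate

# Hypothetical data and two candidate one-parameter model classes
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 15)
data = 2.0 * t**2 + rng.normal(0.0, 0.1, t.size)
models = {"linear": lambda a: a * t, "quadratic": lambda a: a * t**2}

def log_evidence(model, a_grid=np.linspace(-5.0, 5.0, 2001)):
    # marginal likelihood p(data | model) by quadrature over the single parameter
    prior = stats.norm(0.0, 2.0).pdf(a_grid)
    like = np.array([np.prod(stats.norm(model(a), 0.1).pdf(data)) for a in a_grid])
    return np.log(integrate.trapezoid(like * prior, a_grid))

log_ev = np.array([log_evidence(m) for m in models.values()])
plausibility = np.exp(log_ev - log_ev.max())
plausibility /= plausibility.sum()              # equal prior model probabilities assumed
print(dict(zip(models, plausibility)))          # posterior model plausibilities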
Challenges in Bayesian Uncertainty Quantification and Propagation for Structural Dynamics Simulations
Prof. Costas Papadimitriou | University of Thessaly | Greece
Bayesian analysis provides a logical framework for quantifying and propagating uncertainties in structural dynamics simulations, integrating physics-based models and experimental/monitoring data. The framework can also be used to optimally allocate experimental resources for maximizing the information contained in the data for the purpose of uncertainty quantification and propagation (UQ+P). Bayesian tools such as Laplace asymptotic
approximations and sampling algorithms require a moderate to very large number of system re-analyses to be performed. Computational demands may become excessive, depending on the model complexity, the time required to perform a simulation, and the number of model runs. The process of data-driven UQ+P is important for structural health diagnosis,
and decision making for cost-effective system design and maintenance actions that meet performance/safety requirements.
This lecture will cover selected theoretical and computational developments of a Bayesian UQ+P framework for structural dynamics simulations. Theoretical challenges related to model embedded uncertainty and model prediction uncertainty will first be addressed for properly quantifying the uncertainty in the structural model parameters. Then computational challenges encountered in large-order finite element (FE) models of hundreds of
thousands or millions of degrees of freedom, and/or localized nonlinear actions activated during system operation, will be addressed. Efficient model reduction techniques, consistent with the model parameterization, will be presented to drastically speed up computations within the Bayesian UQ+P framework. Finally, a computationally efficient asymptotic approximation will be proposed to simplify information-based utility functions used for optimizing the placement of sensors in a structure. The structure of the approximation provides insight into the use of the prediction error spatial correlation to avoid sensor clustering, as well as the effect of the prior uncertainty on the optimal sensor configuration. Applications will focus on the use of the framework for FE model selection/calibration, as well as structural health monitoring using vibration measurements.
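For readers unfamiliar with the Laplace asymptotic approximation mentioned above, the sketch below illustrates the basic idea on a hypothetical two-parameter calibration problem (model, data and prior are illustrative stand-ins, not the structural dynamics framework of the talk): the posterior is approximated by a Gaussian centered at the MAP estimate, with covariance given by the inverse Hessian of the negative log-posterior.

import numpy as np
from scipy import optimize

def laplace_posterior(neg_log_post, theta0, eps=1e-4):
    # MAP estimate plus finite-difference Hessian -> Gaussian posterior approximation
    theta_map = optimize.minimize(neg_log_post, theta0, method="Nelder-Mead").x
    d = len(theta_map)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (neg_log_post(theta_map + ei + ej) - neg_log_post(theta_map + ei - ej)
                       - neg_log_post(theta_map - ei + ej) + neg_log_post(theta_map - ei - ej)) / (4.0 * eps**2)
    return theta_map, np.linalg.inv(H)          # approximate posterior mean and covariance

# Hypothetical linear response model fitted to noisy data, with a Gaussian prior
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
data = 2.0 * t + 1.0 + rng.normal(0.0, 0.1, t.size)
neg_log_post = lambda th: (0.5 * np.sum(((data - (th[0] * t + th[1])) / 0.1) ** 2)
                           + 0.5 * np.sum(th**2 / 10.0))
mean, cov = laplace_posterior(neg_log_post, np.zeros(2))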
Dr. Isabell Franck | IPT - Insight Perspective Technologies GmbH / TUM | Germany
Prof. Stelios Koutsourelakis | TUM Professur fuer Kontinuumsmechanik | Germany
While calibration can almost always be achieved, it becomes problematic if the underlying model is incorrect, which leads to wrong predictions and interpretations. Traditional approaches use an additional regression model (e.g. Gaussian processes) to account for the underlying model error. This can violate physical constraints and/or be infeasible in high dimensions. In this work, we open the black box and unfold the conservation and constitutive laws to estimate model discrepancies accurately. We use Variational Bayes to decrease computational costs and investigate this problem in the setting of a high-dimensional inverse problem.
Optimal reduction of observations for Bayesian inference
We consider the Bayesian inference problem of fitting a Gaussian linear model from overabundant observations corrupted by independent Gaussian noise. The goal of this work is to determine low-dimensional subspaces onto which the observations can be projected without degrading the posterior distribution of the inferred model. We target in particular big-data inversion and assimilation applications in geosciences and weather forecasting.
In this work, we address two situations depending on the availability of the actual observations, corresponding to a posteriori reduction when the data are available and a priori reduction when they are not. The projection space is defined as the minimizer of cost functions derived from information theory. Specifically, we consider and contrast strategies based on the minimization of the (possibly expected) Kullback-Leibler divergence, the log-det divergence, and the Shannon entropy. The optimization problems are formulated in terms of the reduced basis for the projection space. This formulation yields invariance properties (to rotation and scaling) that are subsequently exploited using efficient Riemannian optimization algorithms. We show that these algorithms can be particularly efficient when the noise structure of the linear model is correctly treated.
The strategies are first compared in a Bayesian linear regression setting, monitoring the convergence of the posterior distributions when the dimension of the projection space increases. The robustness of the a priori strategies is also numerically assessed. Finally, we discuss the extension of the approaches to nonlinear models.
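The sketch below illustrates the quantity being controlled: the Kullback-Leibler divergence between the posterior obtained from the full set of observations and the posterior obtained after projecting them onto a low-dimensional subspace (a minimal example with a random, not an optimized, projection basis; all dimensions and noise levels are hypothetical).

import numpy as np

def gaussian_posterior(G, y, noise_cov, prior_cov):
    # Posterior mean and covariance for the linear-Gaussian model y = G m + e
    Cinv = np.linalg.inv(prior_cov) + G.T @ np.linalg.solve(noise_cov, G)
    cov = np.linalg.inv(Cinv)
    return cov @ G.T @ np.linalg.solve(noise_cov, y), cov

def kl_gaussians(m0, C0, m1, C1):
    # KL( N(m0, C0) || N(m1, C1) ), used here as the reduction cost function
    d = len(m0)
    C1inv = np.linalg.inv(C1)
    return 0.5 * (np.trace(C1inv @ C0) + (m1 - m0) @ C1inv @ (m1 - m0)
                  - d + np.log(np.linalg.det(C1) / np.linalg.det(C0)))

# Hypothetical over-abundant observations of a 3-parameter model
rng = np.random.default_rng(0)
G = rng.normal(size=(500, 3))
y = G @ np.array([1.0, -0.5, 0.3]) + rng.normal(0.0, 0.1, 500)
noise, prior = 0.01 * np.eye(500), np.eye(3)

# a posteriori reduction: project the observations onto a 3-dimensional subspace
Phi = np.linalg.qr(rng.normal(size=(500, 3)))[0]      # illustrative (not optimized) basis
m_full, C_full = gaussian_posterior(G, y, noise, prior)
m_red, C_red = gaussian_posterior(Phi.T @ G, Phi.T @ y, Phi.T @ noise @ Phi, prior)
print(kl_gaussians(m_full, C_full, m_red, C_red))     # posterior degradation due to projection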
A wide variety of inverse problems arises in science, engineering, and economics. As for forward problems, inverse problems are generally solved under uncertainty. These uncertainties have different origins, e.g. the values of parameters that are not part of the inverse problem, the quality of the experimental data, or numerical aspects of the forward problem that is solved repeatedly during the inversion.
In addition to these sources of uncertainty, the regularization parameters required to solve the inverse problem in a robust manner often depend, via discrepancy principles, on values (mainly the assumed noise level) that are likewise never perfectly known.
This contribution discusses the possible scatter of the results of inverse problems and, by means of a sensitivity analysis, how this scatter can be attributed to uncertainties in the input parameters of the inverse problem.
The application examples are inspired by inverse problems occurring in Civil Engineering.
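The sketch below illustrates this dependence for the simplest case of Tikhonov regularization with Morozov's discrepancy principle: the assumed noise level is itself an uncertain input, and varying it changes the selected regularization parameter and hence the inverse solution (the test problem, sizes and noise levels are hypothetical).

import numpy as np

def tikhonov(G, y, alpha):
    # Tikhonov-regularized solution of G m = y
    return np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ y)

def alpha_by_discrepancy(G, y, delta, alphas=np.logspace(-8, 2, 200)):
    # Largest alpha whose residual norm does not exceed the assumed noise level delta
    for a in alphas[::-1]:
        if np.linalg.norm(G @ tikhonov(G, y, a) - y) <= delta:
            return a
    return alphas[0]

# Hypothetical ill-conditioned problem; vary the *assumed* noise level to see
# how strongly the inverse solution scatters with this uncertain input
rng = np.random.default_rng(0)
G = np.vander(np.linspace(0.0, 1.0, 50), 8, increasing=True)
m_true = rng.normal(size=8)
y = G @ m_true + rng.normal(0.0, 0.01, 50)
true_delta = 0.01 * np.sqrt(50)
for assumed_delta in (0.5 * true_delta, true_delta, 2.0 * true_delta):
    a = alpha_by_discrepancy(G, y, assumed_delta)
    print(assumed_delta, a, np.linalg.norm(tikhonov(G, y, a) - m_true))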
Bayesian techniques for parameter estimation in linear PDEs with noisy boundary conditions
Prof. Marco Iglesias | University of Nottingham | United Kingdom
PhD Zaid Sawlan | King Abdullah University of Science and Technology | Saudi Arabia
Prof. Marco Scavino | Universidad de la República | Uruguay
Prof. Raúl Tempone | King Abdullah University of Science and Technology | Saudi Arabia
Dr. Christopher Wood | University of Nottingham | United Kingdom
In this talk, we present a novel adaptation of a hierarchical Bayesian framework, introduced in [Ruggeri et al., 2016], for parameter estimation in linear time-dependent partial differential equations with noisy boundary conditions. For given model error assumptions, we obtain the joint likelihood of the quantities of interest and the initial and boundary conditions. The nuisance time-dependent boundary conditions are then marginalized out, and a fast analytical approximation technique provides reliable estimates of the parameters of interest.
The proposed method is exemplified on a real problem: an experimental case study conducted in an environmental chamber, with measurements recorded every minute over a five-day period from temperature probes and heat flux sensors placed on both sides of a solid brick wall [Iglesias et al., 2017].
The one-dimensional heat equation with unknown initial temperature and Dirichlet boundary conditions is used to model the heat transfer through the wall. After marginalizing the boundary conditions, which act as nuisance parameters, we obtain the approximate posterior distributions of the wall parameters and the initial temperature. The results show that our technique reduces the bias error of the estimates of the wall parameters compared to other approaches in which the boundary conditions are assumed to be non-random. We also calculate the information gain from the experiment, in order to recommend how to efficiently minimize the duration of the measurement campaign and how to determine the path of the external temperature oscillation.
Finally, we introduce a sequential Bayesian setting for parametric inference in initial-boundary value problems related to linear parabolic partial differential equations. The performance of the new marginalized Ensemble Kalman filter algorithm is compared with the previous method through the analysis of the experimental data collected in the environmental chamber.
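For orientation, the sketch below shows a minimal forward model of the type calibrated here: an explicit finite-difference solver for the one-dimensional heat equation driven by Dirichlet boundary temperatures. It is an illustrative stand-in only; the discretization, names and numbers are assumptions, and the actual study marginalizes the noisy boundary conditions analytically rather than inserting them directly.

import numpy as np

def heat_forward(alpha, u0, left_bc, right_bc, dx, dt):
    # Explicit (FTCS) solver for u_t = alpha * u_xx across the wall thickness,
    # driven by time series of boundary temperatures on both wall faces
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    u = u0.copy()
    history = [u.copy()]
    for tl, tr in zip(left_bc, right_bc):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0], u[-1] = tl, tr                 # impose the face temperatures
        history.append(u.copy())
    return np.array(history)

# Hypothetical wall: 0.2 m thick, 21 grid points, 1-minute time step, 600 steps
n, dx, dt, alpha = 21, 0.01, 60.0, 5e-7
u0 = np.full(n, 15.0)
T = heat_forward(alpha, u0, np.full(600, 20.0), np.full(600, 10.0), dx, dt)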
Dynamically adaptive data-driven simulation of extreme hydrological flows
Hydrological hazards such as storm surges and tsunamis are physically complex events that are very costly in terms of loss of human life and economic productivity. Such disasters could be mitigated through improved real-time emergency evacuation and through the development of resilient infrastructure using data-driven computational modeling. We investigate a novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model is based on adaptive mesh refinement (AMR), a process by which a computational mesh adapts in time and space to the current state of a simulation. The forward solution is combined with ensemble-based data assimilation methods, whereby observations from an event are assimilated to improve the veracity of the solution. The novelty of our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested with twin experiments and with actual data from the Chile tsunami of February 27, 2010.
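A minimal sketch of the ensemble filtering building block referred to above (a stochastic ensemble Kalman filter analysis step with a linear observation operator; names, sizes and error levels are illustrative, and the actual system couples this step with AMR):

import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    # Stochastic EnKF analysis step: ensemble is (n_state, n_ens), obs is (n_obs,)
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_ens - 1)                         # forecast sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    perturbed = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, n_ens).T
    return ensemble + K @ (perturbed - H @ ensemble)  # analysis ensemble

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 50))                      # 100 state variables, 50 members
H = np.zeros((3, 100)); H[[0, 1, 2], [10, 40, 80]] = 1.0   # three hypothetical gauges
analysis = enkf_update(ens, np.array([0.3, -0.1, 0.2]), H, 0.05 * np.eye(3), rng)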
Prof. Michael Shields | Johns Hopkins University | United States
PhD Dimitris Giovanis | Johns Hopkins University | United States
The quantification of uncertainties (UQ) in the design process of engineering structures is an important field of growing interest. While most modern design codes rely on partial safety factors calibrated to a target structural reliability, stochastic modeling requires a simulation-based approach in which a vast number of evaluations of the response are performed in order to identify its probability law. In this work, a new adaptive stochastic simulation-based method for UQ is proposed, based on observing variations of the projection of the solution onto the Grassmann and Stiefel manifolds. These manifolds have a nonlinear geometry and, as a consequence, attributes of the response that cannot be identified in Euclidean spaces become tractable. By applying the singular value decomposition (SVD) to the high-dimensional solution corresponding to a realization of the uncertain input parameters, we obtain a point on the Grassmann manifold and subsequently in its corresponding tangent space (i.e., the Stiefel manifold). In a manner similar to the simplex stochastic collocation method, the input parameter space is discretized into a set of simplex elements using a Delaunay triangulation. Within each element, variations of the corresponding response on the Grassmann manifold are estimated by measuring the geodesic distances between the vertex subspaces. Elements with large variations on the Grassmann manifold are subsampled and refined. The same procedure is repeated for the points located in the Stiefel manifold which, being a tangent space, is flat and therefore more amenable to interpolation.
Witteveen, J.A.S., and Iaccarino, G. (2012). "Simplex stochastic collocation with random sampling and extrapolation for nonhypercube probability spaces." SIAM Journal on Scientific Computing 34(2): A814–A838.
Amsallem, D., and Farhat, C. (2008). "Interpolation method for adapting reduced-order models and application to aeroelasticity." AIAA Journal 46(7): 1803–1813.
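A minimal sketch of the geodesic distance computation used in the method above to measure subspace variation between element vertices (the snapshot matrices and subspace dimension below are illustrative):

import numpy as np

def grassmann_distance(A, B):
    # Geodesic distance between the column spaces of A and B via principal angles
    Ua, _ = np.linalg.qr(A)
    Ub, _ = np.linalg.qr(B)
    s = np.linalg.svd(Ua.T @ Ub, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))          # principal angles
    return np.linalg.norm(theta)                      # arc-length (geodesic) distance

# Subspaces obtained from SVDs of two hypothetical high-dimensional response snapshots
rng = np.random.default_rng(0)
Y1, Y2 = rng.normal(size=(1000, 5)), rng.normal(size=(1000, 5))
U1 = np.linalg.svd(Y1, full_matrices=False)[0][:, :3]
U2 = np.linalg.svd(Y2, full_matrices=False)[0][:, :3]
print(grassmann_distance(U1, U2))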
Modeling high-dimensional inputs with copulas for uncertainty quantification problems
In the context of uncertainty quantification (UQ) for computational models, a probabilistic representation of the input parameters is necessary. In most real-world settings where a data set of inputs is available, a joint distribution needs to be inferred, and it usually exhibits dependence between the different parameters. Characterizing dependencies in high dimensions may be challenging. Moreover, some efficient techniques such as polynomial chaos expansions (PCE) require that the input parameters be independent. In practice, the physical input variables are therefore transformed into independent auxiliary variables, e.g. through the Rosenblatt transform.
In this contribution we propose to model the joint distribution of the input variables by vine copulas. Copulas are families of joint probability distributions that allow one to represent dependencies among variables separately from their marginal distributions. Vine copulas further ease the copula estimation problem in high dimension by factorizing the copula into conditional pair copulas of the components. Additionally, their formulation makes it easy to derive both the direct and inverse Rosenblatt transforms. This property provides an efficient way both to de-correlate the input variables (e.g. to perform PCE) and to generate space-filling samples of the input parameters, e.g. by back-transforming Latin Hypercube samples or quasi-random numbers such as Sobol' sequences. The resulting sample enables one to perform a large class of UQ analyses (e.g. reliability analysis or propagation by PCE) at limited computational cost.
We exemplify the proposed method on previously published data from earthquake signals. The parameters of a synthetic earthquake generator are represented both by vine copulas and by a Gaussian copula; the latter, employed in previous publications, is used here for comparison. The synthetic earthquake signals are then used as the input of simple mechanical oscillators, and the resulting statistics of the output displacements are compared.
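As a simple illustration of the inverse Rosenblatt construction mentioned above, the sketch below samples dependent inputs through a Gaussian copula (a stand-in for the vine copulas of the contribution; the marginals and correlation are hypothetical). Latin Hypercube or quasi-random uniforms can be substituted for the pseudo-random normals to obtain space-filling samples.

import numpy as np
from scipy import stats

def sample_gaussian_copula(marginals, corr, n, rng):
    # Correlated standard normals -> copula uniforms -> marginal quantile transforms
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(marginals))) @ L.T
    u = stats.norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

rng = np.random.default_rng(1)
marginals = [stats.lognorm(s=0.3), stats.gumbel_r(loc=2.0, scale=0.5)]   # illustrative marginals
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
x = sample_gaussian_copula(marginals, corr, 10_000, rng)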
Numbers or Structures: On the Futures of Structural Reliability?
In the beginnings of structural reliability, the problems were mainly of the following kind: a mathematical/mechanical model was given for which certain properties had to be found, in general extreme value distributions.
Over the years this remained the undisputed main topic: given a model, produce an estimate of the failure probability.
There were different approaches. In Monte Carlo-based procedures, one tried to improve crude simulation by refined concepts such as importance sampling or subset simulation.
In FORM/SORM concepts, one starts by introducing a simplified geometric structure approximating the limit state domain, and the initial estimate obtained from this region is refined by importance sampling or response surfaces. Often both concepts were combined into hybrid methods.
Now, one sees that the problems in structural reliability are changing.
Problems now involve high-dimensional spaces and limit state functions that are outputs of finite element packages. So there is often no longer a well-defined mathematical model; the function values come from something like a black box.
So, with a simple underlying structure lost, is it still the main task of structural reliability to produce numbers? Input quantities for the FEM programs are often uncertain, so the obtained failure probabilities are wrong anyway.
Plato said that decisions should be based on knowledge, not on numbers. How to find knowledge instead of numbers?
This can be achieved by studying the geometry of the regions of the limit state surface where the failure events are most likely, i.e. near the beta points, and the distributions of the output quantities derived from those of the input quantities. This should result in finding structures that give information about the causes of failure.
So it might be worth considering a shift of focus in structural reliability from number crunching towards the detection and study of the geometric and probabilistic structures responsible for failures.
Quantification of Uncertainty Resulting from Microstructure Morphology Variation Based on Statistically Similar Representative Volume Elements
Prof. Daniel Balzani | TU Dresden; Dresden Center for Computational Materials Science (DCMS) | Germany
Improved design requirements have led to a demand for materials that combine advantageous properties such as high strength and high ductility. This combination can be achieved by making use of pronounced microstructures. As the microstructure morphology is subject to unavoidable variation, the macroscopic behavior is uncertain. Hence, when e.g. computing the fail-safety of structures made of those modern materials, it is reasonable to quantify the microstructure-based uncertainty. Instead of carrying out a large number of tests with specimens of the material to quantify this uncertainty, we propose a numerical approach exploiting the concept of Statistically Similar Representative Volume Elements (SSRVEs). SSRVEs consider artificial microstructure morphologies, which are fitted to the real microstructure in the sense of chosen higher-order statistical measures. By defining bounds on these statistical measures and considering all SSRVEs within these bounds, it is possible to characterize the variation of the microstructure morphology. The resulting set of SSRVEs is used to perform a Monte Carlo calculation with finite elements to obtain homogenized properties of the material on the macroscale. Based on the resulting homogenized properties, statistics of the macroscopic material parameters are quantified. Their statistical moments depend on the amount of available and considered microstructure data and on their statistical distribution. The Monte Carlo computation is automated using an extended Finite Cell Method (FCM), which allows the use of non-conforming meshes for each of the considered SSRVEs. The proposed method is demonstrated for an advanced high-strength steel.
Multi-scale failure analysis with polymorphic uncertainties for optimal design of rotor blades
Wind turbine blades are thin-walled spatial structures typically consisting of two composite shells and one or two shear webs assembled with adhesive bonds. Full-size mechanical tests of rotor blades are mandatory for certification but very costly. The definition of representative sub-components typically involves expert knowledge on the one hand and is impeded by limited information on specific physical parameters on the other hand, leading to polymorphic uncertainties.
As an important example of a sub-component, the Henkel beam has been developed for testing adhesive bonds, which play a key role in the structural integrity and reliability of rotor blades. Small defects, i.e. voids and delaminations, are common in the bond lines due to the manufacturing and application processes and can cause multiple tensile cracks, leading to macroscopic separation between spar cap and shear web.
Applying a transformation of the stochastic microstructure to a reference configuration, we build up an uncertain microscopic model in a UQ setting, which leads to a high-dimensional problem. By numerical upscaling we construct a statistical surrogate model, using modern reduction methods, adaptivity and low-rank compression via hierarchical tensor representations to overcome the curse of dimensionality. The structure of the voids, encoded in oscillating parameter coefficients, is then resolved via multiscale FEM.
Deterioration affects all infrastructures that support transport networks and energy supply and can lead to damage and failure. The prediction of deterioration effects in structural systems is associated with high uncertainty. Regular inspections and monitoring systems can reduce this uncertainty, at a cost. There is therefore an economic case for optimizing the monitoring and maintenance plans. For multi-component systems, the number of inspection and maintenance strategies increases exponentially with the number of components, and identifying the strategy with the lowest total life-cycle cost is a PSPACE-complete problem. For realistic structures, the optimization problem can only be solved approximately. We propose to approximate the solution with a heuristic approach, in which decision policies are defined by a set of parameters, both discrete (such as the number of components inspected) and continuous. The reliability of the system is computed by modeling the deterioration process of each component with a hierarchical dynamic Bayesian network, in which the states of the random variables are discretized. The probability of failure of the system is updated with the inspection and repair history by Bayesian inference, here reduced to linear matrix operations. The expected life-cycle cost of a decision policy is estimated by Monte Carlo simulation over the inspection and repair history. The fitness of the approximate solution of the optimization problem relies on the choice of the heuristic. In particular, it should take into account a proxy for the value of information of inspecting one particular component at a point in time, given the past inspection and repair history of the structure. To reduce the number of function evaluations, we also hypothesize that the objective function is convex.
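A minimal sketch of the matrix form of the Bayesian updating referred to above, for a single component with a discretized deterioration state (the transition matrix, prior and inspection likelihood are hypothetical numbers):

import numpy as np

def filter_deterioration(T, prior, inspections):
    # T[i, j] = P(next state j | current state i); inspections is a list of
    # likelihood vectors P(observation | state), or None when no inspection occurs
    p = prior.copy()
    history = []
    for like in inspections:
        p = T.T @ p                      # predict: one deterioration step
        if like is not None:
            p = like * p                 # Bayesian update with the inspection outcome
            p /= p.sum()
        history.append(p.copy())
    return history

T = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])   # 3 condition states
prior = np.array([1.0, 0.0, 0.0])
inspections = [None, np.array([0.7, 0.25, 0.05]), None]             # inspect in year 2 only
print(filter_deterioration(T, prior, inspections)[-1])              # state probabilities, year 3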
Bayesian Updating of Rare Events with Meta Model-Based Reliability Methods
Data on the performance of engineering systems can be used to update system parameters and predictions of the system reliability through a Bayesian analysis. The recently introduced BUS (Bayesian Updating with Structural reliability methods) framework allows for the solution of Bayesian inference-type problems by means of structural reliability methods. In brief, it recasts acceptance-rejection sampling as a reliability problem. This has two advantages: First, reliability methods such as subset simulation or importance sampling can be incorporated in order to maintain computational efficiency even for problems with peaky likelihood functions. Second, it allows for natively integrating data assimilation routines into the reliability analysis workflow by using the same method for both tasks. Since most reliability methods require a considerable number of typically costly evaluations of the numerical model, it is desirable to employ meta model formulations that reproduce model outcomes as accurately as possible at a small fraction of the computational expense. Meta model-based reliability methods employ a limited number of model evaluations to fit/learn the parameters of a mathematical model and then employ the latter to perform the reliability assessment. We employ spectral decomposition-based meta models, which, up to slight deviations in formalism and construction, are based on the common concept of constructing functional bases with respect to the input random variable space and projecting the numerical model onto them using an experimental design. Combining reliability methods with spectral meta models within the BUS framework, we aim at obtaining accurate and efficient estimates of rare event probabilities conditional on system performance data.
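A minimal sketch of the BUS reformulation (with crude Monte Carlo standing in for subset simulation or importance sampling, and a hypothetical one-dimensional prior and likelihood): an auxiliary uniform variable p is appended to the prior, and the "failure" domain of the limit state h(x, p) = p - c L(x) collects exactly the accepted samples of the rejection sampler.

import numpy as np

def bus_limit_state(x, p, log_like, log_c):
    # h <= 0 marks accepted (posterior) samples; c must satisfy c <= 1 / max L
    return p - np.exp(log_c + log_like(x))

rng = np.random.default_rng(0)
log_like = lambda x: -0.5 * ((x - 1.0) / 0.2) ** 2    # hypothetical likelihood, max L = 1
log_c = 0.0                                           # valid here since L(x) <= 1
x = rng.normal(0.0, 1.0, 100_000)                     # prior samples
p = rng.uniform(size=x.size)                          # auxiliary uniform variable
accepted = x[bus_limit_state(x, p, log_like, log_c) <= 0]
print(accepted.mean(), accepted.std())                # posterior summary statistics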
Uncertainty reduction for complex systems with higher dimensional decomposed, optimal solution spaces
Complex engineering systems must meet numerous requirements. These requirements are often unknown in early development phases and can only be estimated. Instead of searching for a single optimal and feasible design, the aim of the approach presented here is to find a set of feasible designs by identifying a so-called solution space. In recent related studies, the focus has been put on seeking optimal box-shaped solution spaces, which are maximal with respect to their volume and allow a decoupled consideration of all design variables. However, those boxes typically cover only a small portion of the complete solution space. Hence, accepting a certain coupling of corresponding design variables can reduce this loss of solution space. Mathematically speaking, a specific subset of the complete solution space is sought that can be decomposed into higher-dimensional sets whose Cartesian product has maximal volume. For that purpose, an algorithm to solve this optimization problem is presented after a short introduction to the theoretical background.
Cross Entropy-Based Importance Sampling in Low and High Dimensions with a New Mixture Model
In structural reliability analysis, the probability of failure of an engineering structure is defined through a potentially high-dimensional probability integral. In most cases, this integral cannot be solved analytically. The probability of failure is thus often estimated with Monte Carlo-based sampling approaches due to their ability to cope with complex numerical models. Alternatively, the importance sampling (IS) method significantly improves the efficiency of crude Monte Carlo, provided that an effective IS proposal density is chosen. The efficiency of IS with Gaussian proposal densities decreases with an increasing number of random variables; this is due to the fact that the probability mass in an equivalent high-dimensional standard normal space concentrates around the surface of a hypersphere, known as the important ring. To account for this behavior, the von Mises-Fisher mixture model has been proposed as a parametric density for IS in high-dimensional problems. The parameters of this distribution model can be estimated through application of the Cross Entropy (CE) method. The CE method is an adaptive sampling approach that determines the sampling density by minimizing the Kullback-Leibler divergence between the theoretically optimal IS density and a chosen parametric family of distributions. In this contribution, we combine the von Mises-Fisher distribution with the Nakagami distribution, which acts as the IS density for the radius of the hypersphere. For the parameter updating of the new mixture model within the CE method, we propose a modified version of the Expectation-Maximization (EM) algorithm and introduce the corresponding updating rules. Our study shows that the proposed mixture model enables efficient IS in both low- and high-dimensional component and system reliability problems.
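A minimal sketch of the cross-entropy iteration described above, using a single independent Gaussian proposal instead of the von Mises-Fisher/Nakagami mixture of the contribution (limit state, dimensions and tuning parameters are illustrative):

import numpy as np
from scipy import stats

def cross_entropy_is(g, d, n=2000, rho=0.1, iters=20, rng=None):
    # Adaptive CE updates of an independent Gaussian proposal, then one IS estimate
    rng = rng or np.random.default_rng()
    mu, sigma = np.zeros(d), np.ones(d)
    for _ in range(iters):
        u = mu + sigma * rng.standard_normal((n, d))
        gu = np.array([g(ui) for ui in u])
        gamma = max(np.quantile(gu, rho), 0.0)          # intermediate failure level
        lr = np.exp(stats.norm.logpdf(u).sum(1) - stats.norm.logpdf(u, mu, sigma).sum(1))
        w = (gu <= gamma) * lr                          # weighted elite samples
        mu = (w[:, None] * u).sum(0) / w.sum()
        sigma = np.sqrt((w[:, None] * (u - mu) ** 2).sum(0) / w.sum())
        if gamma == 0.0:                                # target failure level reached
            break
    u = mu + sigma * rng.standard_normal((n, d))        # final importance sampling step
    gu = np.array([g(ui) for ui in u])
    lr = np.exp(stats.norm.logpdf(u).sum(1) - stats.norm.logpdf(u, mu, sigma).sum(1))
    return ((gu <= 0) * lr).mean()

g = lambda u: 3.5 - u.sum() / np.sqrt(len(u))           # hypothetical linear limit state
print(cross_entropy_is(g, d=10, rng=np.random.default_rng(0)))   # reference: Phi(-3.5) = 2.3e-4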
Risk assessment with enhanced Bayesian network: application to hydropower station
Hector Diego Estrada-Lugo | University of Liverpool | United Kingdom
PhD Edoardo Patelli | Institute for Risk and Uncertainty | United Kingdom
Technological facilities can be seriously affected by extreme weather conditions (among other causes), which can result in NaTech events (technological disasters triggered by natural hazards). In addition, the non-negligible uncertainty associated with the effects of global warming needs to be taken into account in the vulnerability quantification. For these reasons, risk assessments should be carried out in order to evaluate the causes of past accidents or to provide information to prevent them from happening. The Bayesian Network (BN) method has increasingly been used to perform risk assessments, proving to be a reliable and powerful tool. However, this method is restricted to discrete and Gaussian distributions, which leads to discretization of continuous data and a loss of information. Furthermore, the high uncertainty of the variables involved leads to imprecise predictions or analyses of low veracity. In response, Bayesian networks enhanced with reliability methods (EBN) have been found to overcome the limitations of traditional BNs and to quantify the uncertainty affecting the outputs that results from the imprecision and random nature of the available data. Both discrete and continuous probability distributions can be handled by the method, so that the veracity of the risk assessment is as high as possible. The EBN method has been implemented in the general-purpose software OpenCossan. In this project, climate factors such as heavy rainfall, wind speed and flooding were considered as conditions that can trigger severe accidents. Data from the Lianghekou hydropower station in southwest China are used for this case study; where information is missing, suitable data have been selected from the literature. Based on the results obtained, future research will address the limitations of the presented method.
Probabilistic reduced-order modeling for stochastic partial differential equations
Prof. Phaedon-Stelios Koutsourelakis | TU München | Germany
We discuss a Bayesian formulation for the coarse-graining (CG) of PDEs whose coefficients (e.g. material parameters) exhibit random, fine-scale variability. The direct solution of such problems requires grids fine enough to resolve this variability, which unavoidably requires the repeated solution of very large systems of algebraic equations. We establish a physically inspired, data-driven coarse-grained model which learns an effective, low-dimensional representation of the underlying random field that is predictive of the fine-grained (FG) model response. This ultimately allows one to replace the computationally expensive FG model by a generative probabilistic model based on evaluating the much cheaper CG model several times. Moreover, the model yields probabilistic rather than single-point predictions, which enables the quantification of the unavoidable epistemic uncertainty that is present due to the information loss that occurs during the coarse-graining process. This in turn allows for an adaptive refinement of the CG model discretization until a user-specified model accuracy is reached.
Interpolative Approach to UQ of Non-Smooth Random Quantities in the Nonlinear Schrodinger Equation
Prof. Adi Ditkowski | Tel Aviv University | Israel
We present a new interpolative, spectral approach for the numerical uncertainty quantification (UQ) of non-smooth properties of a random PDE. This approach is applied to the study of the Nonlinear Schrodinger equation (NLS) and yields new results in the field of nonlinear optics.
Given a nonlinear PDE with low-dimensional, smooth and random terms, the standard approach for computing the solution’s mean and moments is the polynomial chaos expansion (PCE). Due to the spectral nature of the PCE, it converges poorly for non-smooth functions. Our research suggests, however, that the interpolant obtained by the PCE algorithm can be the departure point for a UQ method for non-smooth functions, when these depend on the smoothly varying random solution.
The PCE interpolant of the NLS solution allows us to compute the statistical parameters of non-smooth global functionals of the solution, e.g., a coherent laser beam’s complex phase. We sample the PCE interpolant arbitrarily many times at a low computational cost. Exploiting the PCE’s pointwise convergence, we obtain “Monte-Carlo”-like results at the computational cost of a spectral method!
Computing the PCE interpolant can be more informative than the computed moments. For example, we study the random collision of solitons in the NLS. In this case, interpolating the solution in the random parameter space reveals that there is a finite number of possible outcomes for these collisions. A standard UQ analysis, which is based on moments and distributions, is therefore inadequate for this study.
For the computation of probability distribution function (PDF) of a non-smooth variable, we show that local interpolants of the random solution, such as hp-pc gPC and splines, are preferable. This is due to their less oscillatory behavior, and due to the PDF's sensitivity to small errors in close-to-zero derivatives.
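A minimal illustration of the idea of sampling a PCE interpolant cheaply to obtain "Monte-Carlo"-like statistics of a non-smooth functional (a one-dimensional toy response and functional, not the NLS model; the Hermite degree and threshold are assumptions):

import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

g = lambda xi: np.tanh(xi) + 0.1 * xi**2                 # hypothetical smooth random response

order = 12
nodes, weights = He.hermegauss(order + 1)                # probabilists' Hermite quadrature
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis_k = He.hermeval(nodes, [0.0] * k + [1.0])
    # projection coefficient: E[g(X) He_k(X)] / E[He_k(X)^2], with X ~ N(0, 1)
    coeffs[k] = np.sum(weights * g(nodes) * basis_k) / (np.sqrt(2.0 * np.pi) * factorial(k))

surrogate = lambda xi: He.hermeval(xi, coeffs)           # the PCE interpolant

xi = np.random.default_rng(0).standard_normal(1_000_000) # cheap resampling of the surrogate
print(np.mean(surrogate(xi) > 0.5))                      # statistics of a non-smooth functional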
Transdimensional MCMC algorithms for Bayesian inference of random fields
The reduction of uncertainty in model properties is of primary concern in the scientific and engineering community. Observed data can be used in combination with mathematical models to obtain information about these properties. System properties often vary in space, and random field discretization techniques are typically required for their representation. A major challenge is associated with the selection of the number of terms in the representation that provides an accurate approximation. In this context, hierarchical Bayesian inference provides a general framework that specifies a prior distribution for both the number of terms in the random field discretization (the dimension) and the uncertain parameters. This type of inference can be handled using Markov chain Monte Carlo (MCMC) algorithms that are able to operate in spaces of varying dimension and to simultaneously update the dimension and the parameters.
In this study, the Karhunen-Loève expansion is applied for the representation of the target random field, where the truncation of the expansion is considered as a hyperparameter. A two-stage MCMC algorithm and a Metropolis-within-Gibbs algorithm, both able to explore the space by jumping between different dimensions, are used for the Bayesian analysis. Furthermore, a strategy to improve the efficiency of the algorithms is discussed. The proposed approach is tested on a cantilever beam with spatially variable flexibility in order to demonstrate its accuracy and performance.
Cotter, S., Roberts, G., Stuart, A., and White, A. (2013). "MCMC methods for functions: Modifying old algorithms to make them faster." Statistical Science 28(3): 424–446.
Green, P.J. (2003). "Trans-dimensional Markov chain Monte Carlo." In: Highly Structured Stochastic Systems. Oxford University Press, Chap. 6, pp. 179–198.
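A minimal sketch of the truncated Karhunen-Loève representation whose number of retained terms is treated as a hyperparameter above (a discretized 1D field with exponential covariance; grid, variance and correlation length are illustrative):

import numpy as np

n, length, sigma, corr_len = 200, 1.0, 0.3, 0.2
x = np.linspace(0.0, length, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # covariance matrix

eigval, eigvec = np.linalg.eigh(C)                    # discrete KL eigenpairs
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

def kl_realization(m, xi):
    # Field realization with m retained terms; xi are i.i.d. standard normal coefficients.
    # The truncation m is the (hyper)parameter a transdimensional sampler would update.
    return eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)

rng = np.random.default_rng(0)
field = kl_realization(10, rng.standard_normal(10))   # e.g. a flexibility fluctuation along a beam
print(field[:5])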