One of the most critical assumptions in uncertainty quantification studies is the choice of the distributions of the uncertain input variables that are propagated through the numerical model. In general, such pdfs come from various sources (statistical inference, design or operating rules, expert judgment, calibration, etc.) and are therefore established with a certain level of accuracy or confidence. Moreover, in many applications, related for example to industrial safety, engineers are unable to assign a given probability distribution to some of the inputs. This happens, for instance, for inputs corresponding to physical parameters for which no data are available.
Hence, rigorously justifying the overall approach requires quantifying the impact of the pdf modeling assumptions on the quantity of interest (QoI). In this context, “input pdf robustness analysis” has recently been defined as a particular setting of the sensitivity analysis domain (like the screening setting or the quantitative partitioning one). Various QoIs can be considered, such as the mean of the model output, its variance, the probability that the output exceeds a threshold, a quantile of the output, or even sensitivity indices.
This Minisymposium, which will be held in two parts (4 presentations in each part), aims at presenting several recent theoretical developments on this subject, as well as practical and industrial issues.
An informative law perturbation approach in Robustness Analysis
Roman Sueur | EDF R&D | France
Robustness analysis (RA), which aims to quantify the impact of a lack of knowledge about the input laws of a UQ model, has been an emerging field for several years. Basically, RA methods consist in considering a whole set of potential input laws, obtained by changing one or several parameters of a reference probabilistic model, and in analysing the related output distributions. In this talk we will first present the different industrial risk-management problems these methods can address. Then we will examine how RA constitutes a new area and how it differs from global sensitivity analysis (GSA), from which it derives. We will subsequently try to identify the specific methodological challenges raised in the RA framework.
Finally, we will present a new approach to defining the variation of the input distribution that one has to introduce when implementing an RA methodology. As a matter of fact, any robustness study implicitly establishes a metric on the chosen set of input laws, since the pursued goal is to compare the output variations caused by error terms on different input distributions. However, none of the law perturbation methods proposed so far allows a satisfactory comparison between heterogeneous and incommensurable physical inputs. Hence we introduce an approach based on information geometry, which provides a coherent equivalence criterion for variations of the laws of different inputs, and also for variations of different parameters of a particular input distribution.
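To illustrate the underlying principle (a minimal numerical sketch, not the talk's method; the parametric families, perturbation sizes, and helper names are assumptions of this illustration), the Fisher information of each parametric family induces a local metric under which perturbations of heterogeneous inputs can be declared equivalent:

```python
import numpy as np

# Hypothetical sketch: for small perturbations, the KL divergence between a
# reference law and its perturbation is approximately (1/2) * I(theta) * dtheta^2,
# where I is the Fisher information. Matching this quadratic length gives a
# coordinate-free way to compare perturbations of different inputs.

def fisher_gaussian_mean(sigma):
    # Fisher information of N(mu, sigma^2) with respect to mu is 1/sigma^2.
    return 1.0 / sigma**2

def fisher_exponential_rate(lam):
    # Fisher information of Exp(lam) with respect to lam is 1/lam^2.
    return 1.0 / lam**2

def equivalent_perturbation(delta_theta, fisher_src, fisher_dst):
    """Perturbation of the destination parameter with the same local
    Fisher-Rao length as delta_theta on the source parameter."""
    length2 = fisher_src * delta_theta**2
    return np.sqrt(length2 / fisher_dst)

# A mean shift of 0.5 on a N(0, 2^2) input ...
d_mu = 0.5
# ... corresponds to this rate shift on an Exp(lam=3) input:
d_lam = equivalent_perturbation(d_mu, fisher_gaussian_mean(2.0),
                                fisher_exponential_rate(3.0))
# -> 0.75
```

The point of the sketch is only that the criterion is intrinsic to each family, so a mean shift and a rate shift become commensurable.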
Robustness of Sobol’ indices to distributional uncertainty
Joseph Hart | Sandia National Laboratories | United States
Global sensitivity analysis (GSA) quantifies the influence of uncertain variables in a mathematical model. The Sobol’ indices, a commonly used tool in GSA, seek to do this by attributing to each variable its relative contribution to the variance of the model output. In order to compute Sobol’ indices, the user must specify a probability distribution for the uncertain variables. This distribution is typically unknown and must be chosen using limited data and/or knowledge. The usefulness of the Sobol’ indices depends on their robustness to this distributional uncertainty. This talk presents a method which uses “optimal perturbations” of the probability density function to analyze the robustness of the Sobol’ indices. The proposed method is a post-processing step after estimating the Sobol’ indices. For typical applications with computationally intensive model evaluations, the robustness analysis requires negligible computational cost in comparison to estimating the Sobol’ indices.
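As background for the quantities whose robustness is analyzed, here is a minimal pick-freeze sketch of a first-order Sobol' index estimator (an illustrative assumption of this write-up, not the talk's robustness method; it assumes independent uniform inputs and uses a toy additive model):

```python
import numpy as np

def model(x):
    # Toy additive model: input 0 carries most of the output variance.
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

def first_order_sobol(f, d, i, n=100_000, seed=0):
    """Pick-freeze estimate of the first-order Sobol' index S_i,
    assuming independent U(0,1) inputs (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n, d))
    c = rng.uniform(size=(n, d))
    c[:, i] = a[:, i]  # "freeze" coordinate i across the two samples
    ya, yc = f(a), f(c)
    # Cov(Y_A, Y_C) estimates Var(E[Y | X_i]); divide by total variance.
    vi = np.mean(ya * yc) - np.mean(ya) * np.mean(yc)
    return vi / np.var(ya)
```

For this toy model the analytic values are S_0 = 4/4.25 ≈ 0.94 and S_1 ≈ 0.06, so the estimator's sampling noise is easy to gauge.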
Quantification of the impact of an imprecise specification of input distributions on reliability analysis results
Guillaume Perrin | CEA/DAM | France
The role of simulation keeps increasing for the reliability analysis of complex systems. Most of the time, these analyses can be reduced to estimating the probability of occurrence of an undesirable event, also called failure probability, using a stochastic model of the system. If the considered event is rare, sophisticated sample-based procedures are generally introduced to get a relevant estimate of the failure probability.
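For concreteness, the crude Monte Carlo baseline for such a failure probability can be sketched as follows (the limit-state function, the convention g < 0 for failure, and the sample size are illustrative assumptions; truly rare events are exactly the case where the sophisticated sample-based procedures mentioned above become necessary):

```python
import numpy as np

def failure_probability(g, sample_inputs, n=1_000_000, seed=0):
    """Crude Monte Carlo estimate of P(g(X) < 0), where g < 0 encodes
    the undesirable event (a common convention, assumed here)."""
    rng = np.random.default_rng(seed)
    x = sample_inputs(rng, n)
    fails = g(x) < 0.0
    p = fails.mean()
    # Standard error of the estimator: for small p it scales like
    # sqrt(p/n), which is why rare events need many code evaluations.
    se = np.sqrt(p * (1.0 - p) / n)
    return p, se

# Toy limit-state: failure when a standard normal "load" exceeds 3.
p, se = failure_probability(
    g=lambda x: 3.0 - x,
    sample_inputs=lambda rng, n: rng.standard_normal(n),
)
```

The standard-error line makes explicit why, the rarer the event, the more the estimate depends on the assumed input distribution relative to the sampling noise.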
Starting from the observation that the results of such an analysis are completely governed by the choice of the distributions of the simulation inputs, this work proposes new reliability-oriented sensitivity indices (ROSI) to identify the model inputs whose distributions have to be particularly well-characterized for the reliability analysis to be relevant.
In particular, it shows how these ROSI can be computed as a simple post-processing of the code evaluations required for the failure probability estimation and, when the true model is replaced by a surrogate model, how the surrogate model uncertainty can also be taken into account in their estimation.
Finally, the usefulness of the proposed ROSI is illustrated on a series of numerical and industrial examples.
Sensitivity analysis with dependence measures under uncertainty of input distribution
Amandine Marrel | CEA/DEN | France
In the probabilistic framework, the uncertainties on the inputs of numerical simulators are modelled by fully or partially known probability distributions. For given probability distributions, global sensitivity analysis (GSA) aims at studying the impact of the input uncertainties on the output. For this, kernel-based dependence measures (namely the Hilbert-Schmidt Independence Criterion, denoted HSIC) are very efficient statistical tools.
When the probability distributions of the inputs are themselves uncertain, it is relevant to quantify their global impact on GSA results, in order to identify the most influential ones and those whose influence is negligible. We call this second-level global sensitivity analysis (GSA2). For this, we propose a new single-Monte-Carlo-loop methodology based on 1st- and 2nd-level HSIC. From a unique sample of inputs/output drawn from a well-chosen probability distribution of the inputs, HSIC-based GSA is performed for various assumed input probability distributions, using weighted estimators of the HSIC measures. Then, 2nd-level HSIC measures between the probability distributions of the inputs and the GSA results are defined and constitute the GSA2 indices.
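The single-loop reweighting idea can be sketched as follows (a hedged illustration: it reweights a simple mean rather than an HSIC estimator, and the Gaussian reference and candidate distributions are assumptions of this sketch; the talk applies the same likelihood-ratio weights inside HSIC estimators):

```python
import numpy as np

# One sample drawn under a reference input pdf is reused, via likelihood-ratio
# weights, to estimate quantities under alternative candidate input pdfs,
# without any new code evaluations.

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)  # single sample from reference N(0, 1)
y = x**2                                # toy model output

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mean_under(mu, sigma):
    """Estimate E[Y] as if X were N(mu, sigma^2), reusing the same sample.
    Self-normalized importance-sampling weights w = f_candidate / f_reference."""
    w = normal_pdf(x, mu, sigma) / normal_pdf(x, 0.0, 1.0)
    return np.average(y, weights=w)
```

For this toy case E[X^2] under N(mu, sigma^2) equals mu^2 + sigma^2, so e.g. `mean_under(0.5, 0.8)` should be close to 0.89, which makes the reweighting easy to check.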
Finally, the most influential 2nd-level uncertainties (identified from GSA2) can be aggregated into a unique GSA by integrating the 1st-level GSA results over the uncertain input distributions.
The whole methodology is illustrated on a test case simulating a severe accident scenario in a nuclear reactor.