Global sensitivity analysis (GSA) aims at quantifying how the uncertainty in the output quantity of interest (QoI) of mathematical models can be apportioned to uncertainties in the input model parameters. Specifically, variance-based GSA enables ranking the importance of model inputs by computing their relative contributions to the variance of the QoI, as quantified by Sobol' indices. In the presence of dependencies among input parameters, Sobol' indices may be difficult to interpret, and one may prefer to compute Shapley effects, which propose an equitable sharing of the model output variance and which originated as a solution concept in cooperative game theory. An alternative approach is derivative-based GSA, which allows an efficient screening for unimportant input parameters. In many applications, the models under study involve many input parameters, which are not necessarily independent, and produce several QoIs, which may be scalar, vectorial, or even functional. The purpose of this minisymposium is to present recent generalizations of both variance-based GSA and derivative-based GSA in this framework. These recent results are validated on toy models and also applied to large-scale applications, such as high-dimensional neuroscience models or avalanche models.
10:30
Gradient-based dimension reduction of multivariate vector-valued functions
Clémentine Prieur | Grenoble Alpes University, LJK-lab, Inria research team AIRSEA | France
Authors:
Olivier Zahm | Grenoble Alpes University, Inria Grenoble Rhône-Alpes, LJK-lab, Inria project team AIRSEA | France
Paul Constantine | University of Colorado Boulder | United States
Clémentine Prieur | Grenoble Alpes University, LJK-lab, Inria research team AIRSEA | France
Youssef Marzouk | Massachusetts Institute of Technology | United States
Many algorithms aimed at solving uncertainty quantification problems have a cost which grows dramatically with the dimension of the input parameter space.
However, many multivariate functions vary primarily along a few directions of the input parameter space. It is then important to identify and exploit this intrinsic low dimension.
A common approach to reducing a function's input dimension is the truncated Karhunen-Loève decomposition, which exploits the correlation structure of the function's input space. In the present work, we exploit not only input correlations but also the structure of the input-output map itself.
Active subspaces are defined as eigenspaces of the average outer product of the function's gradient with itself. They capture the directions along which the function varies the most, in the sense of its output responding most strongly to input perturbations, in expectation over the input measure. In the present work, we generalize the notion of active subspaces to vector-valued functions. We also provide, in this framework, lower bounds for first-order Sobol' indices and upper bounds for total Sobol' indices.
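As a rough illustration of the scalar-valued case, the matrix C = E[∇f(X) ∇f(X)ᵀ] can be estimated by Monte Carlo and eigendecomposed. This is only a minimal sketch on a toy function chosen so that the active direction is known; all names and choices below are illustrative, not the authors' implementation.

```python
import numpy as np

# Toy scalar function f(x) = sin(x0 + 2*x1) + 0.01*x2: it varies strongly
# along the direction (1, 2, 0) and only weakly along x2.
def f_grad(x):
    c = np.cos(x[0] + 2.0 * x[1])
    return np.array([c, 2.0 * c, 0.01])

rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.standard_normal((n, d))            # samples from the input measure

# Monte Carlo estimate of C = E[grad f(X) grad f(X)^T]
G = np.array([f_grad(x) for x in X])
C = G.T @ G / n

# Active subspace = dominant eigenvectors of C
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]          # sort eigenvalues in decreasing order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
```

A sharp drop after the first eigenvalue indicates a one-dimensional active subspace; here the leading eigenvector recovers the direction (1, 2, 0)/√5 up to Monte Carlo error.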
11:00
Using fuzzy clustering for improving the interpretability of multivariate sensitivity analysis
Sebastien Roux | INRA | France
Authors:
Sebastien Roux | INRA | France
Matieyendou Lamboni | Université de Guyane | France
François Lafolie | INRA | France
Samuel Buis | INRA | France
Sensitivity analyses of models having temporal or spatial outputs require the use of Multivariate Sensitivity Analysis (MSA) methods. Many of them are based on a projection step, where model outputs are projected onto an orthogonal basis. Their practical application may therefore be limited by the difficulty of selecting an orthogonal basis which allows extracting relevant and interpretable information on multivariate outputs. On the other hand, clustering techniques, which are designed to identify groups of similar objects, may be particularly well adapted to capturing various behaviors in multivariate model outputs.
In this work, we propose to use a fuzzy clustering procedure to enhance results of MSA on models with multivariate outputs. The main idea relies on the extensive use of the output of fuzzy clustering methods: the Membership Functions (MF, valued in [0,1]), which quantify, for any model response, the degree of membership to each cluster. Noting that MF can be seen as an extension of coordinates within a basis in classical MSA methods, we introduce new sensitivity indices based on clustered outputs:
- Clust-SI: Sensitivity Indices on the MF of a given cluster,
- dClust-SI: Sensitivity Indices on pair-wise MF differences,
- Clust-GSI: Generalized Sensitivity Indices computed on an MF vector.
We present the computation of these indices on a toy example and on a realistic model, and show that the model behaviors can be efficiently captured by the newly proposed indices.
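The membership functions at the core of these indices can be sketched as follows. This is a minimal fuzzy c-means-style MF computation on a toy set of time series, assuming cluster centres are already available from a prior clustering run; the fuzzifier m = 2 and all data are illustrative, not the authors' setup.

```python
import numpy as np

def membership(outputs, centres, m=2.0):
    """MF[i, k] in [0, 1]: degree of membership of output curve i to cluster k."""
    # distances between each output curve and each cluster centre
    dist = np.linalg.norm(outputs[:, None, :] - centres[None, :, :], axis=2)
    dist = np.maximum(dist, 1e-12)                 # guard against zero distance
    # fuzzy c-means formula: u_ik = 1 / sum_l (d_ik / d_il)^(2/(m-1))
    ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# toy multivariate outputs: two groups of noisy time series
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
group_a = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((10, 50))
group_b = np.cos(2 * np.pi * t) + 0.05 * rng.standard_normal((10, 50))
outputs = np.vstack([group_a, group_b])
centres = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

MF = membership(outputs, centres)                  # each row sums to 1
```

An index such as Clust-SI would then treat each column of MF as a scalar QoI in a standard variance-based analysis.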
11:30
Global sensitivity analysis of high dimensional neuroscience models
Pierre Gremaud | NC State University, Department of Mathematics | United States
Authors:
Pierre Gremaud | NC State University, Department of Mathematics | United States
Tim David | University of Canterbury | New Zealand
Joseph Hart | Sandia National Laboratories | United States
Robin Morillo | NC State University | United States
The complexity and size of state-of-the-art cell models have significantly increased, in part due to the requirement that these models possess complex cellular functions which are thought, but not necessarily proven, to be important. Modern cell models often involve hundreds of parameters; the values of these parameters come, more often than not, from animal experiments whose relationship to human physiology is weak, with very little information on the errors in these measurements. The concomitant uncertainties in parameter values result in uncertainties in the model outputs, or Quantities of Interest (QoIs). New Global Sensitivity Analysis (GSA) approaches are required to deal with increased model size and complexity; we present a three-stage methodology consisting of screening (dimension reduction), surrogate modeling, and computing Sobol' indices. The methodology is used to analyze a physiologically validated numerical model of neurovascular coupling which possesses hundreds of uncertain parameters.
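The third stage, computing Sobol' indices, can be illustrated with a standard pick-freeze estimator on the Ishigami function, a classical GSA benchmark with known indices. This sketch stands in for, and is not, the neuroscience pipeline itself.

```python
import numpy as np

# Ishigami function: a classical GSA benchmark with known Sobol' indices
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(1)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent input designs
B = rng.uniform(-np.pi, np.pi, (n, d))
yA = ishigami(A)
var = yA.var()

S = np.empty(d)
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                     # "freeze" input i across both designs
    yCi = ishigami(Ci)
    # first-order index: Cov(f(A), f(Ci)) / Var(f(A))
    S[i] = (np.mean(yA * yCi) - yA.mean() * yCi.mean()) / var
```

With a = 7 and b = 0.1, the exact first-order indices are approximately (0.31, 0.44, 0), which the estimates recover up to Monte Carlo error.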
12:00
Aggregated Shapley effects from an acceptance-rejection sample: application to an avalanche model
Maria Belen Heredia | Grenoble Alpes University, IRSTEA, LJK-lab | France
Authors:
Maria Belen Heredia | Grenoble Alpes University, IRSTEA, LJK-lab | France
Clémentine Prieur | Grenoble Alpes University, LJK-lab, Inria research team AIRSEA | France
Nicolas Eckert | Grenoble Alpes University, IRSTEA | France
Avalanche models depend on poorly known inputs and produce both scalar and functional outputs (e.g., velocity and snow depth along a path). We aim at performing a sensitivity analysis for such models. For that purpose, we first sample inputs with corresponding outputs. Given a portion of an avalanche corridor, we want to keep in our sample only the avalanches which start upstream and propagate downstream of this portion. To achieve this aim, an acceptance-rejection algorithm is run. As a result, the inputs corresponding to the final sample are dependent. Shapley effects, based on an equitable sharing of the model output variance, have recently been proposed as sensitivity measures which remain meaningful when inputs are dependent. For this reason, their use is appropriate to our application. Even if it is possible to estimate the full set of Shapley effects for each scalar output and each component of the discretized functional outputs, we prefer to summarize the information by computing d scalar aggregated Shapley effects, with d the input space dimension. To reduce the estimation cost, we add a preliminary dimension reduction step on the discretized functional outputs, achieved using principal component analysis (PCA). The accuracy of the estimation of aggregated Shapley effects with respect to the sample size and to the truncation argument in the PCA is numerically studied on several test cases.
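The aggregation principle can be sketched as follows. For brevity, the per-component index below is a first-order pick-freeze Sobol' estimate rather than a Shapley effect (which requires a permutation-based estimator), but the PCA truncation and the explained-variance weighting are the same. The toy functional model and all names are illustrative, not the avalanche model.

```python
import numpy as np

# Hypothetical functional output: n curves over a time grid t
def model(x, t):
    return np.outer(np.sin(x[:, 0]), t) + np.outer(x[:, 1] ** 2, t ** 2)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 30)
n, d = 50_000, 2
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))

YA = model(A, t)
mean = YA.mean(axis=0)
_, sv, Vt = np.linalg.svd(YA - mean, full_matrices=False)   # PCA of outputs
K = 2                                          # PCA truncation argument
weights = sv[:K] ** 2 / np.sum(sv ** 2)        # explained-variance weights
scoresA = (YA - mean) @ Vt[:K].T

agg = np.zeros(d)
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                         # pick-freeze input i
    scoresC = (model(Ci, t) - mean) @ Vt[:K].T
    for k in range(K):
        # first-order index of input i on principal component k
        cov = (np.mean(scoresA[:, k] * scoresC[:, k])
               - scoresA[:, k].mean() * scoresC[:, k].mean())
        agg[i] += weights[k] * cov / scoresA[:, k].var()
```

Since this toy model is additive, the d aggregated indices sum to about 1, with the first input dominating; in the avalanche application the same weighting is applied to Shapley effects per component.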