.. _method-sampling-variance_based_decomp:

"""""""""""""""""""""
variance_based_decomp
"""""""""""""""""""""


Activates global sensitivity analysis based on decomposition of response
variance into contributions from variables


.. toctree::
   :hidden:
   :maxdepth: 1

   method-sampling-variance_based_decomp-drop_tolerance
   method-sampling-variance_based_decomp-vbd_sampling_method


**Specification**

- *Alias:* None

- *Arguments:* None

- *Default:* no variance-based decomposition


**Child Keywords:**

+-------------------------+--------------------+-------------------------+-----------------------------------------------+
| Required/Optional       | Description of     | Dakota Keyword          | Dakota Keyword Description                    |
|                         | Group              |                         |                                               |
+=========================+====================+=========================+===============================================+
| Optional                                     | `drop_tolerance`__      | Suppresses output of sensitivity indices with |
|                                              |                         | values lower than this tolerance              |
+----------------------------------------------+-------------------------+-----------------------------------------------+
| Optional                                     | `vbd_sampling_method`__ | The method to use for variance-based          |
|                                              |                         | decomposition                                 |
+----------------------------------------------+-------------------------+-----------------------------------------------+

.. __: method-sampling-variance_based_decomp-drop_tolerance.html
__ method-sampling-variance_based_decomp-vbd_sampling_method.html


**Description**

Dakota can calculate sensitivity indices through variance-based decomposition
using the keyword ``variance_based_decomp``. These indices indicate how
important the uncertainty in each input variable is in contributing to the
output variance.

*Default Behavior*

Variance-based decomposition is turned off by default.

*Note that specifying this keyword with the default* ``vbd_sampling_method``
*of* ``pick_and_freeze`` *will increase the number of function evaluations
above the number requested with the* ``samples`` *keyword, since replicated
sets of sample values are evaluated.* See the documentation for
``pick_and_freeze`` for details on the increased sample size.

*Expected Output*

When ``variance_based_decomp`` is specified, Sobol' sensitivity indices will
be reported. Sobol' indices represent the percentage of the variance in the
model response that can be attributed to each individual variable.

*Usage Tips*

To obtain sensitivity indices that are reasonably accurate, we recommend that
N, the number of samples, be at least one hundred and preferably several
hundred or several thousand.

**Examples**

.. code-block::

    method,
      sampling
        sample_type lhs
        samples = 100
        variance_based_decomp

**Theory**

In this context, we take sensitivity analysis to be global, not local as when
calculating derivatives of output variables with respect to input variables.
Our definition is similar to that of :cite:p:`Sal04`: "The study of how
uncertainty in the output of a model can be apportioned to different sources
of uncertainty in the model input."

Variance-based decomposition is a way of using sets of samples to understand
how the variance of the output behaves with respect to each input variable.
Two types of variance-based sensitivity indices are commonly computed: main
effect and total effect indices. Main effects (roughly) represent the percent
contribution of each individual variable to the variance in the model
response. Total effects represent the percent contribution of each individual
variable, in combination with all other variables, to the variance in the
model response.
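As a rough numerical illustration of this distinction, the sketch below (not
Dakota code; the toy model :math:`f = x_1(1 + x_2)`, the uniform input ranges,
and the sample sizes are assumptions chosen for clarity) estimates both
indices for :math:`x_1` by brute-force Monte Carlo. The total effect exceeds
the main effect because of the :math:`x_1 x_2` interaction; for a purely
additive model the two would coincide.

.. code-block:: python

    # Illustrative sketch only (not Dakota source code): brute-force Monte Carlo
    # estimates of the main and total effects of x1 for f = x1*(1 + x2), with
    # x1 and x2 independent and uniform on [-1, 1].
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x1, x2: x1 * (1.0 + x2)
    n_outer, n_inner = 2000, 2000

    # Main effect of x1: variance of the conditional mean of f given x1.
    x1 = rng.uniform(-1, 1, n_outer)
    cond_mean = np.array([f(v, rng.uniform(-1, 1, n_inner)).mean() for v in x1])

    # Total effect of x1: expected conditional variance of f given x2
    # (x1 is the only variable left to vary inside the conditional).
    x2 = rng.uniform(-1, 1, n_outer)
    cond_var = np.array([f(rng.uniform(-1, 1, n_inner), v).var() for v in x2])

    var_f = f(rng.uniform(-1, 1, 10**6), rng.uniform(-1, 1, 10**6)).var()
    print("S_1 ~", cond_mean.var() / var_f)  # ~0.75: share of V(f) from x1 alone
    print("T_1 ~", cond_var.mean() / var_f)  # ~1.0: x1 plus its interaction with x2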
For the variable :math:`X_i`, the main and total effect indices are denoted
here :math:`S_i` and :math:`T_i`, respectively, and their formulas are

.. math::

   S_i = \frac{V(E_{\mathbf{X}_{\sim i}}(f(\mathbf{X})|X_i))}{V(f)}

   T_i = \frac{E(V_{X_i}(f(\mathbf{X})|\mathbf{X}_{\sim i}))}{V(f)},

where :math:`\mathbf{X}_{\sim i}` denotes all variables except :math:`X_i`.
A larger sensitivity index means that the uncertainty in the input variable
:math:`X_i` has a larger effect on the variance of the output. More details
on the calculations and interpretation of the sensitivity indices can be
found in :cite:p:`Sal04` and :cite:p:`Weirs10`.

Two methods to compute variance-based sensitivity indices are implemented in
Dakota. The default "pick-and-freeze" method is based on the algorithm
discussed in :cite:p:`Weirs10`. Pick-and-freeze methods are currently the
most popular approach for variance-based sensitivity index computation, but
they incur significant computational cost. These approaches rely on
structured sampling, wherein two independent random sample sets of the input
variables are generated and the random samples of the variable whose
sensitivity index is being computed are then substituted from one sample set
into the other. Specifically, if the user specifies a number of samples, N,
and a number of nondeterministic variables, M, pick-and-freeze variance-based
decomposition requires the evaluation of N*(M+2) samples.

The "binned" method uses the N samples generated during the sampling study
directly to compute variance-based sensitivity indices, but it can only
compute main effect indices; the user is therefore not provided with the
total effect information about the percentage of the model response variance
arising from interactions between each variable and all other variables.
However, the main effect indices can be computed with no further model
evaluations beyond the N samples already specified for the sampling study.
The binned approach is detailed in :cite:p:`Li16`.
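The sketch below outlines the general pick-and-freeze sampling structure
described above. It is not Dakota's implementation; the test function, the
Saltelli-style main effect and Jansen-style total effect estimators, and the
sample sizes are illustrative assumptions. It also shows where the N*(M+2)
evaluation count comes from: the two base sample sets cost 2N evaluations,
and each of the M pick-and-freeze sets costs another N.

.. code-block:: python

    # Illustrative sketch of a pick-and-freeze estimator (not Dakota's code).
    # Two independent N x M sample sets A and B are generated; for each variable i,
    # column i of B is substituted ("picked") into A while the rest stays "frozen".
    import numpy as np

    def model(x):
        # Assumed toy model with interactions; any function of interest would do.
        return x[:, 0] + x[:, 1] ** 2 + x[:, 0] * x[:, 2]

    rng = np.random.default_rng(0)
    N, M = 1000, 3                       # N samples, M input variables

    A = rng.uniform(-1.0, 1.0, (N, M))   # first independent sample set  (N evaluations)
    B = rng.uniform(-1.0, 1.0, (N, M))   # second independent sample set (N evaluations)
    fA, fB = model(A), model(B)
    var_f = np.var(np.concatenate([fA, fB]))

    S = np.zeros(M)                      # main effect estimates
    T = np.zeros(M)                      # total effect estimates
    for i in range(M):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # pick column i from B, freeze the others
        fABi = model(ABi)                # N additional evaluations per variable
        S[i] = np.mean(fB * (fABi - fA)) / var_f         # Saltelli-style estimator
        T[i] = 0.5 * np.mean((fA - fABi) ** 2) / var_f   # Jansen-style estimator

    print("main effects :", S)
    print("total effects:", T)           # total cost: N*(M+2) model evaluations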