.. _method-multilevel_sampling-allocation_target-variance:

""""""""
variance
""""""""


Fit MLMC sample allocation to control the variance of the estimator for the variance.


.. toctree::
   :hidden:
   :maxdepth: 1

   method-multilevel_sampling-allocation_target-variance-optimization


**Specification**

- *Alias:* None

- *Arguments:* None


**Child Keywords:**

+-------------------------+--------------------+--------------------+-----------------------------------------------+
| Required/Optional       | Description of     | Dakota Keyword     | Dakota Keyword Description                    |
|                         | Group              |                    |                                               |
+=========================+====================+====================+===============================================+
| Optional                                     | `optimization`__   | Solve the optimization problem for the sample |
|                                              |                    | allocation by numerical optimization in the   |
|                                              |                    | case of a sampling estimator targeting the    |
|                                              |                    | variance.                                     |
+----------------------------------------------+--------------------+-----------------------------------------------+

.. __: method-multilevel_sampling-allocation_target-variance-optimization.html



**Description**


Computes the variance of the sampling estimator for the variance and determines the sample allocation by solving the corresponding optimization problem. By default, this optimization problem is solved in closed form via an analytical approximation; alternatively, a numerical optimization can be used, see ``optimization``.



**Examples**


The following method block

.. code-block::

    method,
      model_pointer = 'HIERARCH'
        multilevel_sampling
          pilot_samples = 20 seed = 1237
          convergence_tolerance = .01
          allocation_target = variance


uses the variance as the sample allocation target; the per-level sample allocation is driven by the variance of this variance estimator.




**Theory**


A single level unbiased estimator for the variance of a generic QoI at the highest level :math:`M_L`  of the hierarchy can be written as

.. math::  \mathbb{V}ar\left[Q_{M_L}\right] \approx \frac{1}{N_{M_L} - 1} \sum_{i=1}^{N_{M_L}} \left( Q_{M_L}^{(i)} - \hat{Q}_{M_L} \right)^2,

where :math:`\hat{Q}_{M_L}` denotes the sample mean over the :math:`N_{M_L}` samples.

The multilevel extension for this estimator is obtained by writing

.. math::  \mathbb{V}ar\left[Q_L\right] \approx \hat{Q}_{L,2}^{\mathrm{ML}} = \sum_{\ell=0}^L \left( \hat{Q}_{\ell,2} - \hat{Q}_{\ell-1,2} \right), \quad \mathrm{with} \quad \hat{Q}_{-1,2} := 0,

where

.. math::  \hat{Q}_{\ell,2} = \frac{1}{N_\ell - 1} \sum_{i=1}^{N_\ell} \left( Q_\ell^{(i)} - \hat{Q}_\ell \right)^2
   \mathrm{\quad  and \quad}
   \hat{Q}_{\ell - 1,2} = \frac{1}{N_\ell - 1} \sum_{i=1}^{N_\ell} \left( Q_{\ell - 1}^{(i)} - \hat{Q}_{\ell - 1} \right)^2. 
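
For illustration, the following Python sketch (not Dakota code; the array names and synthetic data are purely hypothetical) evaluates the telescoping variance estimator above from per-level sample sets, where each level :math:`\ell > 0` evaluates :math:`Q_\ell` and :math:`Q_{\ell-1}` on the same inputs:

.. code-block:: python

    import numpy as np

    def level_variance_terms(q_fine, q_coarse):
        """Unbiased per-level variance estimators Q_{l,2} and Q_{l-1,2},
        computed from the same N_l samples (q_coarse is None on level 0)."""
        q2_fine = np.var(q_fine, ddof=1)   # 1/(N_l - 1) * sum (Q_l - mean)^2
        q2_coarse = 0.0 if q_coarse is None else np.var(q_coarse, ddof=1)
        return q2_fine, q2_coarse

    def multilevel_variance(samples_per_level):
        """Telescoping sum over levels of (Q_{l,2} - Q_{l-1,2})."""
        estimate = 0.0
        for q_fine, q_coarse in samples_per_level:
            q2_f, q2_c = level_variance_terms(q_fine, q_coarse)
            estimate += q2_f - q2_c
        return estimate

    # Synthetic two-level example: level 1 evaluates the fine and coarse
    # models on the same (new) inputs, independently of the level-0 samples.
    rng = np.random.default_rng(1237)
    q0 = rng.normal(0.0, 1.0, 20)
    q1_coarse = rng.normal(0.0, 1.0, 30)
    q1_fine = q1_coarse + rng.normal(0.0, 0.1, 30)
    print(multilevel_variance([(q0, None), (q1_fine, q1_coarse)]))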

As in the expected value case, we want to obtain an optimal sample allocation per level that minimizes the cost of achieving an estimator with a prescribed variance. For independent samples across levels, the variance of the multilevel estimator :math:`\hat{Q}_{L,2}^{\mathrm{ML}}` is the sum of the variances of its level differences :math:`\hat{Q}_{\ell,2} - \hat{Q}_{\ell-1,2}`; the variance of each single-level estimator can be approximated as

.. math::  \mathbb{V}ar\left[ \hat{Q}_{\ell,2} \right] \approx \frac{1}{N_\ell} \left( \hat{Q}_{\ell,4} - \frac{N_\ell-3}{N_\ell-1} \left(\hat{Q}_{\ell,2}\right)^2 \right), 

where :math:`\hat{Q}_{\ell,4}` denotes the sampling estimator for the fourth-order central moment. For the detailed expression of each term in the variance of the level differences, please refer to the Theory Manual.
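
As a concrete illustration, the short sketch below (a hypothetical helper, not part of Dakota) evaluates this variance-of-variance expression for a single level directly from its samples:

.. code-block:: python

    import numpy as np

    def variance_of_variance(q):
        """Approximate Var[Q_{l,2}] from N_l samples of Q_l using the
        sample variance and the fourth-order central moment."""
        n = len(q)
        q2 = np.var(q, ddof=1)                 # sample variance, Q_{l,2}
        q4 = np.mean((q - np.mean(q)) ** 4)    # fourth central moment, Q_{l,4}
        return (q4 - (n - 3.0) / (n - 1.0) * q2 ** 2) / n

    rng = np.random.default_rng(1237)
    print(variance_of_variance(rng.normal(0.0, 1.0, 20)))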

The final sample allocation is obtained by solving a minimization problem

.. math::  \min\limits_{N_\ell} \sum_{\ell=0}^L \mathcal{C}_{\ell} N_\ell \quad \mathrm{s.t.} \quad \mathbb{V}ar\left[ \hat{Q}_{L,2}^{\mathrm{ML}} \right] = \varepsilon^2/2. 

This optimization problem can be solved in two different ways: via an analytical approximation, or by solving a non-linear optimization problem numerically.
The analytical approximation follows the approach described in [Pisaroni2017] and introduces the helper variable

.. math::  \hat{V}_{2, \ell} := \mathbb{V}ar\left[ \hat{Q}_{\ell,2} \right] \cdot N_\ell,

and the constrained minimization problem is recast using the Lagrangian (with multiplier :math:`\lambda`)

.. math::  f(N_\ell,\lambda) = \sum_{\ell=0}^{L} N_\ell \, \mathcal{C}_{\ell}
                   + \lambda \left( \sum_{\ell=0}^{L} N_\ell^{-1} \hat{V}_{2, \ell} - \varepsilon^2/2 \right). 
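
Setting :math:`\partial f / \partial N_\ell = 0` gives :math:`N_\ell = \sqrt{\lambda}\,\sqrt{\hat{V}_{2,\ell}/\mathcal{C}_\ell}`, and enforcing the variance constraint fixes :math:`\sqrt{\lambda} = \frac{2}{\varepsilon^2} \sum_{k=0}^L \left( \hat{V}_{2,k}\,\mathcal{C}_k \right)^{1/2}`.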

This formulation admits a closed-form solution (as in the expected value case)

.. math::  N_{\ell} = \frac{2}{\varepsilon^2} \left[ \, \sum_{k=0}^L \left( \hat{V}_{2, k} \mathcal{C}_k \right)^{1/2} \right]
               \sqrt{\frac{ \hat{V}_{2, \ell} }{\mathcal{C}_{\ell}}}. 
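
For illustration, the sketch below (assumed per-level inputs, not Dakota internals) evaluates this closed form for given per-level variance contributions :math:`\hat{V}_{2,\ell}`, costs :math:`\mathcal{C}_\ell`, and tolerance :math:`\varepsilon`:

.. code-block:: python

    import numpy as np

    def closed_form_allocation(v2, cost, eps):
        """N_l = (2/eps^2) * sum_k sqrt(V_{2,k} C_k) * sqrt(V_{2,l} / C_l)."""
        v2, cost = np.asarray(v2, dtype=float), np.asarray(cost, dtype=float)
        return 2.0 / eps**2 * np.sum(np.sqrt(v2 * cost)) * np.sqrt(v2 / cost)

    # Hypothetical three-level hierarchy: variance contributions shrink while
    # per-sample cost grows with level; samples would be rounded up in practice.
    print(closed_form_allocation(v2=[1.0e-2, 2.0e-3, 4.0e-4],
                                 cost=[1.0, 8.0, 64.0],
                                 eps=0.05))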

If the option :dakkw:`method-multilevel_sampling-allocation_target-variance-optimization` is specified, the optimization problem above is instead solved numerically, using either OPTPP (the default) or NPSOL (if available).