estimator_variance

Variance of mean estimator within multilevel polynomial chaos

Specification

  • Alias: None

  • Arguments: None

Child Keywords:

| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
|-------------------|----------------------|----------------|----------------------------|
| Optional          |                      | estimator_rate | Rate of convergence of mean estimator within multilevel polynomial chaos |

Description

Multilevel Monte Carlo performs optimal resource allocation based on a known estimator variance for the mean statistic:

\[Var[\hat{Q}] = \frac{\sigma^2_Q}{N}\]
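Under this model, the standard multilevel Monte Carlo allocation chooses the per-level sample counts that minimize total cost for a prescribed variance target. The following Python sketch (illustrative only, not Dakota code; the per-level variances, costs, and tolerance are hypothetical) evaluates that allocation:

import numpy as np

def mlmc_allocation(level_var, level_cost, eps):
    # Standard MLMC allocation: minimize sum_l N_l * C_l subject to
    # sum_l sigma_l^2 / N_l <= eps^2, assuming Var = sigma_l^2 / N_l per level.
    level_var = np.asarray(level_var, dtype=float)
    level_cost = np.asarray(level_cost, dtype=float)
    factor = np.sum(np.sqrt(level_var * level_cost))
    n_l = np.ceil(factor * np.sqrt(level_var / level_cost) / eps**2)
    return n_l.astype(int)

# Illustrative per-level variances, costs, and accuracy target
print(mlmc_allocation(level_var=[1.0, 0.1, 0.01], level_cost=[1.0, 4.0, 16.0], eps=0.01))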

Replacing the simple ensemble average estimator in Monte Carlo with a polynomial chaos estimator results in a different and unknown relationship between the estimator variance and the number of samples. In one approach to multilevel PCE, we can employ a parameterized estimator variance:

\[Var[\hat{Q}] = \frac{\sigma^2_Q}{\gamma N^\kappa}\]

for free parameters \(\gamma\) and \(\kappa\), with default values that may be overridden as part of this specification block.
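Given values for \(\gamma\) and \(\kappa\), this model can be inverted to obtain the number of samples needed to reach a target estimator variance. The Python sketch below is illustrative only (the gamma and kappa values are hypothetical, not Dakota's defaults):

import math

def samples_for_target(sigma2, target_var, gamma=1.0, kappa=2.0):
    # Invert Var = sigma2 / (gamma * N**kappa) for the smallest N meeting target_var.
    return math.ceil((sigma2 / (gamma * target_var)) ** (1.0 / kappa))

# A faster convergence rate (larger kappa) requires far fewer samples than
# the plain Monte Carlo rate kappa = 1 for the same target variance:
print(samples_for_target(sigma2=1.0, target_var=1e-4, kappa=1.0))  # 10000
print(samples_for_target(sigma2=1.0, target_var=1e-4, kappa=2.0))  # 100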

This capability is supported for regression-based PCE approaches (over-determined least squares, under-determined compressed sensing, or orthogonal least interpolation).

In practice, it can be challenging to estimate a smooth convergence rate for estimator variance in the presence of abrupt transitions in the quality of sparse recoveries. As a result, sample allocation by greedy refinement is generally preferred.

Examples

This example starts with sparse recovery for a second-order candidate expansion at each level. As the number of samples at each level is adapted according to the convergence of the estimator variance, the candidate expansion order is incremented as needed to synchronize with the specified collocation ratio.

method,
  model_pointer = 'HIERARCH'
  multilevel_polynomial_chaos
    orthogonal_matching_pursuit
    expansion_order_sequence = 2
    pilot_samples = 10
    collocation_ratio = .9
    allocation_control estimator_variance
      estimator_rate = 2.5
    seed = 1237
    convergence_tolerance = .01
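In this input, estimator_rate = 2.5 overrides the default convergence rate, i.e., the exponent \(\kappa\) in the parameterized estimator variance model above.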