.. _method-multifidelity_function_train-p_refinement:

""""""""""""
p_refinement
""""""""""""


Automatic polynomial order refinement


.. toctree::
   :hidden:
   :maxdepth: 1

   method-multifidelity_function_train-p_refinement-uniform


**Specification**

- *Alias:* None

- *Arguments:* None

- *Default:* no refinement


**Child Keywords:**

+-------------------------+--------------------+--------------------+-----------------------------------------------+
| Required/Optional       | Description of     | Dakota Keyword     | Dakota Keyword Description                    |
|                         | Group              |                    |                                               |
+=========================+====================+====================+===============================================+
| Required                                     | `uniform`__        | Refine an expansion uniformly in all          |
|                                              |                    | dimensions.                                   |
+----------------------------------------------+--------------------+-----------------------------------------------+

.. __: method-multifidelity_function_train-p_refinement-uniform.html



**Description**


The ``p_refinement`` keyword activates automated polynomial order
refinement, which can be either ``uniform`` or ``dimension_adaptive``.

The ``dimension_adaptive`` option is supported for the tensor-product
quadrature and Smolyak sparse grid options, while ``uniform`` is
supported for tensor and sparse grids as well as for regression
approaches (``collocation_points`` or ``collocation_ratio``).

Each of these refinement cases makes use of the ``max_iterations`` and
``convergence_tolerance`` method-independent controls. The former
limits the number of refinement iterations, and the latter terminates
refinement when the two-norm of the change in the response covariance
matrix (or, in goal-oriented approaches, the two-norm of the change in
the statistical quantities of interest (QOI)) falls below the
tolerance.
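
As a rough illustration, a method block combining ``uniform``
refinement with the two convergence controls might look like the
following minimal sketch; the ``collocation_ratio`` value is
illustrative, and the remaining function train controls (starting
order/rank, model pointer, etc.) are omitted::

    method
      multifidelity_function_train
        collocation_ratio = 2.0         # oversampling ratio held fixed during refinement
        p_refinement uniform            # ramp the expansion order each iteration
        max_iterations = 5              # cap on the number of refinement iterations
        convergence_tolerance = 1.e-4   # stop when the change in response covariance is small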

The ``dimension_adaptive`` case can be further specified to utilize
``sobol``, ``decay``, or ``generalized`` refinement controls. The first
two cases employ anisotropic tensor/sparse grids in which the
anisotropic dimension preference (leading to anisotropic
integrations/expansions with differing refinement levels for different
random dimensions) is determined either from total Sobol' indices
computed by variance-based decomposition (``sobol`` case: high indices
result in high dimension preference) or from spectral coefficient decay
rates obtained with a rate estimation technique similar to Richardson
extrapolation (``decay`` case: low decay rates result in high dimension
preference).

In these two cases, as well as in the ``uniform`` refinement case, the
``quadrature_order`` or ``sparse_grid_level`` is ramped by one on each
refinement iteration until either of the two convergence controls is
satisfied. For the ``uniform`` refinement case with regression
approaches, the ``expansion_order`` is ramped by one on each iteration
while the oversampling ratio (either defined by ``collocation_ratio``
or inferred from ``collocation_points`` based on the initial expansion)
is held fixed.

Finally, the ``generalized`` ``dimension_adaptive`` case is the default
adaptive approach; it refers to the generalized sparse grid algorithm,
a greedy approach in which candidate index sets are evaluated for their
impact on the statistical QOI, the most influential sets are selected
and used to generate additional candidates, and the index set frontier
of the sparse grid is evolved in an unstructured and goal-oriented
manner (refer to the User's Manual PCE descriptions for additional
specifics).
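
For contrast with the ``uniform`` case documented on this page, a
sketch of the ``dimension_adaptive`` machinery described above, applied
to a sparse-grid polynomial chaos expansion, might look like the
following; this variant is not a child keyword here, so the exact
keywords should be checked against the ``polynomial_chaos``
specification::

    method
      polynomial_chaos
        sparse_grid_level = 1
        p_refinement dimension_adaptive sobol   # anisotropic preference from total Sobol' indices
        max_iterations = 10
        convergence_tolerance = 1.e-4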

When ``p_refinement`` is active, or when an explicit nested override is
specified, Gauss-Hermite rules are replaced with nested Genz-Keister
rules and Gauss-Legendre rules are replaced with nested Gauss-Patterson
rules, both of which accept lower integrand precision in exchange for
greater point reuse.
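
Assuming the ``nested`` override keyword of the quadrature and sparse
grid expansion methods (it is not part of this page's specification),
an explicit nested override might be requested as in the following
sketch::

    method
      polynomial_chaos
        sparse_grid_level = 2
        nested    # explicitly request nested rules (Genz-Keister / Gauss-Patterson)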