multifidelity_function_train

Multifidelity uncertainty quantification using function train expansions

Specification

  • Alias: None

  • Arguments: None

Child Keywords:

All of the following child keywords are optional. The keywords diagonal_covariance and full_covariance form a group (Covariance Type) from which at most one may be selected.

  • p_refinement: Automatic polynomial order refinement
  • max_refinement_iterations: Maximum number of expansion refinement iterations
  • convergence_tolerance: Stopping criterion based on objective function or statistics convergence
  • metric_scale: Define scaling of statistical metrics when adapting UQ surrogates
  • statistics_mode: Type of statistical metric roll-up for multifidelity UQ methods
  • allocation_control: Sample allocation approach for multifidelity expansions
  • discrepancy_emulation: Formulation for emulation of model discrepancies
  • rounding_tolerance: Accuracy tolerance used to guide rounding during rank adaptation
  • arithmetic_tolerance: Secondary rounding tolerance used for post-processing
  • regression_type: Type of solver for forming function train approximations by regression
  • max_solver_iterations: Maximum iterations in determining polynomial coefficients
  • max_cross_iterations: Maximum number of iterations for cross-approximation during a rank adaptation
  • solver_tolerance: Convergence tolerance for the optimizer used during the regression solve
  • response_scaling: Perform bounds-scaling on response values prior to surrogate emulation
  • tensor_grid: Use sub-sampled tensor-product quadrature points to build a polynomial chaos expansion
  • collocation_points_sequence: Sequence of collocation point counts used in a multi-stage expansion
  • collocation_ratio: Set the number of points used to build a PCE via regression to be proportional to the number of terms in the expansion
  • start_order_sequence: Sequence of start orders used in a multi-stage expansion
  • adapt_order: Activate adaptive procedure for determining the best basis order
  • kick_order: Increment used when adapting the basis order in function train methods
  • max_order: Maximum polynomial order of each univariate function within the functional tensor train
  • max_cv_order_candidates: Limit the number of cross-validation candidates for basis order
  • start_rank_sequence: Sequence of start ranks used in a multi-stage expansion
  • adapt_rank: Activate adaptive procedure for determining the best rank representation
  • kick_rank: Increment in rank employed during each iteration of the rank adaptation
  • max_rank: Limit the maximum rank that is explored during a rank adaptation
  • max_cv_rank_candidates: Limit the number of cross-validation candidates for rank
  • samples_on_emulator: Number of samples at which to evaluate an emulator (surrogate)
  • sample_type: Selection of sampling strategy
  • rng: Selection of a random number generator
  • probability_refinement: Allow refinement of probability and generalized reliability results using importance sampling
  • final_moments: Output moments of the specified type and include them within the set of final statistics
  • response_levels: Values at which to estimate desired statistics for each response
  • probability_levels: Specify probability levels at which to estimate the corresponding response value
  • reliability_levels: Specify reliability levels at which the response values will be estimated
  • gen_reliability_levels: Specify generalized reliability levels at which to estimate the corresponding response value
  • distribution: Selection of cumulative or complementary cumulative functions
  • variance_based_decomp: Activates global sensitivity analysis based on decomposition of response variance into main, interaction, and total effects
  • diagonal_covariance (Covariance Type): Display only the diagonal terms of the covariance matrix
  • full_covariance (Covariance Type): Display the full covariance matrix
  • import_approx_points_file: Filename for points at which to evaluate the PCE/SC surrogate
  • export_approx_points_file: Output file for surrogate model value evaluations
  • seed_sequence: Sequence of seed values for multi-stage random sampling
  • fixed_seed: Reuses the same seed value for multiple random sampling sets
  • model_pointer: Identifier for model block to be used by a method

Description

As described in the method-function_train method and the model-surrogate-global-function_train model, the function train (FT) approximation is a polynomial expansion that exploits low-rank structure within the mapping from input random variables to output quantities of interest (QoI). For multilevel and multifidelity function train approximations, we decompose this expansion into several constituent expansions, one per model form or solution control level: independent function train approximations are constructed for the low-fidelity (or coarse-resolution) model and for one or more levels of model discrepancy.

In a three-model case with low-fidelity (L), medium-fidelity (M), and high-fidelity (H) models and an additive discrepancy approach, we can denote this as:

\[Q^H \approx \hat{Q}_{r_L}^L + \hat{\Delta}_{r_{ML}}^{ML} + \hat{\Delta}_{r_{HM}}^{HM}\]

where \(\Delta^{ij}\) denotes a discrepancy expansion computed from \(Q^i - Q^j\), and reduced-rank representations of these discrepancies may be targeted (\(r_{HM} < r_{ML} < r_L\)).
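
To see why the additive decomposition recovers the high-fidelity QoI, note that with exact (unreduced) discrepancies the sum telescopes:

\[\Delta^{ML} = Q^M - Q^L, \qquad \Delta^{HM} = Q^H - Q^M, \qquad Q^L + \Delta^{ML} + \Delta^{HM} = Q^H,\]

and the multifidelity estimate then replaces each term with its function train approximation at the corresponding rank.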

In multifidelity approaches, sample allocation for the constituent expansions can be performed with no adaptive refinement, with individual adaptive refinement, or with integrated adaptive refinement, as described in method-multifidelity_function_train-allocation_control.
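
As an illustrative sketch only (not taken from the Dakota example suite), a method specification might combine the sequence and rank-adaptation keywords documented above. The numeric values and the model identifier 'HIERARCH' are assumed placeholders, and the companion model, variables, interface, and responses blocks are omitted:

    method
      multifidelity_function_train
        model_pointer = 'HIERARCH'                 # assumed id of the ensemble surrogate model block
        collocation_points_sequence = 200 100 50   # build-point counts, one per constituent expansion
        start_rank_sequence  = 2 2 2               # initial FT ranks, one per constituent expansion
        adapt_rank  kick_rank = 1  max_rank = 10   # explore ranks between the start rank and max_rank
        start_order_sequence = 3 3 3               # initial univariate basis orders
        seed_sequence = 1234
        samples_on_emulator = 100000               # samples drawn on the FT emulator to compute statistics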

Expected HDF5 Output

If Dakota was built with HDF5 support and run with the environment-results_output-hdf5 keyword, this method writes the following results to HDF5:

In addition, the execution group has the attribute equiv_hf_evals, which records the equivalent number of high-fidelity evaluations.