multilevel_function_train
Multilevel uncertainty quantification using function train expansions
Specification
Alias: None
Arguments: None
Child Keywords (columns: Required/Optional, Description of Group, Dakota Keyword, Dakota Keyword Description; keyword names did not survive extraction):

- Optional: Number of iterations allowed for optimizers and adaptive UQ methods
- Optional: Sample allocation approach for multilevel expansions
- Optional: Stopping criterion based on objective function or statistics convergence
- Optional: Define scaling of statistical metrics when adapting UQ surrogates
- Optional: Formulation for emulation of model discrepancies
- Optional: An accuracy tolerance that is used to guide rounding during rank adaptation
- Optional: A secondary rounding tolerance used for post-processing
- Optional: Type of solver for forming function train approximations by regression
- Optional: Maximum iterations in determining polynomial coefficients
- Optional: Maximum number of iterations for cross approximation during a rank adaptation
- Optional: Convergence tolerance for the optimizer used during the regression solve
- Optional: Perform bounds-scaling on response values prior to surrogate emulation
- Optional: Use sub-sampled tensor-product quadrature points to build a polynomial chaos expansion
- Optional: Sequence of collocation point counts used in a multi-stage expansion
- Optional: Set the number of points used to build a PCE via regression to be proportional to the number of terms in the expansion
- Optional: Sequence of start orders used in a multi-stage expansion
- Optional: Activate adaptive procedure for determining the best basis order
- Optional: Increment used when adapting the basis order in function train methods
- Optional: Maximum polynomial order of each univariate function within the functional tensor train
- Optional: Limit the number of cross-validation candidates for basis order
- Optional: Sequence of start ranks used in a multi-stage expansion
- Optional: Activate adaptive procedure for determining the best rank representation
- Optional: The increment in rank employed during each iteration of the rank adaptation
- Optional: Limit the maximum rank that is explored during a rank adaptation
- Optional: Limit the number of cross-validation candidates for rank
- Optional: Number of samples at which to evaluate an emulator (surrogate)
- Optional: Selection of sampling strategy
- Optional: Selection of a random number generator
- Optional: Allow refinement of probability and generalized reliability results using importance sampling
- Optional: Output moments of the specified type and include them within the set of final statistics
- Optional: Values at which to estimate desired statistics for each response
- Optional: Specify probability levels at which to estimate the corresponding response value
- Optional: Specify reliability levels at which the response values will be estimated
- Optional: Specify generalized reliability levels at which to estimate the corresponding response value
- Optional: Selection of cumulative or complementary cumulative functions
- Optional: Activate global sensitivity analysis based on decomposition of response variance into main, interaction, and total effects
- Optional (Choose One), Covariance Type:
  - Display only the diagonal terms of the covariance matrix
  - Display the full covariance matrix
- Optional: Filename for points at which to evaluate the PCE/SC surrogate
- Optional: Output file for surrogate model value evaluations
- Optional: Sequence of seed values for multi-stage random sampling
- Optional: Reuse the same seed value for multiple random sampling sets
- Optional: Identifier for the model block to be used by a method
Description
As described in the function_train method and the function_train model, the function train (FT) approximation is a polynomial expansion that exploits low-rank structure within the mapping from input random variables to output quantities of interest (QoI). For multilevel and multifidelity function train approximations, we decompose this expansion into several constituent expansions, one per model form or solution control level, where independent function train approximations are constructed for the low-fidelity/coarse-resolution model and one or more levels of model discrepancy.

In a three-model case with low-fidelity (L), medium-fidelity (M), and high-fidelity (H) models and an additive discrepancy approach, we can denote this as:

\[ Q^H \;\approx\; \hat{Q}^{L}_{r_L} + \hat{\Delta}^{ML}_{r_{ML}} + \hat{\Delta}^{HM}_{r_{HM}} \]

where \(\Delta^{ij}\) represents a discrepancy expansion computed from \(Q^i - Q^j\), and reduced-rank representations of these discrepancies may be targeted ( \(r_{HM} < r_{ML} < r_L\) ).
In multilevel approaches, sample allocation for the constituent expansions is performed as described in allocation_control.
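A short input fragment illustrates how such a multilevel specification can look in practice. This is a minimal sketch: the keyword spellings (e.g. collocation_points_sequence, start_order_sequence, start_rank_sequence, seed_sequence) and all numeric values are inferred from the child-keyword descriptions above and common Dakota conventions, and should be checked against the individual keyword pages for your Dakota version.

```
method
  multilevel_function_train
    collocation_points_sequence = 100 50 25   # per-level build points (illustrative counts)
    start_order_sequence = 3 2 2              # initial univariate basis orders per level
    start_rank_sequence  = 4 3 2              # initial FT ranks per level
    adapt_rank                                # activate adaptive rank determination
    kick_rank = 1                             # rank increment per adaptation iteration
    max_rank  = 10                            # cap on the rank explored during adaptation
    seed_sequence = 1234 2345 3456            # per-level random seeds
    model_pointer = 'HIERARCH'                # identifier of the (hypothetical) model block
```

The sequences are ordered from the coarsest/cheapest level to the finest, consistent with allocating more build points and higher ranks to the low-fidelity model than to the discrepancy levels.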
Expected HDF5 Output
If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes the following results to HDF5:

- Integration and Expansion Moments (expansion moments only)

In addition, the execution group has the attribute equiv_hf_evals, which records the equivalent number of high-fidelity evaluations.
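HDF5 results are produced only when they are requested in the environment block; a minimal sketch, assuming the standard environment keywords:

```
environment
  results_output      # request results output from the study
    hdf5              # write results (including equiv_hf_evals) to an HDF5 file
```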