Organization of Results
Dakota currently provides complete or nearly complete coverage of results from sampling, optimization and calibration methods, parameter studies, and stochastic expansions. Coverage will continue to expand in future releases to include not only the results of all methods, but also other potentially useful information such as interface evaluations and model transformations.
Methods in Dakota have a character string Id and are executed by Dakota one or more times. (Methods are executed more than once in studies that include a nested model, for example.) The Id may be provided by the user in the input file using the id_method keyword, or it may be automatically generated by Dakota. Dakota uses the label NO_METHOD_ID for methods that are specified in the input file without an id_method, and NOSPEC_METHOD_ID_<N> for methods that it generates for its own internal use. The <N> in the latter case is an incrementing integer that begins at 1.
The results for the <N>th execution of a method that has the label <method Id> are stored in the group
/methods/<method Id>/results/execution:<N>/
The /methods group is always present in Dakota HDF5 files, provided at least one method added results to the output. (In a future Dakota release, the top level groups /interfaces and /models will be added.) The group execution:1 also is always present, even if there is only a single execution.
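The path scheme above can be sketched as a small helper. This is illustrative only: the function name and argument handling are not part of Dakota, just a way to make the labeling rules concrete.

```python
def results_group_path(method_id=None, execution=1, nospec_counter=None):
    """Build the HDF5 group path for a method's results (illustrative sketch).

    method_id: user-supplied id_method label, or None if none was given.
    nospec_counter: set to an integer N to mimic Dakota's internally
    generated NOSPEC_METHOD_ID_<N> labels.
    """
    if nospec_counter is not None:
        label = f"NOSPEC_METHOD_ID_{nospec_counter}"
    elif method_id is None:
        label = "NO_METHOD_ID"
    else:
        label = method_id
    return f"/methods/{label}/results/execution:{execution}"

print(results_group_path("my_sampling", 2))
# /methods/my_sampling/results/execution:2
```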
The groups and datasets for each type of result that Dakota is currently capable of storing are described in the following sections. Every dataset is documented in its own table. These tables include:
- A brief description of the dataset.
- The location of the dataset relative to /methods/<method Id>/results/execution:<N>. This path may include both literal text that is always present and replacement text. Replacement text is <enclosed in angle brackets and italicized>. Two examples of replacement text are <response descriptor> and <variable descriptor>, which indicate that the name of a Dakota response or variable makes up a portion of the path.
- Clarifying notes, where appropriate.
- The type (String, Integer, or Real) of the information in the dataset.
- The shape of the dataset; that is, the number of dimensions and the size of each dimension.
- A description of the dataset’s scales, which includes:
  - The dimension of the dataset that the scale belongs to.
  - The type (String, Integer, or Real) of the information in the scale.
  - The label or name of the scale.
  - The contents of the scale. Contents that appear in plain text are literal and will always be present in a scale. Italicized text describes content that varies.
  - Notes that provide further clarification about the scale.
- A description of the dataset’s attributes, which are key:value pairs that provide helpful context for the dataset.
The Expected Output section of each method’s keyword documentation indicates the kinds of output, if any, that the method currently can write to HDF5. These are typically in the form of bulleted lists with clarifying notes that refer back to the sections that follow.
Study Metadata
Several pieces of information about the Dakota study are stored as attributes of the top-level HDF5 root group (“/”). These include:
- dakota_version (String): Version of Dakota used to run the study
- dakota_revision (String): Dakota version control information
- output_version (String): Version of the output file
- input (String): Dakota input file
- top_method (String): Id of the top-level method
- total_cpu_time (Real): Combined parent and child CPU time in seconds
- parent_cpu_time (Real): Parent CPU time in seconds (when Dakota is built with UTILIB)
- child_cpu_time (Real): Child CPU time in seconds (when Dakota is built with UTILIB)
- total_wallclock_time (Real): Total wallclock time in seconds (when Dakota is built with UTILIB)
- mpi_init_wallclock_time (Real): Wallclock time up to MPI_Init in seconds (when Dakota is built with UTILIB and run in parallel)
- run_wallclock_time (Real): Wallclock time since MPI_Init in seconds (when Dakota is built with UTILIB and run in parallel)
- mpi_wallclock_time (Real): Wallclock time since MPI_Init in seconds (when Dakota is not built with UTILIB and run in parallel)
A Note about Variables Storage
Variables in most Dakota output (e.g. tabular data files) and input (e.g. imported data to construct surrogates) are listed in “input spec” order. (The variables keyword section is arranged by input spec order.) In this ordering, they are sorted first by function:
- Design
- Aleatory
- Epistemic
- State
And within each of these categories, they are sorted by domain:
- Continuous
- Discrete integer (sets and ranges)
- Discrete string
- Discrete real
A shortcoming of HDF5 is that datasets are homogeneous; for example, string- and real-valued data cannot readily be stored in the same dataset. As a result, Dakota flips “input spec” order for HDF5 and sorts first by domain, then by function when storing variable information. When applicable, there may be as many as four datasets to store variable information: one to store continuous variables, another to store discrete integer variables, and so on. Within each of these, variables are ordered by function.
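The two orderings can be sketched as follows. The variable records, category lists, and descriptors below are invented for illustration; only the sort-key logic (function-then-domain versus domain-then-function) reflects the scheme described above.

```python
# Hypothetical variable records: (descriptor, function, domain).
FUNCTION_ORDER = ["design", "aleatory", "epistemic", "state"]
DOMAIN_ORDER = ["continuous", "discrete_integer", "discrete_string", "discrete_real"]

variables = [
    ("x1", "design", "continuous"),
    ("n1", "aleatory", "discrete_integer"),
    ("s1", "state", "continuous"),
    ("u1", "aleatory", "continuous"),
]

# "Input spec" order: sort by function first, then domain.
input_spec = sorted(variables, key=lambda v: (FUNCTION_ORDER.index(v[1]),
                                              DOMAIN_ORDER.index(v[2])))

# HDF5 storage order: domain first, then function -- one dataset per domain.
by_domain = {d: [] for d in DOMAIN_ORDER}
for desc, func, dom in sorted(variables,
                              key=lambda v: (DOMAIN_ORDER.index(v[2]),
                                             FUNCTION_ORDER.index(v[1]))):
    by_domain[dom].append(desc)

print([v[0] for v in input_spec])   # ['x1', 'u1', 'n1', 's1']
print(by_domain["continuous"])      # ['x1', 'u1', 's1']
```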
Sampling Moments
sampling produces moments (e.g. mean, standard deviation or variance) of all responses, as well as 95% lower and upper confidence intervals for the 1st and 2nd moments. These are stored as described below. When sampling is used in incremental mode by specifying refinement_samples, all results, including the moments group, are placed within groups named increment:<N>, where <N> indicates the increment number beginning with 1.
Moments

Description: 1st through 4th moments for each response
Location: [increment:<N>]/moments/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: length of 4
Type: Real
Scales:
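As context for the four stored moments, here is a rough sketch of how such moments could be computed from a set of samples. The estimators below (population-style, excess kurtosis) are an assumption for illustration; Dakota's own estimator conventions are documented with the sampling keyword and may differ.

```python
import math

def four_moments(samples):
    """Mean, standard deviation, skewness, and excess kurtosis of a sample.

    Illustrative sketch using population-style standardized central
    moments; Dakota's estimators may apply different bias corrections.
    """
    n = len(samples)
    mean = sum(samples) / n
    central = lambda k: sum((x - mean) ** k for x in samples) / n
    var = central(2)
    std = math.sqrt(var)
    skew = central(3) / std ** 3
    kurt = central(4) / var ** 2 - 3.0
    return [mean, std, skew, kurt]

print(four_moments([1.0, 2.0, 3.0, 4.0]))
```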

Moment Confidence Intervals

Description: Lower and upper 95% confidence intervals on the 1st and 2nd moments
Location: moment_confidence_intervals/<response descriptor>
Shape: 2-dimensional: 2x2
Type: Real
Scales:

Correlations
A few different methods produce information about the correlations between pairs of variables and responses (collectively: factors). The four tables in this section describe how correlation information is stored. One important note is that HDF5 has no special, native type for symmetric matrices, and so the simple correlations and simple rank correlations are stored in dense 2D datasets.
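A dense symmetric correlation matrix over all factors can be sketched in a few lines. The factor names and data below are invented for illustration; only the layout (a full N-by-N matrix with unit diagonal, stored densely) reflects the storage described above.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Factors = variables followed by responses (hypothetical sample columns).
factors = {
    "x1": [0.1, 0.4, 0.8, 0.9],
    "f":  [0.2, 0.5, 0.7, 1.0],
}
names = list(factors)
# The full dense matrix is stored even though it is symmetric.
matrix = [[pearson(factors[a], factors[b]) for b in names] for a in names]
```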
Simple Correlations

Description: Simple correlation matrix
Location: [increment:<N>]/simple_correlations
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 2-dimensional: number of factors by number of factors
Type: Real
Scales:

Simple Rank Correlations

Description: Simple rank correlation matrix
Location: [increment:<N>]/simple_rank_correlations
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 2-dimensional: number of factors by number of factors
Type: Real
Scales:

Partial Correlations

Description: Partial correlations
Location: [increment:<N>]/partial_correlations/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of variables
Type: Real
Scales:

Partial Rank Correlations

Description: Partial rank correlations
Location: [increment:<N>]/partial_rank_correlations/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of variables
Type: Real
Scales:

Probability Density
Some aleatory UQ methods estimate the probability density of responses.
Probability Density

Description: Probability density of a response
Location: [increment:<N>]/probability_density/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of bins in the probability density
Type: Real
Scales:

Level Mappings
Aleatory UQ methods can calculate level mappings (from user-specified probability, reliability, or generalized reliability to response, or vice versa).
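One direction of such a mapping, probability level to response level, can be sketched with an empirical CDF. The interpolation and estimator choices below are assumptions for illustration; Dakota's own level-mapping computation may differ.

```python
def response_level_for_probability(samples, p):
    """Empirical-CDF sketch of a probability-level -> response-level mapping.

    Returns the smallest sampled response value whose empirical CDF
    reaches probability p (no interpolation; illustrative only).
    """
    ordered = sorted(samples)
    index = max(0, min(len(ordered) - 1, int(p * len(ordered)) - 1))
    return ordered[index]

samples = [0.3, 1.2, 0.7, 2.5, 1.9, 0.1, 1.4, 2.0, 0.9, 1.1]
print(response_level_for_probability(samples, 0.5))
```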
Probability Levels

Description: Response levels corresponding to user-specified probability levels
Location: [increment:<N>]/probability_levels/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of requested levels for the response
Type: Real
Scales:

Reliability Levels

Description: Response levels corresponding to user-specified reliability levels
Location: [increment:<N>]/reliability_levels/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of requested levels for the response
Type: Real
Scales:

Generalized Reliability Levels

Description: Response levels corresponding to user-specified generalized reliability levels
Location: [increment:<N>]/gen_reliability_levels/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of requested levels for the response
Type: Real
Scales:

Response Levels

Description: Probability, reliability, or generalized reliability levels corresponding to user-specified response levels
Location: [increment:<N>]/response_levels/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: number of requested levels for the response
Type: Real
Scales:

Variance-Based Decomposition (Sobol’ Indices)
Dakota’s sampling method can produce main and total effects; stochastic expansions (polynomial_chaos, stoch_collocation) additionally can produce interaction effects.
Main Effects

Description: First-order Sobol’ indices
Location: main_effects/<response descriptor>
Shape: 1-dimensional: number of variables
Type: Real
Scales:

Total Effects

Description: Total-effect Sobol’ indices
Location: total_effects/<response descriptor>
Shape: 1-dimensional: number of variables
Type: Real
Scales:

Each order (pairwise, 3-way, 4-way, etc.) of interaction is stored in a separate dataset. The scales are unusual in that they are two-dimensional in order to contain the labels of the variables that participate in each interaction.
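The shape of such a two-dimensional scale can be illustrated by enumerating the variable labels for each pairwise interaction. The variable names are invented, and which interactions are actually stored depends on the expansion; this only shows why each scale row holds N labels.

```python
from itertools import combinations

variables = ["x1", "x2", "x3", "x4"]  # hypothetical variable descriptors
order = 2
# Each row of the (hypothetical) 2-D scale names the variables in one interaction.
interaction_labels = [list(combo) for combo in combinations(variables, order)]
print(len(interaction_labels))  # 6 pairwise interactions for 4 variables
```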
Interaction Effects

Description: Sobol’ indices for interactions
Location: order_<N>_interactions/<response descriptor>
Shape: 1-dimensional: number of <N>th-order interactions
Type: Real
Scales:

Integration and Expansion Moments
Stochastic expansion methods can obtain moments in two ways.
Integration Moments

Description: Moments obtained via integration
Location: integration_moments/<response descriptor>
Shape: 1-dimensional: length of 4
Type: Real
Scales:

Expansion Moments

Description: Moments obtained via expansion
Location: expansion_moments/<response descriptor>
Shape: 1-dimensional: length of 4
Type: Real
Scales:

Extreme Responses
sampling with epistemic variables produces extreme values (minimum and maximum) for each response.
Extreme Responses

Description: The sample minimum and maximum of each response
Location: [increment:<N>]/extreme_responses/<response descriptor>
Notes: The [increment:<N>] group is present only for sampling with refinement
Shape: 1-dimensional: length of 2
Type: Real
Scales:

Parameter Sets
All parameter studies (vector_parameter_study, list_parameter_study, multidim_parameter_study, centered_parameter_study) record tables of evaluations (parameter-response pairs), similar to Dakota’s tabular output file. Centered parameter studies additionally store evaluations in an order that is more natural to interpret, which is described below.
In the tabular-like listing, variables are stored according to the scheme described in a previous section.
Parameter Sets

Description: Parameter study evaluations in a tabular-like listing
Location: parameter_sets/{continuous_variables, discrete_integer_variables, discrete_string_variables, discrete_real_variables, responses}
Shape: 2-dimensional: number of evaluations by number of variables or responses
Type: Real, String, or Integer, as applicable
Scales:

Variable Slices
Centered parameter studies store “slices” of the tabular data that make it more convenient to evaluate the effect of each variable on each response. The steps for each individual variable, including the initial or center point, and the corresponding responses are stored in separate groups.
Variable Slices

Description: Steps, including center/initial point, for a single variable
Location: variable_slices/<variable descriptor>/steps
Shape: 1-dimensional: number of user-specified steps for this variable
Type: Real, String, or Integer, as applicable
Variable Slices: Responses

Description: Responses for variable slices
Location: variable_slices/<variable descriptor>/responses
Shape: 2-dimensional: number of evaluations by number of responses
Type: Real
Scales:

Best Parameters
Dakota’s optimization and calibration methods report the parameters at the best point (or points, for multiple final solutions) discovered. These are stored using the scheme described in the variables section. When more than one solution is reported, the best parameters are nested in groups named set:<N>, where <N> is an integer numbering the set and beginning with 1.
State (and other inactive variables) are reported when using objective functions and for some calibration studies. However, when using configuration variables in a calibration, state variables are suppressed.
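The optional set:<N> nesting can be sketched with a small path helper. The function name and arguments are illustrative, not part of Dakota; only the rule (the set group appears only with multiple final solutions) comes from the description above.

```python
def best_parameters_path(domain, num_solutions=1, set_index=None):
    """Path to a best-parameters dataset, relative to the execution group.

    Illustrative helper: the set:<N> group appears only when a method
    reports more than one final solution.
    """
    base = f"best_parameters/{domain}"
    if num_solutions > 1:
        return f"set:{set_index}/{base}"
    return base

print(best_parameters_path("continuous"))           # best_parameters/continuous
print(best_parameters_path("discrete_real", 3, 2))  # set:2/best_parameters/discrete_real
```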
Best Parameters

Description: Best parameters discovered by optimization or calibration
Location: [set:<N>]/best_parameters/{continuous, discrete_integer, discrete_string, discrete_real}
Notes: The [set:<N>] group is present only when multiple final solutions are reported
Shape: 1-dimensional: number of variables
Type: Real, String, or Integer, as applicable
Scales:

Best Objective Functions
Dakota’s optimization methods report the objective functions at the best point (or points, for multiple final solutions) discovered. When more than one solution is reported, the best objective functions are nested in groups named set:<N>, where <N> is an integer numbering the set and beginning with 1.
Best Objective Functions

Description: Best objective functions discovered by optimization
Location: [set:<N>]/best_objective_functions
Notes: The [set:<N>] group is present only when multiple final solutions are reported
Shape: 1-dimensional: number of objective functions
Type: Real
Scales:

Best Nonlinear Constraints
Dakota’s optimization and calibration methods report the nonlinear constraints at the best point (or points, for multiple final solutions) discovered. When more than one solution is reported, the best constraints are nested in groups named set:<N>, where <N> is an integer numbering the set and beginning with 1.
Best Nonlinear Constraints

Description: Best nonlinear constraints discovered by optimization or calibration
Location: [set:<N>]/best_constraints
Notes: The [set:<N>] group is present only when multiple final solutions are reported
Shape: 1-dimensional: number of nonlinear constraints
Type: Real
Scales:

Calibration
When using calibration terms with an optimization method, or when using a nonlinear least squares method such as nl2sol, Dakota reports residuals and residual norms for the best point (or points, for multiple final solutions) discovered.
Best Residuals

Description: Best residuals discovered
Location: best_residuals
Shape: 1-dimensional: number of residuals
Type: Real
Best Residual Norm

Description: Norm of the best residuals discovered
Location: best_norm
Shape: Scalar
Type: Real
Parameter Confidence Intervals
Least squares methods (nl2sol, nlssol_sqp, optpp_g_newton) compute confidence intervals on the calibration parameters.
Parameter Confidence Intervals

Description: Lower and upper confidence intervals on calibrated parameters
Location: confidence_intervals
Notes: The confidence intervals are not stored when there is more than one experiment
Shape: 2-dimensional: 2 by number of parameters
Type: Real
Scales:

Best Model Responses (without configuration variables)
When performing calibration with experimental data (but no configuration variables), Dakota records, in addition to the best residuals, the best original model responses.
Best Model Responses

Description: Original model responses for the best residuals discovered
Location: best_model_responses
Shape: 1-dimensional: number of model responses
Type: Real
Scales:

Best Model Responses (with configuration variables)
When performing calibration with experimental data that includes configuration variables, Dakota reports the best model responses for each experiment. These results include the configuration variables, stored in the scheme described in the variables section, and the model responses.
Best Configuration Variables for Experiment

Description: Configuration variables associated with experiment <N>
Location: best_model_responses/experiment:<N>/{continuous_config_variables, discrete_integer_config_variables, discrete_string_config_variables, discrete_real_config_variables}
Shape: 1-dimensional: number of variables
Type: Real, String, or Integer, as applicable
Scales:

Best Model Responses for Experiment

Description: Original model responses for the best residuals discovered
Location: best_model_responses/experiment:<N>/responses
Shape: 1-dimensional: number of model responses
Type: Real
Scales:

Multistart and Pareto Set
The multi_start and pareto_set methods are meta-iterators that control multiple optimization sub-iterators. For both methods, Dakota stores the results of the sub-iterators (best parameters and best results). For multi_start, Dakota additionally stores the initial points, and for pareto_set, it stores the objective function weights.
Starting Points (multi_start)

Description: Starting points for multi_start
Location: starting_points/continuous
Notes: Currently, only continuous starting points are supported by multi_start
Shape: 2-dimensional: number of sets by number of variables
Type: Real
Scales:

Weights (pareto_set)

Description: Response weights for pareto_set
Location: weights
Shape: 2-dimensional: number of sets by number of responses
Type: Real
Scales:

Best Parameters (multi_start or pareto_set)

Description: Best parameters discovered by multi_start or pareto_set
Location: best_parameters/{continuous, discrete_integer, discrete_string, discrete_real}
Shape: 2-dimensional: number of sets by number of variables
Type: Real, String, or Integer, as applicable
Scales:

Best Responses (multi_start or pareto_set)

Description: Best responses for multi_start and pareto_set
Location: best_responses
Shape: 2-dimensional: number of sets by number of responses
Type: Real
Scales:
