Version 6.18 (2023/05/15)

Highlight: Generalized Approximate Control Variate Method for Multifidelity Sampling

Dakota can now search over directed acyclic graphs to identify the best model inter-relationships for multifidelity sampling.

Enabling / Accessing: As part of the approximate_control_variate (ACV) method for multifidelity sampling, the new search_model_graphs option activates the generalized ACV capability ([BLWL22]), which identifies the most performant set of control variate pairings among the models in the multifidelity ensemble.

Documentation: Refer to the DAG recursion types under search_model_graphs.
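
For intuition only, the following Python sketch (not Dakota code) illustrates the kind of search space involved: each low-fidelity model selects a control variate "parent" node, and only assignments that form a directed acyclic graph reaching the truth model are admissible. The function names and the single-parent enumeration scheme are simplifications introduced here; the actual admissible set depends on the selected recursion type, and Dakota scores each graph by estimator performance, which is not shown.

    from itertools import product

    def _reaches_root(node, parent):
        """Follow parent links from node; True iff node 0 is reached without a cycle."""
        seen = set()
        while node != 0:
            if node in seen:
                return False
            seen.add(node)
            node = parent[node]
        return True

    def admissible_graphs(n_lofi):
        """Enumerate control variate pairings for low-fidelity models 1..n_lofi.

        Node 0 is the truth (highest-fidelity) model.  Each low-fidelity model
        selects one parent node; assignments containing a directed cycle
        (i.e., never reaching the truth model) are discarded.
        """
        lofi = list(range(1, n_lofi + 1))
        for parents in product(range(n_lofi + 1), repeat=n_lofi):
            assignment = dict(zip(lofi, parents))
            if all(_reaches_root(m, assignment) for m in lofi):
                yield assignment

    graphs = list(admissible_graphs(3))
    print(len(graphs), "admissible graphs for one truth model plus 3 low-fidelity models")
    # {1: 0, 2: 0, 3: 0} is the peer ("ACV-like") graph;
    # {1: 0, 2: 1, 3: 2} is the fully hierarchical ("MFMC-like") chain.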

Highlight: Updated User Resources

Dakota’s website has received a refresh. Documentation has moved to GitHub.io and Dakota downloads are now offered as GitHub Releases.

Enabling / Accessing: Visit the refreshed Dakota website for links to the GitHub.io documentation and the GitHub Releases download pages.

Improvements by Category

Interfaces, Input/Output

  • dprepro/pyprepro command delimiters can be specified within templates instead of only as command line arguments.

Models

  • Consolidation of ensemble models: Input specification and underlying C++ model classes for hierarchical and non-hierarchical multifidelity ensembles have been consolidated into a single ensemble specification and C++ class. This removes previous iterator-model alignment constraints and allows retirement of the two-model control variate Monte Carlo (CVMC) implementation, which previously required a hierarchical model; CVMC is now subsumed by the multi-model multifidelity Monte Carlo (MFMC) implementation, which previously required a non-hierarchical model.

  • Model recasting for alternate variable views: Active variable views are normally configured for the type of study or by explicit user override, but can now be recast for special use cases where the native variable view is insufficient. These recastings occur behind the scenes and can enable, for example, an all-view surrogate import within an active-view inference.

Optimization Methods

UQ Methods

  • Within Bayesian inference, one can now import a PCE emulator (defined by its coefficients and multi-index), allowing greater reuse of offline emulator construction (see the first sketch after this list). In particular, an emulator that is complex or expensive to build (e.g., adaptively refined, cross-validated, multifidelity) can be constructed once and then reused for multiple inference studies while varying priors, observational datasets, etc.

  • These imported emulators can now span both calibration and configuration variables through the use of variable view recasting (see Models above).

  • Tolerance intervals [JR20] can now be computed using the sampling method (see the second sketch after this list).
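
As a conceptual illustration of what an imported PCE emulator represents, the following Python sketch evaluates an expansion defined by a coefficient vector and a multi-index, here using unnormalized Legendre polynomials for uniform variables on [-1, 1]. The function name eval_pce and the basis choice are assumptions made for this sketch; this is not Dakota's import format or code path, and Dakota's PCE may use normalized, variable-appropriate orthogonal bases.

    import numpy as np
    from numpy.polynomial.legendre import Legendre

    def eval_pce(coeffs, multi_index, xi):
        """Evaluate y(xi) = sum_k c_k * prod_d P_{k_d}(xi_d).

        coeffs:      (n_terms,) expansion coefficients
        multi_index: (n_terms, n_dims) polynomial degree per term and dimension
        xi:          (n_points, n_dims) standardized inputs in [-1, 1]
        """
        xi = np.atleast_2d(xi)
        y = np.zeros(xi.shape[0])
        for c, mi in zip(coeffs, multi_index):
            term = np.full(xi.shape[0], c)
            for d, deg in enumerate(mi):
                term = term * Legendre.basis(deg)(xi[:, d])
            y += term
        return y

    # Tiny example: y = 1 + 2*xi_1 + 3*xi_1*xi_2, encoded by the multi-index
    coeffs = [1.0, 2.0, 3.0]
    multi_index = [(0, 0), (1, 0), (1, 1)]
    print(eval_pce(coeffs, multi_index, [[0.5, -0.5]]))   # -> [1.25]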
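For background, a common way to compute a two-sided tolerance interval from sample data assumes normality and uses Howe's approximation for the tolerance factor, as in the generic Python sketch below. This illustrates the statistic only; Dakota's implementation follows [JR20], and its exact formulation may differ.

    import numpy as np
    from scipy.stats import norm, chi2

    def normal_tolerance_interval(samples, coverage=0.95, confidence=0.95):
        """Two-sided normal tolerance interval via Howe's approximation.

        Returns bounds intended to contain at least `coverage` of the population
        with probability `confidence`, assuming normally distributed data.
        """
        x = np.asarray(samples, float)
        n = x.size
        z = norm.ppf(0.5 * (1.0 + coverage))              # coverage quantile
        chi2_q = chi2.ppf(1.0 - confidence, df=n - 1)     # lower chi-square quantile
        k = np.sqrt((n - 1) * (1.0 + 1.0 / n) * z**2 / chi2_q)
        mean, std = x.mean(), x.std(ddof=1)
        return mean - k * std, mean + k * std

    rng = np.random.default_rng(1)
    print(normal_tolerance_interval(rng.normal(10.0, 2.0, size=50)))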

MLMF Sampling

  • See Generalized ACV highlight at top.

  • Numerical solutions now utilize multiple optimizer solves by default, including both SQP and NIP solutions from two analytic initial guesses (MFMC and an ensemble of two-model CVMC). The best of these solutions (highest accuracy or lowest cost, depending on the formulation) is carried forward; a generic sketch of this multi-start, multi-solver strategy appears after this list.

  • The default analytic solve option for MFMC has been modified in the multiple-QoI case to be more consistent with the numerical solve option. In particular, the constraint of performing an integrated sample increment across the QoI vector is now better reflected in the averaged performance metrics.

  • Teuchos matrix linear algebra for numerical solves in ACV and Generalized ACV has been refined through activation of factorWithEquilibration() and solveToRefinedSolution() options, improving numerical performance for larger model ensembles.
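
The multi-start, multi-solver strategy referenced above can be sketched generically in Python: solve the same constrained allocation problem from several initial guesses with both an SQP-style and an interior-point-style optimizer, then keep the best successful result. The SciPy solvers, the toy variance proxy, and the budget numbers below are illustrative stand-ins, not Dakota's internal SQP/NIP solvers or its estimator-variance objective.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint

    def variance_proxy(n):
        # Toy stand-in objective: decreases as each model's sample count grows.
        return 1.0 / n[0] + 0.1 / n[1] + 0.01 / n[2]

    budget = LinearConstraint([[1.0, 0.1, 0.01]], lb=-np.inf, ub=100.0)  # cost <= 100
    bounds = [(1.0, None)] * 3

    # Two initial guesses, standing in for analytic starting points.
    guesses = [np.array([50.0, 200.0, 1000.0]), np.array([80.0, 100.0, 500.0])]

    results = []
    for x0 in guesses:
        for method in ("SLSQP", "trust-constr"):   # SQP-style and interior-point-style
            res = minimize(variance_proxy, x0, method=method,
                           bounds=bounds, constraints=[budget])
            if res.success:
                results.append(res)

    best = min(results, key=lambda r: r.fun)        # carry the best solution forward
    print("best allocation:", best.x, "objective:", best.fun)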

Sensitivity Analysis

  • Standardized regression coefficients can now optionally be computed as part of a sampling study. They are computed from a linear regression model fit to the sampling results for each response; this capability requires Dakota to be built with the new surrogates module enabled. Console and HDF5 output (if enabled) include the regression coefficients together with the coefficient of determination (R^2) as an indicator of the goodness of fit of the linear regression model.
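
As a conceptual illustration only (Dakota's implementation uses its surrogates module), standardized regression coefficients and the accompanying R^2 can be computed from sampling results with an ordinary least-squares fit, as in this Python sketch.

    import numpy as np

    def std_regression_coeffs(X, y):
        """Standardized regression coefficients and R^2 from a linear fit.

        SRC_i = b_i * std(x_i) / std(y), where b_i are the ordinary
        least-squares slopes of y on the columns of X.
        """
        X = np.asarray(X, float)
        y = np.asarray(y, float)
        A = np.column_stack([np.ones(len(y)), X])       # intercept + inputs
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ coef
        r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
        src = coef[1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)
        return src, r2

    # Example: the response depends strongly on x1, weakly on x2, plus noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
    print(std_regression_coeffs(X, y))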

Miscellaneous Enhancements and Bugfixes

  • Enh: The text of links to Dakota keywords in the Keyword documentation was updated to be more readable.

  • Enh: Added a find_dependency call for Boost to DakotaConfig.cmake to aid in linking against the Dakota library.

  • Bug fix: Correlation matrices now receive the correct variable labels in studies that include variables from more than one category (e.g., a mixture of design and aleatory uncertain variables).

  • Bug fix: Standard moments are now written correctly to HDF5 for stochastic expansion methods. Previously, central moments were always written, regardless of the user's selection.

  • Bug fix: Moments for stochastic expansions were written to HDF5 with erroneous dimension scale labels for many platforms. This issue has been fixed.

  • Bug fix: PDF datasets are no longer written to HDF5 for zero-variance responses, matching the console output. Previously, empty datasets were written.

Deprecated and Changed

Compatibility

  • There are no changes to TPLs or requirements for this release.

Other Notes and Known Issues