surrogate_based_global
Adaptive Global Surrogate-Based Optimization
Topics
surrogate_based_optimization_methods
Specification
Alias: None
Arguments: None
Child Keywords:
Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Required (Choose One) | Sub-method Selection | method_pointer | Pointer to sub-method to apply to a surrogate or branch-and-bound sub-problem
 | | method_name | Specify sub-method by name
Required | | model_pointer | Identifier for model block to be used by a method
Optional | | replace_points | (Recommended) Replace points in the surrogate training set, instead of appending
Optional | | max_iterations | Number of iterations allowed for optimizers and adaptive UQ methods
Description
The surrogate_based_global
method iteratively performs optimization
on a global surrogate using the same bounds during each iteration.
In one iteration, optimal solutions are found on the surrogate model, and a subset of these are passed to the next iteration.
At the next iteration, these surrogate-optimal variable sets are evaluated with the “truth” model, and added to the set of points over which the next surrogate is constructed.
In this way, the optimization operates on a more accurate surrogate
during each iteration, presumably driving to optimality quickly. In
contrast to surrogate_based_local, this approach has no guarantee of convergence.
Usage Tips
Attention: This adaptive method is not recommended for
“build-once” surrogates trained from (static) imported data or trained
online using a single Dakota design of experiments. Instead, any
Dakota optimization method can be used with a (build-once) global
surrogate by specifying the id_model
of a global surrogate model
with the optimizer’s model_pointer
keyword.
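The following minimal sketch illustrates this build-once alternative; the block identifiers, the choice of soga as the optimizer, the gaussian_process surrogate type, and all numeric settings are illustrative assumptions rather than required values:

```
# Build-once alternative (sketch): an ordinary optimizer pointed at a
# global surrogate via model_pointer / id_model. The surrogate is built
# once from a single design of experiments and is not updated adaptively.
method
  id_method = 'OPT'
  soga                              # any Dakota optimization method could be used here
    seed = 1027
    max_function_evaluations = 2000
  model_pointer = 'SURR_M'          # the optimizer sees only the surrogate

model
  id_model = 'SURR_M'
  surrogate global
    gaussian_process surfpack       # surrogate type is a user choice
    dace_method_pointer = 'DACE_M'  # a single DOE/sampling study provides the build data
# (DACE_M sampling method, truth model, variables, interface, and responses omitted)
```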
Configuring the method:

- The sub-method, specified with either method_pointer or method_name, should typically be an optimizer that returns multiple final solutions, such as moga or soga.
- The model_pointer keyword must identify a surrogate model that includes an underlying truth model, typically via truth_model_pointer or dace_method_pointer. A minimal input sketch of this configuration follows this list.
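In the sketch below, the block identifiers, the choice of moga as the sub-method, the gaussian_process surrogate, and all numeric settings are illustrative assumptions rather than required values:

```
# Adaptive surrogate-based global optimization (sketch).
method
  id_method = 'SBGO'
  surrogate_based_global
    model_pointer = 'SURR_M'        # surrogate model wrapping the truth model
    method_pointer = 'MOGA_M'       # sub-method run on the surrogate each iteration
    max_iterations = 5
    replace_points                  # recommended: replace rather than append training points

method
  id_method = 'MOGA_M'
  moga                              # returns multiple final solutions
    seed = 10983
    population_size = 100
    max_function_evaluations = 3000

model
  id_model = 'SURR_M'
  surrogate global
    gaussian_process surfpack
    dace_method_pointer = 'DACE_M'  # initial build data; the truth model is reached through it

method
  id_method = 'DACE_M'
  sampling
    sample_type lhs
    samples = 30
    seed = 531
  model_pointer = 'TRUTH_M'

model
  id_model = 'TRUTH_M'
  single
# (variables, interface, and responses blocks omitted for brevity)
```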
Workflow:

- One might first try a single minimization method coupled with a surrogate model before using this surrogate-based global method. This is essentially equivalent to setting max_iterations to 1 and allows one to get a sense of which surrogate types are most accurate for the problem.
- Consider starting with a small number of maximum iterations, such as 3–5, to get a sense of how the optimization evolves as the surrogate is updated. If the solution is still changing significantly, a larger number (used in combination with restart; see the command-line sketch after this list) may be needed.
- Surrogates can be built for all primary functions and constraints or for only a subset of them. This allows one to use a “truth” model directly for some of the response functions, perhaps because they are much less expensive to evaluate than the others.
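As a sketch of the restart suggestion above, a study can be continued with a larger iteration budget while reusing previously completed truth evaluations through Dakota's standard restart options (the input and restart file names here are illustrative):

```
# Initial run: truth evaluations are recorded in a restart file
dakota -i sbgo_study.in -write_restart sbgo.rst

# Follow-on run with an increased max_iterations, reusing prior evaluations
dakota -i sbgo_study_more_iters.in -read_restart sbgo.rst -write_restart sbgo_continued.rst
```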
Known Issue: When using discrete variables, significant differences in surrogate behavior have sometimes been observed across computing platforms. The cause has not yet been fully diagnosed and is under investigation; guidance on appropriate construction and use of surrogates with discrete variables is also under development. In the meantime, users should be aware that there is a risk of inaccurate results when using surrogates with discrete variables.
Expected HDF5 Output
If Dakota was built with HDF5 support and run with the
hdf5
keyword, this method
writes the following results to HDF5:
- Best Objective Functions (when objective_functions are specified)
- Calibration (when calibration_terms are specified)
Theory
In surrogate-based optimization (SBO) and surrogate-based nonlinear least squares (SBNLS), minimization occurs using a set of one or more approximations, defined from a surrogate model, that are built and periodically updated using data from a “truth” model. The surrogate model can be a global data fit (e.g., regression or interpolation of data generated from a design of computer experiments), a multipoint approximation, a local Taylor Series expansion, or a model hierarchy approximation (e.g., a low-fidelity simulation model), whereas the truth model involves a high-fidelity simulation model. The goals of surrogate-based methods are to reduce the total number of truth model simulations and, in the case of global data fit surrogates, to smooth noisy data with an easily navigated analytic function.
The surrogate_based_global
method was originally designed for MOGA
(a multi-objective genetic algorithm). Since genetic algorithms often
need thousands or tens of thousands of points to produce optimal or
near-optimal solutions, the use of surrogates can be helpful for
reducing the truth model evaluations. Instead of creating one set of
surrogates for the individual objectives and running the optimization
algorithm on the surrogate once, the idea is to select points along
the (surrogate) Pareto frontier, which can be used to supplement the
existing points.
In this way, one does not need to use many points initially to get a very accurate surrogate. The surrogate becomes more accurate as the iterations progress.