numerical_hessians
Hessians are needed and will be approximated by finite differences
Specification
Alias: None
Arguments: None
Child Keywords:

Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Optional | | fd_step_size | Step size used when computing gradients and Hessians
Optional (Choose One) | Step Scaling | relative | (Default) Scale step size by the parameter value
| | absolute | Do not scale step size
| | bounds | Scale step size by the domain of the parameter
Optional (Choose One) | Finite Difference Type | forward | (Default) Use forward differences
| | central | Use central differences
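To make the Step Scaling options above concrete, here is a minimal sketch of how a per-parameter step h could be derived from fd_step_size under each option. This is an illustration of the table's semantics, not Dakota's source code; the function name, the default step value, and the guard for near-zero parameters are assumptions.

```python
def scaled_step(x_i, fd_step_size=1e-3, scaling="relative",
                lower=None, upper=None):
    """Illustrative step-size scaling for a single parameter x_i (hypothetical helper).

    relative: scale fd_step_size by the parameter value
    absolute: use fd_step_size as-is
    bounds:   scale fd_step_size by the parameter's bounded domain
    """
    if scaling == "relative":
        # Guard against a zero-valued parameter (assumed; Dakota's exact
        # handling of small/zero parameter values may differ).
        return fd_step_size * max(abs(x_i), 1.0)
    if scaling == "absolute":
        return fd_step_size
    if scaling == "bounds":
        return fd_step_size * (upper - lower)
    raise ValueError(f"unknown scaling: {scaling}")
```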
Description
The numerical_hessians specification means that Hessian information
is needed and will be computed with finite differences using either
first-order gradient differencing (for the cases of
analytic_gradients or for the functions identified by
id_analytic_gradients in the case of mixed_gradients) or first- or
second-order function value differencing (all other gradient
specifications). In the former case, the expression

\[ \nabla^2 f(\mathbf{x})_i \cong \frac{\nabla f(\mathbf{x} + h \mathbf{e}_i) - \nabla f(\mathbf{x})}{h} \]

estimates the \(i^{th}\) Hessian column, and in the latter case, the expressions

\[ \nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h_i \mathbf{e}_i + h_j \mathbf{e}_j) - f(\mathbf{x} + h_i \mathbf{e}_i) - f(\mathbf{x} + h_j \mathbf{e}_j) + f(\mathbf{x})}{h_i h_j} \]

and

\[ \nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h \mathbf{e}_i + h \mathbf{e}_j) - f(\mathbf{x} + h \mathbf{e}_i - h \mathbf{e}_j) - f(\mathbf{x} - h \mathbf{e}_i + h \mathbf{e}_j) + f(\mathbf{x} - h \mathbf{e}_i - h \mathbf{e}_j)}{4 h^2} \]

provide first- and second-order estimates of the \(ij^{th}\) Hessian term.
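As a concrete illustration of these three estimates, the following NumPy sketch implements the formulas above and checks them on a quadratic, whose exact Hessian is known. It is a minimal demonstration of the mathematics, not Dakota's implementation; the test function, fixed uniform step, and symmetrization are assumptions made for the demo.

```python
import numpy as np

def hessian_from_gradients(grad, x, h=1e-5):
    """First-order gradient differencing: Hessian column i from forward-differenced gradients."""
    n = len(x)
    g0 = grad(x)
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        H[:, i] = (grad(x + h * e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize the columns (a common post-step, assumed here)

def hessian_first_order(f, x, h=1e-4):
    """First-order function value differencing (forward steps only)."""
    n = len(x)
    H = np.empty((n, n))
    I = np.eye(n)
    f0 = f(x)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i])
                       - f(x + h*I[j]) + f0) / h**2
    return H

def hessian_second_order(f, x, h=1e-4):
    """Second-order function value differencing (central steps)."""
    n = len(x)
    H = np.empty((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4 * h**2)
    return H

# Quick check on f(x) = 0.5 x^T A x, whose exact Hessian is A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x0 = np.array([0.7, -1.2])
print(hessian_from_gradients(grad, x0))  # ~A
print(hessian_first_order(f, x0))        # ~A
print(hessian_second_order(f, x0))       # ~A
```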
Prior to Dakota 5.0, Dakota always used second-order estimates.
In Dakota 5.0 and newer, the default is to use first-order estimates
(which honor bounds on the variables and
require only about a quarter as many function evaluations
as do the second-order estimates), but specifying central
after numerical_hessians
causes Dakota to use the old second-order
estimates, which do not honor bounds. In optimization algorithms that
use Hessians, there is little reason to use second-order differences in
computing Hessian approximations.
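For a rough sense of the evaluation-count comparison, the sketch below counts the distinct sample points each function-value scheme needs for an n-dimensional Hessian when points shared between terms are reused. This is a back-of-the-envelope model under assumed uniform unit steps and maximal reuse, not a statement of Dakota's bookkeeping; the helper names are invented.

```python
from itertools import combinations_with_replacement

def shift(n, *steps):
    """An evaluation point encoded as an n-tuple of unit-step offsets from x."""
    x = [0] * n
    for i, s in steps:
        x[i] += s
    return tuple(x)

def first_order_count(n):
    pts = {shift(n)}                       # f(x)
    for i in range(n):
        pts.add(shift(n, (i, 1)))          # f(x + h e_i)
    for i, j in combinations_with_replacement(range(n), 2):
        pts.add(shift(n, (i, 1), (j, 1)))  # f(x + h e_i + h e_j)
    return len(pts)

def second_order_count(n):
    pts = set()
    for i, j in combinations_with_replacement(range(n), 2):
        for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            pts.add(shift(n, (i, si), (j, sj)))
    return len(pts)

for n in (5, 10, 20):
    fo, so = first_order_count(n), second_order_count(n)
    print(f"n={n}: first-order {fo}, second-order {so}, ratio {so/fo:.2f}")
```

Under this counting model the first-order scheme needs roughly n^2/2 evaluations and the second-order scheme roughly 2n^2, with the ratio approaching the "about a quarter" figure quoted above as n grows.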