numerical_hessians
Hessians are needed and will be approximated by finite differences
Specification
Alias: None
Arguments: None
Child Keywords:

- Optional: Step size used when computing gradients and Hessians

- Optional (Choose One) - Step Scaling:
  - (Default) Scale step size by the parameter value
  - Do not scale step size
  - Scale step size by the domain of the parameter

- Optional (Choose One) - Finite Difference Type:
  - (Default) Use forward differences
  - Use central differences
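As a usage sketch, the non-default central differencing could be requested in a responses block along the following lines (the surrounding responses keywords shown here are illustrative assumptions, not part of this page):

```
responses
  objective_functions = 1
  numerical_gradients
  numerical_hessians
    central
```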
Description
The numerical_hessians specification means that Hessian information
is needed and will be computed with finite differences, using either
first-order gradient differencing (for the cases of
analytic_gradients
or for the functions identified by
id_analytic_gradients
in the case of mixed_gradients
) or first- or second-order function value differencing (all other
gradient specifications). In the former case, a forward difference of
the gradient estimates the \(i^{th}\) Hessian column; in the latter
case, forward or central differences of function values provide
first- or second-order estimates, respectively, of the \(ij^{th}\)
Hessian term.
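Written out, these are the standard finite-difference estimates; a sketch, with the notation (step sizes \(h\), \(h_i\) and unit vectors \(\mathbf{e}_i\)) assumed rather than taken from this page:

```latex
% Gradient differencing: first-order estimate of the i-th Hessian column
\nabla^2 f(\mathbf{x})_i \cong
  \frac{\nabla f(\mathbf{x} + h\,\mathbf{e}_i) - \nabla f(\mathbf{x})}{h}

% Function-value differencing, first order (forward steps only)
\nabla^2 f(\mathbf{x})_{ij} \cong
  \frac{f(\mathbf{x} + h_i\mathbf{e}_i + h_j\mathbf{e}_j)
      - f(\mathbf{x} + h_i\mathbf{e}_i)
      - f(\mathbf{x} + h_j\mathbf{e}_j)
      + f(\mathbf{x})}{h_i h_j}

% Function-value differencing, second order (central steps)
\nabla^2 f(\mathbf{x})_{ij} \cong
  \frac{f(\mathbf{x} + h\mathbf{e}_i + h\mathbf{e}_j)
      - f(\mathbf{x} + h\mathbf{e}_i - h\mathbf{e}_j)
      - f(\mathbf{x} - h\mathbf{e}_i + h\mathbf{e}_j)
      + f(\mathbf{x} - h\mathbf{e}_i - h\mathbf{e}_j)}{4h^2}
```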
Prior to Dakota 5.0, Dakota always used second-order estimates.
In Dakota 5.0 and newer, the default is the first-order estimates,
which honor bounds on the variables and require only about a quarter
as many function evaluations as the second-order estimates. Specifying
central after numerical_hessians causes Dakota to use the older
second-order estimates, which do not honor bounds. For optimization
algorithms that use Hessians, there is little reason to use
second-order differences when computing Hessian approximations.
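To make the trade-off concrete, here is a minimal NumPy sketch (illustrative code, not Dakota's implementation) of the two function-value differencing schemes: the first-order scheme steps only in one direction from the current point, while the second-order scheme steps both ways and needs roughly four times as many evaluations.

```python
import numpy as np

def forward_fd_hessian(f, x, h=1e-4):
    """First-order Hessian estimate from forward function-value differences.
    All evaluation points lie at x plus non-negative steps, so variable
    bounds can be honored by stepping into the feasible region."""
    n = len(x)
    e = np.eye(n)
    f0 = f(x)
    fi = np.array([f(x + h * e[i]) for i in range(n)])  # n extra evaluations
    H = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):  # ~n^2/2 mixed-step evaluations
            fij = f(x + h * e[i] + h * e[j])
            H[i, j] = H[j, i] = (fij - fi[i] - fi[j] + f0) / h**2
    return H

def central_fd_hessian(f, x, h=1e-4):
    """Second-order estimate from central differences: roughly 4x the
    evaluations, and the -h steps may cross bounds near the boundary."""
    n = len(x)
    e = np.eye(n)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):  # 4 evaluations per Hessian term
            H[i, j] = H[j, i] = (
                f(x + h * e[i] + h * e[j]) - f(x + h * e[i] - h * e[j])
                - f(x - h * e[i] + h * e[j]) + f(x - h * e[i] - h * e[j])
            ) / (4 * h**2)
    return H

# Check against the analytic Hessian of f(x, y) = x^2*y + y^3 at (1, 2),
# which is [[2y, 2x], [2x, 6y]] = [[4, 2], [2, 12]]:
f = lambda v: v[0]**2 * v[1] + v[1]**3
H_exact = np.array([[4.0, 2.0], [2.0, 12.0]])
print(np.allclose(forward_fd_hessian(f, np.array([1.0, 2.0])), H_exact, atol=1e-2))
print(np.allclose(central_fd_hessian(f, np.array([1.0, 2.0])), H_exact, atol=1e-4))
```

The looser tolerance on the forward check reflects its O(h) truncation error versus O(h^2) for the central scheme, which is the accuracy-versus-cost trade-off discussed above.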