numerical_hessians

Hessians are needed and will be approximated by finite differences.

Specification

  • Alias: None

  • Arguments: None

Child Keywords:

  Required/Optional      Description of Group    Dakota Keyword  Dakota Keyword Description
  Optional                                       fd_step_size    Step size used when computing gradients and Hessians
  Optional (Choose One)  Step Scaling            relative        (Default) Scale step size by the parameter value
                                                 absolute        Do not scale step size
                                                 bounds          Scale step size by the domain of the parameter
  Optional (Choose One)  Finite Difference Type  forward         (Default) Use forward differences
                                                 central         Use central differences
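The child keywords above appear inside a responses block of a Dakota input file. A minimal illustrative sketch (the response count and step-size value are placeholders, not part of this specification):

```
responses
  objective_functions = 1
  numerical_gradients
  numerical_hessians
    fd_step_size = 1.e-5
    central
```

Here central selects the second-order function value differencing described below in place of the default first-order estimates.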

Description

The numerical_hessians specification means that Hessian information is needed and will be computed with finite differences, using either first-order gradient differencing (when analytic_gradients is specified, or for the functions identified by id_analytic_gradients in the case of mixed_gradients) or first- or second-order function value differencing (for all other gradient specifications). In the former case, the expression

$$ \nabla^2 f(\mathbf{x})_i \cong \frac{\nabla f(\mathbf{x} + h\mathbf{e}_i) - \nabla f(\mathbf{x})}{h} $$

estimates the $i$th Hessian column, and in the latter case, the expressions

$$ \nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h_i\mathbf{e}_i + h_j\mathbf{e}_j) - f(\mathbf{x} + h_i\mathbf{e}_i) - f(\mathbf{x} + h_j\mathbf{e}_j) + f(\mathbf{x})}{h_i h_j} $$

and

$$ \nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h\mathbf{e}_i + h\mathbf{e}_j) - f(\mathbf{x} + h\mathbf{e}_i - h\mathbf{e}_j) - f(\mathbf{x} - h\mathbf{e}_i + h\mathbf{e}_j) + f(\mathbf{x} - h\mathbf{e}_i - h\mathbf{e}_j)}{4h^2} $$

provide first- and second-order estimates of the $(i,j)$th Hessian term, respectively. Prior to Dakota 5.0, Dakota always used the second-order estimates. In Dakota 5.0 and newer, the default is the first-order estimates, which honor bounds on the variables and require only about a quarter as many function evaluations as the second-order estimates; specifying central after numerical_hessians causes Dakota to use the old second-order estimates, which do not honor bounds. In optimization algorithms that use Hessians, there is little reason to use second-order differences in computing Hessian approximations.
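The two estimates above can be checked numerically. The following is a standalone sketch (hypothetical helper names, not Dakota code) using $f(x, y) = x^3 y + y^2$, whose analytic Hessian at $(1, 2)$ is $[[12, 3], [3, 2]]$: the first routine builds the Hessian one column at a time from gradient differences, the second builds each term from central function value differences.

```python
def f(x):
    # Test function f(x, y) = x^3*y + y^2
    return x[0] ** 3 * x[1] + x[1] ** 2

def grad(x):
    # Analytic gradient, standing in for analytic_gradients
    return [3 * x[0] ** 2 * x[1], x[0] ** 3 + 2 * x[1]]

def hessian_from_gradients(x, h=1e-6):
    """First-order gradient differencing: one extra gradient per column."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    g0 = grad(x)
    for i in range(n):
        xp = list(x)
        xp[i] += h
        gp = grad(xp)
        for j in range(n):
            H[j][i] = (gp[j] - g0[j]) / h
    return H

def hessian_from_values_central(x, h=1e-4):
    """Second-order central function value differencing: four f evaluations per term."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

x0 = [1.0, 2.0]
Hg = hessian_from_gradients(x0)        # approximately [[12, 3], [3, 2]]
Hc = hessian_from_values_central(x0)   # approximately [[12, 3], [3, 2]]
```

Note that the gradient-differencing routine evaluates only points of the form $\mathbf{x} + h\mathbf{e}_i$, which is why the first-order estimates can respect variable bounds, while the central routine must step in both directions.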