.. _responses-mixed_gradients-dakota-ignore_bounds:

"""""""""""""
ignore_bounds
"""""""""""""


Do not respect bounds when computing gradients or Hessians


.. toctree::
   :hidden:
   :maxdepth: 1



**Specification**

- *Alias:* None

- *Arguments:* None

- *Default:* bounds respected


**Description**


When Dakota computes gradients or Hessians by finite differences and the
variables in question have bounds, by default it chooses finite-difference
steps that keep the variables within their specified bounds. Older versions
of Dakota generally ignored bounds when computing finite differences.
To restore the older behavior, add the keyword ``ignore_bounds``
to the ``responses`` specification when ``method_source dakota``
(or just ``dakota``) is also specified.

In forward difference or backward difference computations, honoring
bounds is straightforward.

To honor bounds when approximating :math:`\partial f / \partial x_i` , i.e., component :math:`i` 
of the gradient of :math:`f` , by central differences, Dakota chooses two steps
:math:`h_1`  and :math:`h_2`  with :math:`h_1 \ne h_2` , such that :math:`x + h_1 e_i` 
and :math:`x + h_2 e_i`  both satisfy the bounds, and then computes

.. math:: 

   \frac{\partial f}{\partial x_i} \cong
   \frac{h_2^2(f_1 - f_0) - h_1^2(f_2 - f_0)}{h_1 h_2 (h_2 - h_1)} ,

with :math:`f_0 = f(x)` , :math:`f_1 = f(x + h_1 e_i)` , and
:math:`f_2 = f(x + h_2 e_i)` .
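
The two-step formula above can be sketched in Python. This is an
illustrative implementation only, not Dakota's actual code: the function
name and the heuristic for picking :math:`h_1` and :math:`h_2` near a bound
(taking two steps on the feasible side) are assumptions made for the sketch,
which also assumes each bound interval is wider than ``2 * h``.

.. code-block:: python

   def bounded_fd_gradient_component(f, x, i, lower, upper, h=1.0e-5):
       """Approximate df/dx_i while keeping x within [lower, upper].

       Illustrative sketch of a bound-respecting difference scheme;
       not Dakota's implementation.
       """
       # Prefer the symmetric pair (+h, -h); clip a step at a bound.
       h1 = h if x[i] + h <= upper[i] else upper[i] - x[i]
       h2 = -h if x[i] - h >= lower[i] else lower[i] - x[i]
       # If x[i] sits exactly on a bound, one step collapses to zero;
       # take two distinct steps on the feasible side instead.
       if h1 == 0.0:            # at the upper bound: step down twice
           h1, h2 = -h, -2.0 * h
       elif h2 == 0.0:          # at the lower bound: step up twice
           h1, h2 = h, 2.0 * h

       def f_at(step):
           y = list(x)
           y[i] += step
           return f(y)

       f0, f1, f2 = f(list(x)), f_at(h1), f_at(h2)
       # Two-point formula from the text, valid for any distinct
       # nonzero steps h1 != h2 (exact for quadratics):
       return (h2**2 * (f1 - f0) - h1**2 * (f2 - f0)) / (h1 * h2 * (h2 - h1))

For :math:`h_2 = -h_1` this reduces to the ordinary central difference;
near a bound both sample points stay feasible while the formula retains
second-order accuracy.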