calibration_terms
Response type suitable for calibration or least squares
Specification
Alias: least_squares_terms, num_least_squares_terms
Arguments: INTEGER
Child Keywords:

Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Optional | | scalar_calibration_terms | Number of scalar calibration terms
Optional | | field_calibration_terms | Number of field calibration terms
Optional | | primary_scales | Characteristic values to scale each calibration term
Optional | | weights | Specify weights for each objective function
Optional (Choose One) | Calibration Data | calibration_data | Supply field or mixed field/scalar calibration data
| | calibration_data_file | Supply scalar calibration data only
Optional | | simulation_variance | Variance applied to simulation responses
Optional | | nonlinear_inequality_constraints | Group to specify nonlinear inequality constraints
Optional | | nonlinear_equality_constraints | Group to specify nonlinear equality constraints
Description
Responses for a calibration study are specified using
calibration_terms
and optional keywords for weighting/scaling, data,
and constraints. In general, when calibrating, Dakota automatically
tunes parameters \(\theta\) to minimize discrepancies or residuals
between the model and the data:

.. math:: R_{i} = y^{Model}_i(\theta) - y^{Data}_{i}

Note that the problem specification affects what must be returned to
Dakota in the results_file:

- If calibration data is not specified, then each of the calibration terms returned to Dakota through the interface is a residual \(R_{i}\) to be driven toward zero.
- If calibration data is specified, then each of the calibration terms returned to Dakota must be a response \(y^{Model}_i(\theta)\), which Dakota will difference with the data in the specified data file, as sketched below.
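For example, a minimal responses block for the data-file case might look like the following sketch (the file name, descriptors, and gradient settings are illustrative, not part of this keyword's specification):

.. code-block::

   responses
     calibration_terms = 3
       calibration_data_file = 'experiment.dat'
         freeform
     descriptors = 'r1' 'r2' 'r3'
     numerical_gradients
     no_hessians

With calibration_data_file present, the three returned calibration terms are model predictions \(y^{Model}_i(\theta)\); removing the data-file keywords would instead make them residuals to be driven toward zero.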
Constraints
(See the general problem formulation at
objective_functions
.) The keywords
nonlinear_inequality_constraints
and
nonlinear_equality_constraints
specify the
number of nonlinear inequality constraints \(g\) and nonlinear
equality constraints \(h\), respectively. When interfacing to
external applications, the responses must be returned to Dakota in
this order in the results_file:

1. calibration terms
2. nonlinear inequality constraints
3. nonlinear equality constraints
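As a concrete sketch, a results_file for two calibration terms, one nonlinear inequality constraint, and one nonlinear equality constraint would list the values in exactly that order; the numbers and descriptors below are made up for illustration:

.. code-block::

    1.4521e-01 least_sq_term_1
   -3.0774e-02 least_sq_term_2
    2.5000e-01 nln_ineq_con_1
    1.0000e-03 nln_eq_con_1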
An optimization problem’s linear constraints are provided to the
solver at startup only and do not need to be included in the data
returned on every function evaluation. Linear constraints are
therefore specified in the variables
block through the
linear_inequality_constraint_matrix
\(A_i\) and
linear_equality_constraint_matrix
\(A_e\) .
Lower and upper bounds on the design variables \(x\) are also
specified in the variables
block.
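For instance, a variables block supplying bounds together with a single linear inequality constraint \(A_i x \leq u\) might be sketched as follows (all values are illustrative):

.. code-block::

   variables
     continuous_design = 2
       lower_bounds  -2.0 -2.0
       upper_bounds   2.0  2.0
       descriptors   'x1' 'x2'
     linear_inequality_constraint_matrix = 1.0 1.0
     linear_inequality_upper_bounds = 1.0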
Problem Transformations
Weighting or scaling calibration terms is often appropriate to account for measurement error or to condition the problem for easier solution. Weighting or scaling transformations are applied in the following order:
1. When present, observation error variance \(\sigma_i\) or full covariance \(\Sigma\), optionally specified through experiment_variance_type, is applied to residuals first:

.. math:: R^{(1)}_i = \frac{R_{i}}{\sigma_{i}} = \frac{y^{Model}_i(\theta) - y^{Data}_{i}}{\sigma_{i}} \textrm{, or}

.. math:: R^{(1)} = \Sigma^{-1/2} R = \Sigma^{-1/2} \left( y^{Model}(\theta) - y^{Data} \right),

resulting in the typical variance-weighted least squares formulation

.. math:: \textrm{min}_\theta \; R(\theta)^T \Sigma^{-1} R(\theta)
2. Any active scaling transformations are applied next, e.g., for characteristic value scaling:
.. math:: R^{(2)}_i = \frac{R^{(1)}_i}{s_i}
3. Finally, the optional weights are applied in a way that preserves backward compatibility:

.. math:: R^{(3)}_i = \sqrt{w_i} \, R^{(2)}_i
so the ultimate least squares formulation, e.g., in a scaled and weighted case, would be

.. math:: f = \sum_{i=1}^{n} w_i \left( \frac{y^{Model}_i - y^{Data}_i}{s_i} \right)^2
Note that observation error variance and weights are mutually
exclusive in a calibration problem.
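As an illustrative sketch, characteristic-value scaling and weights for two calibration terms could be specified together as follows (values are made up; per the note above, weights could not be combined with an experiment variance specification):

.. code-block::

   responses
     calibration_terms = 2
       primary_scales = 10.0 0.1
       weights = 1.0 4.0
     numerical_gradients
     no_hessians

With this input, each residual is first divided by its scale \(s_i\) and then multiplied by \(\sqrt{w_i}\), matching the transformation order above.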
Theory
Dakota calibration terms are typically used to solve problems of
parameter estimation, system identification, and model
calibration/inversion. Local least squares calibration problems are
most efficiently solved using special-purpose least squares solvers
such as Gauss-Newton or Levenberg-Marquardt; however, they may also be
solved using any general-purpose optimization algorithm in Dakota.
While Dakota can solve these problems with either least squares or
optimization algorithms, the response data sets to be returned from
the simulator are different when using
objective_functions
versus calibration_terms
.
Least squares calibration involves a set of residual
functions, whereas optimization involves a single objective function
(sum of the squares of the residuals), i.e.,
.. math:: f = \sum_{i=1}^{n} R_i^2 = \sum_{i=1}^{n} \left( y^{Model}_i(\theta) - y^{Data}_{i} \right)^2
where f is the objective function and the set of \(R_i\)
are the residual functions, most commonly defined as the difference between a model response and data. Therefore, function values and derivative
data in the least squares case involve the values and derivatives of
the residual functions, whereas the optimization case involves values
and derivatives of the sum of squares objective function. This means that
in the least squares calibration case, the user must return each of
the \(n\) residuals as a separate calibration term. Switching
between the two approaches sometimes requires different simulation
interfaces capable of returning the different granularity of response
data required, although Dakota supports automatic recasting of
residuals into a sum of squares for presentation to an optimization
method. Typically, the user must compute the difference between the
model results and the observations when computing the residuals.
However, the user has the option of specifying the observational data
(e.g. from physical experiments or other sources) in a file.
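As an illustration of the file-based option, a freeform calibration data file for three scalar terms and a single experiment could contain just the observed values, one column per term (the numbers here are made up):

.. code-block::

   3.7214e+00  1.0913e-01  2.4500e+02

Dakota then differences each returned model response with the corresponding column to form the residuals.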