interface
Specifies how function evaluations will be performed in order to map the variables into the responses.
Topics
block
Specification
Alias: None
Arguments: None
Child Keywords:
Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description
---|---|---|---
Optional | | id_interface | Name the interface block; helpful when there are multiple
Optional | | analysis_drivers | Define how Dakota should run a function evaluation
Optional | | algebraic_mappings | Use AMPL to define algebraic input-output mappings
Optional | | failure_capture | Determine how Dakota responds to analysis driver failure
Optional | | deactivate | Deactivate Dakota interface features for simplicity or efficiency
Optional | | evaluation_servers | Specify the number of evaluation servers when Dakota is run in parallel
Optional | | evaluation_scheduling | Specify the scheduling of concurrent evaluations when Dakota is run in parallel
Optional | | processors_per_evaluation | Specify the number of processors per evaluation server when Dakota is run in parallel
Optional | | analysis_servers | Specify the number of analysis servers when Dakota is run in parallel
Optional | | analysis_scheduling | Specify the scheduling of concurrent analyses when Dakota is run in parallel
Description
The interface section in a Dakota input file specifies how function evaluations will be performed in order to map the variables into the responses. The term “interface” refers to the bridge between Dakota and the underlying simulation code.
In this context, a “function evaluation” is the series of operations that takes the variables and computes the responses. It can comprise one or many codes, scripts, and pieces of glue code, which are generically referred to as “analysis drivers” (plus optional input/output filters). The mapping actions of interface-analysis_drivers may be combined with explicit interface-algebraic_mappings.
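For example, a minimal sketch of an interface block that drives a black-box simulation through a fork interface; the driver script name sim_driver.sh and the file names are hypothetical placeholders, not Dakota-provided defaults:

```
interface
  # user-supplied driver script (hypothetical name)
  analysis_drivers = 'sim_driver.sh'
    fork
      # Dakota writes variables to params.in and reads responses from results.out
      parameters_file = 'params.in'
      results_file = 'results.out'
```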
Parallelism Options
The interface-asynchronous keyword enables concurrent local function evaluations or analyses via operating system process management. Its child keywords allow tailoring the evaluation and analysis concurrency.
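As a sketch, local concurrency might be requested as follows (the driver name is again a hypothetical placeholder):

```
interface
  analysis_drivers = 'sim_driver.sh'
    fork
  asynchronous
    # launch up to 4 function evaluations at a time via local process management
    evaluation_concurrency = 4
```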
The evaluation servers, scheduling mode (master, peer static, or peer dynamic), and processors keywords allow a user to override Dakota’s default evaluation configuration when running in parallel (MPI) mode.
The analysis servers and scheduling mode (master, peer static, or peer dynamic) keywords allow a user to override Dakota’s default analysis configuration when running in parallel (MPI) mode.
Note: see interface-analysis_drivers-direct for the specific processors_per_analysis specification supported for direct interfaces.
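For instance, a sketch of overriding the default parallel evaluation configuration; the server and processor counts are illustrative, and the driver name is hypothetical:

```
interface
  analysis_drivers = 'sim_driver.sh'
    fork
  # 4 evaluation servers, each with 8 processors,
  # scheduled dynamically among peers rather than by a dedicated master
  evaluation_servers = 4
  processors_per_evaluation = 8
  evaluation_scheduling peer dynamic
```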
The ParallelLibrary class and the Parallel Computing section provide additional details on parallel configurations.
Theory
Function evaluations are performed using either interfaces to simulation codes, algebraic mappings, or a combination of the two.
When employing mappings with simulation codes, the interface invokes the simulation using either forks, direct function invocations, or computational grid invocations.
In the fork case, Dakota treats the simulation as a black box, and communication between Dakota and the simulation occurs through parameter and results files. This is the most common case.
In the direct function case, the simulation is internal to Dakota and communication occurs through the function parameter list. The direct case can involve linked simulation codes or test functions which are compiled into the Dakota executable. The test functions allow for rapid testing of algorithms without process creation overhead or engineering simulation expense.
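For example, the built-in text_book test function can be exercised through a direct interface:

```
interface
  # 'text_book' is a test function compiled into the Dakota executable;
  # no external process is created
  analysis_drivers = 'text_book'
    direct
```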
The grid case, now deprecated, was an experiment in interfacing Dakota with distributed computing engines.
When employing algebraic mappings, the AMPL solver library [Gay97] is used to evaluate a directed acyclic graph (DAG) specification from a separate stub.nl file. Separate stub.col and stub.row files are also required to declare the string identifiers of the subset of inputs and outputs, respectively, that will be used in the algebraic mappings.
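A sketch of such a specification, using the stub file naming above (stub.col and stub.row are expected alongside stub.nl):

```
interface
  # evaluate the DAG defined in the AMPL-generated stub.nl;
  # stub.col and stub.row name the mapped inputs and outputs
  algebraic_mappings = 'stub.nl'
```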