localDispatchAndCollect

Description

The localDispatchAndCollect node asynchronously kicks off a process separate from Dakota. It is designed to ingest a Dakota parameters file and produce results that will be written to a Dakota results file. Because it takes a single set of Dakota parameters and produces a single set of Dakota results, this node is intended to be run as part of a Dakota analysis driver. That is, a localDispatchAndCollect node is not designed to iterate over a parameter space; rather, it is itself driven by Dakota.

The localDispatchAndCollect node’s name comes from its two-step strategy. It is intended to be run asynchronously with Dakota (in what is sometimes called “offline mode”). After this node spawns a new process (i.e. the dispatch step), the node exits early, returning fail values to Dakota. Although they are “fail” values, this immediate return simply indicates that the spawned process is still running elsewhere. Then, at a later date, the analyst is expected to run Dakota at least one more time. When Dakota is executed again, the localDispatchAndCollect node picks up the data generated by each previously spawned process and returns it to Dakota (i.e. the collect step), rather than dispatching fresh processes.

In short, it is not necessary to leave Dakota up and running while you wait for all of your evaluation processes to complete.
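Conceptually, the node’s behavior on each Dakota run resembles the following minimal Python sketch. The file names, the single response label, and the fail-value handling are hypothetical illustrations; the real node manages these details internally:

    import os
    import subprocess

    FAIL_VALUE = "NaN"  # must match the node's failValue property

    def dispatch_and_collect(params_file, results_file, response_file, driver_cmd):
        if os.path.exists(response_file):
            # Collect step: a previous Dakota run already dispatched this
            # evaluation, so pick up whatever the spawned process wrote.
            with open(response_file) as src, open(results_file, "w") as dst:
                dst.write(src.read())
        else:
            # Dispatch step: spawn the inner driver as a separate process and
            # return fail values immediately so Dakota does not block.
            subprocess.Popen(driver_cmd.split() + [params_file, response_file])
            with open(results_file, "w") as dst:
                dst.write(FAIL_VALUE + " response_fn_1\n")  # one line per response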

Finally, note that this node’s name starts with “local.” This differentiates its behavior from a similar node, the dispatchAndCollect node, which is designed to submit jobs to remote machines rather than spawn local processes.

Properties

  • driver: The command text needed to start the inner driver as a process separate from Dakota. This is intended to be a command-line statement, for example, the command to launch a Python script (see the inner driver sketch after this list). If you intend to launch another Next-Gen Workflow IWF file as your inner driver, it is recommended that it first pass through a dakotaWorkflowDriver node and then be passed in through the “driver” input port instead.

  • responseFileName: The expected name of the file that response names and values will be written to. As a reminder, this responses file will be filled with “fail values” the first time this node dispatches a process for the evaluation; real responses will be collected from this file once the spawned process has completed successfully.

  • failValue: The value used by the localDispatchAndCollect node to recognize that the evaluation failed for whatever reason. “Fail” is used somewhat broadly here, as any unfinished process is also considered a failure. Typically, “NaN” is used to designate a fail value, but you may change it if “NaN” already has a reserved meaning for your driver. However, you must select some reserved value to indicate failure.

  • expectedEvals: A comma-separated string that allows you to control which evaluations are processed. This is particularly useful if you would like to “throttle” Dakota on subsequent runs and force it to re-evaluate only certain evaluations. For example, a value of “4,10,11” would make Dakota re-evaluate evaluations 4, 10, and 11, while all other previously completed evaluations would preserve their existing values.

  • rerunFailedEvaluations: Set this to true to force an evaluation to re-run, even if data already exists in the evaluation directory, but only if the previous evaluation failed.
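To make the driver, responseFileName, and failValue properties concrete, here is a minimal sketch of an inner driver script that reads a Dakota parameters file and writes the responses file named by responseFileName. The file-passing convention shown here (paths supplied as command-line arguments) and the variable and response labels are assumptions for illustration:

    import sys

    def main(params_path, responses_path):
        # Read the standard-format Dakota parameters file: the first line is
        # "<n> variables", followed by one "<value> <label>" pair per line.
        variables = {}
        with open(params_path) as f:
            num_vars = int(f.readline().split()[0])
            for _ in range(num_vars):
                value, label = f.readline().split()
                variables[label] = float(value)

        # Stand-in computation; a real driver would run a simulation here.
        result = sum(v * v for v in variables.values())

        # Overwrite the dispatched fail values with real responses so that
        # the next Dakota run can collect them.
        with open(responses_path, "w") as f:
            f.write(f"{result:.15e} response_fn_1\n")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])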

Input Ports

  • paramsFile: The Dakota parameters file that provides the input to the inner driver, which will be spawned as a separate process (a sample parameters file is shown after this list).

  • driver: The input port version of the “driver” property. This is intended to be a command-line statement passed in as a string, for example, the command needed to launch a Python script. However, if you intend to launch another Next-Gen Workflow IWF file as your inner driver, it is recommended that it first pass through a dakotaWorkflowDriver node before being passed into this input port.
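For reference, a standard-format (non-aprepro) Dakota parameters file for two variables and one response looks roughly like the following; the variable and response labels are illustrative:

    2 variables
    1.500000000000000e+00 x1
    2.500000000000000e+00 x2
    1 functions
    1 ASV_1:response_fn_1
    2 derivative_variables
    1 DVV_1:x1
    2 DVV_2:x2
    0 analysis_components
    1 eval_id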

Output Ports

  • responsesMap: A map of response labels to response values. This map can be passed to a dakotaResultsFile node for further processing, as sketched below.
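The Dakota results file ultimately produced from this map contains one “<value> <label>” pair per line. A minimal Python sketch of that final step, with hypothetical labels and file name (the dakotaResultsFile node performs this inside the workflow):

    # Hypothetical responses map as it might arrive on the responsesMap port.
    responses_map = {"response_fn_1": 42.0, "response_fn_2": 3.14}

    # Write the map in Dakota results file format: "<value> <label>" per line.
    with open("results.out", "w") as f:
        for label, value in responses_map.items():
            f.write(f"{value:.15e} {label}\n")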