.. _method-pareto_set:

""""""""""
pareto_set
""""""""""

Pareto set optimization

.. toctree::
   :hidden:
   :maxdepth: 1

   method-pareto_set-method_name
   method-pareto_set-method_pointer
   method-pareto_set-random_weight_sets
   method-pareto_set-weight_sets
   method-pareto_set-iterator_servers
   method-pareto_set-iterator_scheduling
   method-pareto_set-processors_per_iterator


**Specification**

- *Alias:* None
- *Arguments:* None

**Child Keywords:**

+-------------------------+--------------------+-----------------------------+---------------------------------------------+
| Required/Optional       | Description of     | Dakota Keyword              | Dakota Keyword Description                  |
|                         | Group              |                             |                                             |
+=========================+====================+=============================+=============================================+
| Required (Choose One)   | Sub-method         | `method_name`__             | Specify sub-method by name                  |
|                         | Selection          +-----------------------------+---------------------------------------------+
|                         |                    | `method_pointer`__          | Pointer to optimization or least-squares    |
|                         |                    |                             | sub-method                                  |
+-------------------------+--------------------+-----------------------------+---------------------------------------------+
| Optional                                     | `random_weight_sets`__      | Number of random weighting sets             |
+----------------------------------------------+-----------------------------+---------------------------------------------+
| Optional                                     | `weight_sets`__             | List of user-specified weighting sets       |
+----------------------------------------------+-----------------------------+---------------------------------------------+
| Optional                                     | `iterator_servers`__        | Specify the number of iterator servers when |
|                                              |                             | Dakota is run in parallel                   |
+----------------------------------------------+-----------------------------+---------------------------------------------+
| Optional                                     | `iterator_scheduling`__     | Specify the scheduling of concurrent        |
|                                              |                             | iterators when Dakota is run in parallel    |
+----------------------------------------------+-----------------------------+---------------------------------------------+
| Optional                                     | `processors_per_iterator`__ | Specify the number of processors per        |
|                                              |                             | iterator server when Dakota is run in       |
|                                              |                             | parallel                                    |
+----------------------------------------------+-----------------------------+---------------------------------------------+

.. __: method-pareto_set-method_name.html
__ method-pareto_set-method_pointer.html
__ method-pareto_set-random_weight_sets.html
__ method-pareto_set-weight_sets.html
__ method-pareto_set-iterator_servers.html
__ method-pareto_set-iterator_scheduling.html
__ method-pareto_set-processors_per_iterator.html

**Description**

In the Pareto set minimization method (``pareto_set``), a series of optimization or
least squares calibration runs is performed for different weightings applied to the
multiple objective functions. The resulting set of optimal solutions defines a
"Pareto set," which is useful for investigating design trade-offs between competing
objectives. The code is similar enough to the ``multi_start`` technique that both
algorithms are implemented in the same ConcurrentMetaIterator class.

The ``pareto_set`` specification must identify an optimization or least squares
calibration sub-method using either a ``method_pointer`` or a ``method_name`` plus an
optional ``model_pointer``. This minimizer is responsible for computing a set of
optimal solutions from a set of response weightings (multi-objective weights or least
squares term weights).
The weightings can be specified in any of three ways:

1. with ``random_weight_sets``, in which case the weighting sets are selected randomly
   within [0,1] bounds;
2. with ``weight_sets``, in which case the weighting sets are given explicitly as a
   list of values; or
3. with both ``random_weight_sets`` and ``weight_sets``, in which case the combined
   set of weights is used.

In aggregate, at least one set of weights must be specified; a sketch of a complete
input is given under *Examples* below.

*Expected HDF5 Output*

If Dakota was built with HDF5 support and run with the
:dakkw:`environment-results_output-hdf5` keyword, this method writes the best
parameters and responses returned by each sub-iterator. The weights are provided as
metadata. See the :ref:`hdf5_results-ms_pareto` documentation for details.
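**Examples**

The following input sketch shows one way to set up a ``pareto_set`` study. It assumes
Dakota's built-in ``text_book`` test driver, the ``optpp_q_newton`` optimizer as the
sub-method, and the method identifiers ``'PS'`` and ``'NLP'``; these are illustrative
choices, not requirements of this keyword. Three weighting sets are given explicitly
via ``weight_sets`` and two more are drawn randomly via ``random_weight_sets``::

   # Illustrative sketch (not a prescribed configuration): a Pareto set
   # study over three objectives using both explicit and random weightings.
   environment
     top_method_pointer = 'PS'
     results_output
       hdf5                       # optional: enables HDF5 results output

   method
     id_method = 'PS'
     pareto_set
       method_pointer = 'NLP'
       weight_sets =
         1.0   0.0   0.0
         0.0   1.0   0.0
         0.0   0.0   1.0
       random_weight_sets = 2

   method
     id_method = 'NLP'
     optpp_q_newton
       convergence_tolerance = 1.e-6

   variables
     continuous_design = 2
       initial_point   0.9   1.1
       lower_bounds    0.5  -2.9
       upper_bounds    5.8   2.9
       descriptors     'x1'  'x2'

   interface
     analysis_drivers = 'text_book'
       direct

   responses
     objective_functions = 3
     numerical_gradients
     no_hessians

Each weighting set triggers one run of the sub-method, so this sketch produces five
optimal solutions: three from the explicit weightings and two from the random ones.
Alternatively, the sub-method could be named directly with ``method_name`` (plus an
optional ``model_pointer``) in place of ``method_pointer`` and the second method block.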