.. _method-coliny_pattern_search:

"""""""""""""""""""""
coliny_pattern_search
"""""""""""""""""""""

Pattern search, derivative-free optimization method

**Topics**

package_scolib, package_coliny, global_optimization_methods

.. toctree::
   :hidden:
   :maxdepth: 1

   method-coliny_pattern_search-constant_penalty
   method-coliny_pattern_search-no_expansion
   method-coliny_pattern_search-expand_after_success
   method-coliny_pattern_search-pattern_basis
   method-coliny_pattern_search-stochastic
   method-coliny_pattern_search-total_pattern_size
   method-coliny_pattern_search-exploratory_moves
   method-coliny_pattern_search-synchronization
   method-coliny_pattern_search-contraction_factor
   method-coliny_pattern_search-constraint_penalty
   method-coliny_pattern_search-initial_delta
   method-coliny_pattern_search-variable_tolerance
   method-coliny_pattern_search-solution_target
   method-coliny_pattern_search-seed
   method-coliny_pattern_search-show_misc_options
   method-coliny_pattern_search-misc_options
   method-coliny_pattern_search-max_iterations
   method-coliny_pattern_search-convergence_tolerance
   method-coliny_pattern_search-max_function_evaluations
   method-coliny_pattern_search-scaling
   method-coliny_pattern_search-model_pointer

**Specification**

- *Alias:* None

- *Arguments:* None

**Child Keywords:**

+-------------------------+--------------------+------------------------------+---------------------------------------------+
| Required/Optional       | Description of     | Dakota Keyword               | Dakota Keyword Description                  |
|                         | Group              |                              |                                             |
+=========================+====================+==============================+=============================================+
| Optional                                     | `constant_penalty`__         | Use a simple weighted penalty to manage     |
|                                              |                              | feasibility                                 |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `no_expansion`__             | Don't allow expansion of the search pattern |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `expand_after_success`__     | Set the factor by which a search pattern    |
|                                              |                              | can be expanded                             |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `pattern_basis`__            | Pattern basis selection                     |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `stochastic`__               | Generate trial points in random order       |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `total_pattern_size`__       | Total number of points in search pattern    |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `exploratory_moves`__        | Exploratory moves selection                 |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `synchronization`__          | Select how Dakota schedules a batch of      |
|                                              |                              | concurrent function evaluations in a        |
|                                              |                              | parallel algorithm                          |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `contraction_factor`__       | Amount by which step length is rescaled     |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `constraint_penalty`__       | Multiplier for the penalty function         |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `initial_delta`__            | Initial step size for derivative-free       |
|                                              |                              | optimizers                                  |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `variable_tolerance`__       | Step length-based stopping criteria for     |
|                                              |                              | derivative-free optimizers                  |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `solution_target`__          | Stopping criteria based on objective        |
|                                              |                              | function value                              |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `seed`__                     | Seed of the random number generator         |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `show_misc_options`__        | Show algorithm parameters not exposed in    |
|                                              |                              | Dakota input                                |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `misc_options`__             | Set method options not available through    |
|                                              |                              | Dakota spec                                 |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `max_iterations`__           | Number of iterations allowed for optimizers |
|                                              |                              | and adaptive UQ methods                     |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `convergence_tolerance`__    | Stopping criterion based on objective       |
|                                              |                              | function or statistics convergence          |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `max_function_evaluations`__ | Number of function evaluations allowed for  |
|                                              |                              | optimizers                                  |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `scaling`__                  | Turn on scaling for variables, responses,   |
|                                              |                              | and constraints                             |
+----------------------------------------------+------------------------------+---------------------------------------------+
| Optional                                     | `model_pointer`__            | Identifier for model block to be used by a  |
|                                              |                              | method                                      |
+----------------------------------------------+------------------------------+---------------------------------------------+
.. __: method-coliny_pattern_search-constant_penalty.html
__ method-coliny_pattern_search-no_expansion.html
__ method-coliny_pattern_search-expand_after_success.html
__ method-coliny_pattern_search-pattern_basis.html
__ method-coliny_pattern_search-stochastic.html
__ method-coliny_pattern_search-total_pattern_size.html
__ method-coliny_pattern_search-exploratory_moves.html
__ method-coliny_pattern_search-synchronization.html
__ method-coliny_pattern_search-contraction_factor.html
__ method-coliny_pattern_search-constraint_penalty.html
__ method-coliny_pattern_search-initial_delta.html
__ method-coliny_pattern_search-variable_tolerance.html
__ method-coliny_pattern_search-solution_target.html
__ method-coliny_pattern_search-seed.html
__ method-coliny_pattern_search-show_misc_options.html
__ method-coliny_pattern_search-misc_options.html
__ method-coliny_pattern_search-max_iterations.html
__ method-coliny_pattern_search-convergence_tolerance.html
__ method-coliny_pattern_search-max_function_evaluations.html
__ method-coliny_pattern_search-scaling.html
__ method-coliny_pattern_search-model_pointer.html

**Description**

Pattern search techniques are nongradient-based optimization methods which use a set of offsets from the current iterate to locate improved points in the design space.

See the page :ref:`topic-package_scolib` for important information regarding all SCOLIB methods.

``coliny_pattern_search`` supports concurrency up to the size of the search pattern.

Traditional pattern search methods search with a fixed pattern of search directions to try to find improvements to the current iterate. The SCOLIB pattern search methods generalize this simple algorithmic strategy to enable control of how the search pattern is adapted, as well as how each search pattern is evaluated.

The ``stochastic`` and ``synchronization`` specifications denote how the trial points are evaluated. The ``stochastic`` specification indicates that the trial points are considered in a random order. For parallel pattern search, ``synchronization`` dictates whether the evaluations are scheduled using a ``blocking`` scheduler or a ``nonblocking`` scheduler. In the ``blocking`` case, all points in the pattern are evaluated (in parallel), and if the best of these trial points is an improving point, then it becomes the next iterate. These runs are reproducible, assuming use of the same seed in the ``stochastic`` case. In the ``nonblocking`` case, not all points in the pattern are necessarily evaluated, since the first improving point found becomes the next iterate. Since the algorithm steps will be subject to parallel timing variabilities, these runs will not generally be repeatable.

The ``synchronization`` specification has similar connotations for sequential pattern search. If ``blocking`` is specified, then each sequential iteration terminates after all trial points have been considered; if ``nonblocking`` is specified, then each sequential iteration terminates after the first improving trial point is evaluated. In this release, both the ``blocking`` and ``nonblocking`` specifications result in blocking behavior (except in the case where ``exploratory_moves`` below is set to ``adaptive_pattern``). Nonblocking behavior will be re-enabled after some underlying technical issues have been resolved.
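For example, a method block along the following lines (an illustrative sketch only; the seed and evaluation limit are arbitrary example values, not recommendations) requests randomized trial-point ordering with a blocking schedule, so that repeated runs with the same seed remain reproducible::

   # Illustrative sketch: randomized trial ordering with a blocking schedule.
   # The seed and evaluation limit are arbitrary example values.
   method
     coliny_pattern_search
       stochastic
       seed = 1234
       synchronization blocking
       max_function_evaluations = 500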
The particular form of the search pattern is controlled by the ``pattern_basis`` specification. If ``pattern_basis`` is ``coordinate``, then the pattern search uses a plus and minus offset in each coordinate direction, for a total of *2n* function evaluations in the pattern. This case is depicted in Figure 5.3 for three coordinate dimensions.

Figure 5.3: Depiction of the coordinate pattern search algorithm (image: ``pattern_search.jpg``)

If ``pattern_basis`` is ``simplex``, then pattern search uses a minimal positive basis simplex for the parameter space, for a total of *n+1* function evaluations in the pattern. Note that the ``simplex`` pattern basis can be used for unbounded problems only.

The ``total_pattern_size`` specification can be used to augment the basic ``coordinate`` and ``simplex`` patterns with additional function evaluations, and it is particularly useful for parallel load balancing. For example, if some function evaluations in the pattern are dropped due to duplication or bound constraint interaction, then the ``total_pattern_size`` specification instructs the algorithm to generate new offsets to bring the total number of evaluations up to this consistent total.

The ``exploratory_moves`` specification controls how the search pattern is adapted. (The search pattern can be adapted after an improving trial point is found, or after all trial points in a search pattern have been found to be unimproving points.) The following exploratory moves selections are supported by SCOLIB:

- The ``basic_pattern`` case is the simple pattern search approach, which uses the same pattern in each iteration.

- The ``multi_step`` case examines each trial step in the pattern in turn. If a successful step is found, the pattern search continues examining trial steps about this new point. In this manner, the effects of multiple successful steps are cumulative within a single iteration. This option does not support any parallelism and will result in a serial pattern search.

- The ``adaptive_pattern`` case invokes a pattern search technique that adaptively rescales the different search directions to maximize the number of redundant function evaluations. See :cite:p:`Hart2001c` for details of this method. In preliminary experiments, this method had more robust performance than the standard ``basic_pattern`` case in serial tests. This option supports a limited degree of parallelism. After successful iterations (where the step length is not contracted), a parallel search will be performed. After unsuccessful iterations (where the step length is contracted), only a single evaluation is performed.

The ``initial_delta`` and ``variable_tolerance`` specifications provide the initial offset size and the threshold size at which to terminate the algorithm. For any dimension that has both upper and lower bounds, the step length is internally rescaled to provide search steps of length ``initial_delta`` * range * 0.1. This rescaling does not occur for other dimensions, so search steps in those directions have length ``initial_delta``. Note that the factor of 0.1 in the rescaling could result in an undesirably small initial step. This can be offset by providing a large ``initial_delta``.

In general, pattern search methods can expand and contract their step lengths. SCOLIB pattern search methods contract the step length by the value ``contraction_factor``, and they expand the step length by the value (1/contraction_factor).
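As a concrete illustration of the step-length rules above (a sketch with arbitrary example values), a variable bounded on [-1, 1] has a range of 2, so ``initial_delta = 0.5`` yields an initial step of 0.5 * 2.0 * 0.1 = 0.1 in that dimension::

   # Illustrative sketch of the step-length controls; values are arbitrary examples.
   method
     coliny_pattern_search
       initial_delta = 0.5         # bounded dimension with range 2.0:
                                   #   initial step = 0.5 * 2.0 * 0.1 = 0.1
       contraction_factor = 0.5    # unsuccessful iteration: step 0.1 -> 0.05
                                   # expansion (if allowed) uses 1/0.5 = 2: step 0.1 -> 0.2
       variable_tolerance = 1.e-4  # terminate once the step length falls below 1.e-4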
The ``expand_after_success`` control specifies how many successful objective function improvements must occur with a specific step length prior to expansion of the step length, whereas the ``no_expansion`` flag instructs the algorithm to forgo pattern expansion altogether.

Finally, constraint infeasibility can be managed in a somewhat more sophisticated manner than the simple weighted penalty function. If the ``constant_penalty`` specification is used, then the simple weighted penalty scheme described above is used. Otherwise, the constraint penalty is adapted to the value ``constraint_penalty/L``, where L is the smallest step length used so far (see the sketch at the end of this page).

*Expected HDF5 Output*

If Dakota was built with HDF5 support and run with the :dakkw:`environment-results_output-hdf5` keyword, this method writes the following results to HDF5:

- :ref:`hdf5_results-best_params`
- :ref:`hdf5_results-best_obj_fncs` (when :dakkw:`responses-objective_functions` are specified)
- :ref:`hdf5_results-best_constraints`
- :ref:`hdf5_results-calibration` (when :dakkw:`responses-calibration_terms` are specified)
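To tie together several of the controls described above, the following method block is a minimal sketch (keyword values are illustrative, not recommendations) of a coordinate pattern search with delayed expansion and an adapted constraint penalty::

   # Illustrative sketch combining pattern, expansion, and penalty controls.
   method
     coliny_pattern_search
       pattern_basis coordinate
       exploratory_moves basic_pattern
       expand_after_success = 2      # require two improving steps at a given
                                     #   step length before expanding
       contraction_factor = 0.75
       constraint_penalty = 100.0    # adapted to constraint_penalty/L as the
                                     #   smallest step length L shrinks; add the
                                     #   constant_penalty flag to use the simple
                                     #   constant weighted penalty instead
       max_iterations = 200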