moga

Multi-objective Genetic Algorithm (a.k.a. Evolutionary Algorithm)

Topics

package_jega

Specification

  • Alias: None

  • Arguments: None

Child Keywords (all optional):

  • fitness_type: Select the fitness type for JEGA methods

  • replacement_type: Select a replacement type for JEGA methods

  • niching_type: Specify the type of niching pressure

  • convergence_type: Select the convergence type for JEGA methods

  • postprocessor_type: Post process the final solution from moga

  • max_iterations: Number of iterations allowed for optimizers and adaptive UQ methods

  • max_function_evaluations: Number of function evaluations allowed for optimizers

  • scaling: Turn on scaling for variables, responses, and constraints

  • population_size: Set the initial population size in JEGA methods

  • log_file: Specify the name of a log file

  • print_each_pop: Print every population to a population file

  • initialization_type: Specify how to initialize the population

  • crossover_type: Select a crossover type for JEGA methods

  • mutation_type: Select a mutation type for JEGA methods

  • seed: Seed of the random number generator

  • convergence_tolerance: Stopping criterion based on objective function or statistics convergence

  • model_pointer: Identifier for model block to be used by a method

Description

moga stands for Multi-objective Genetic Algorithm, a global optimization method that performs Pareto optimization for multiple objectives. It supports general constraints and a mixture of real and discrete variables.
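For illustration, a minimal moga study might be specified as in the sketch below. The analysis driver name ('my_simulator'), variable bounds, objective count, and convergence settings are placeholders chosen for a hypothetical two-variable, two-objective problem and must be adapted to an actual study.

    # Hypothetical two-variable, two-objective moga study
    environment
      tabular_data
        tabular_data_file = 'moga_results.dat'

    method
      moga
        seed = 10983
        population_size = 50
        max_function_evaluations = 2500
        convergence_type metric_tracker
          percent_change = 0.05
          num_generations = 10

    variables
      continuous_design = 2
        lower_bounds   0.0  0.0
        upper_bounds   1.0  1.0
        descriptors    'x1' 'x2'

    interface
      fork
        analysis_drivers = 'my_simulator'   # placeholder driver script

    responses
      objective_functions = 2
      no_gradients
      no_hessians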

Constraints

moga can utilize linear constraints, specified with the linear_inequality_constraint_matrix, linear_inequality_lower_bounds, linear_inequality_upper_bounds, linear_equality_constraint_matrix, and linear_equality_targets keywords (and their associated scaling controls).

Configuration

The genetic algorithm configurations are:

  • fitness

  • replacement

  • niching

  • convergence

  • postprocessor

  • initialization

  • crossover

  • mutation

  • population size

The steps followed by the algorithm are listed in the Theory section below. These configurations affect how the algorithm completes each step.
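The sketch below shows one way these configuration groups might appear together in a method block. The specific option names and values (shuffle_random crossover, replace_uniform mutation, the below_limit value, the niching radii, and the post-processing distances) are illustrative choices that assume a two-objective problem; see the individual child keyword pages for the complete option lists.

    method
      moga
        population_size = 100                 # initial population size
        initialization_type unique_random     # initialization
        crossover_type shuffle_random         # crossover
          num_parents = 2
          num_offspring = 2
          crossover_rate = 0.8
        mutation_type replace_uniform         # mutation
          mutation_rate = 0.1
        fitness_type domination_count         # fitness
        replacement_type below_limit = 6      # replacement
        niching_type radial = 0.05 0.05       # niching (one radius per objective)
        convergence_type metric_tracker       # convergence
        postprocessor_type                    # postprocessor
          orthogonal_distance = 0.05 0.05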

Stopping Criteria

The moga method respects the max_iterations and max_function_evaluations method independent controls to provide integer limits for the maximum number of generations and function evaluations, respectively.

The algorithm also stops when convergence is reached. This involves repeated assessment of the algorithm’s progress in solving the problem, until some criterion is met.

The specification for convergence in a moga can either be metric_tracker or can be omitted altogether. If omitted, no convergence algorithm is used and the method relies only on the other stopping criteria.
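As a sketch, the method fragment below combines the independent limits with metric_tracker convergence; the particular values are illustrative rather than recommendations.

    method
      moga
        max_iterations = 100                  # cap on generations
        max_function_evaluations = 5000       # cap on function evaluations
        convergence_type metric_tracker
          percent_change = 0.05               # converged when the tracked metric changes
          num_generations = 10                # by less than 5% over 10 generations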

Expected Outputs

The moga method respects the output method independent control to vary the amount of information presented to the user during execution.

The final results are written to the Dakota tabular output. Additional information is also available - see the log_file and print_each_pop keywords.

Note that moga and soga create additional output files during execution. “finaldata.dat” holds the final set of Pareto-optimal solutions after any post-processing is complete, while “discards.dat” holds solutions that were discarded from the population during the course of evolution.

It can often be useful to plot objective function values from these files to visualize the Pareto front and to verify that the finaldata.dat solutions dominate the discards.dat solutions. Solutions are written to these files in the format “Input1 … InputN Output1 … OutputM”.
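For example, the fragment below (with an illustrative log file name) requests both forms of additional output:

    method
      moga
        log_file = 'moga.log'   # name of the JEGA log file (illustrative)
        print_each_pop          # write every generation's population to a file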

Expected HDF5 Output

If Dakota was built with HDF5 support and run with the hdf5 keyword, this method writes the following results to HDF5:

Important Notes

The pool of potential members is the current population and the current set of offspring.

Choice of fitness assessors is strongly related to the type of replacement algorithm being used and can have a profound effect on the solutions selected for the next generation.

If using the fitness types layer_rank or domination_count, it is strongly recommended that you use the replacement_type below_limit (although the roulette wheel selectors can also be used).

The functionality of the domination_count selector of JEGA v1.0 can now be achieved using the domination_count fitness type and below_limit replacement type.
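A sketch of the recommended pairing is shown below; the limit value of 6 is illustrative.

    method
      moga
        fitness_type domination_count      # fitness = number of dominating designs
        replacement_type below_limit = 6   # retain designs whose domination count is below the limit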

Theory

The basic steps of the moga algorithm are as follows:

  1. Initialize the population

  2. Evaluate the population (calculate the values of the objective function and constraints for each population member)

  3. Loop until converged, or stopping criteria reached

       1. Perform crossover

       2. Perform mutation

       3. Evaluate the new population

       4. Assess the fitness of each member in the population

       5. Replace the population with members selected to continue in the next generation

       6. Apply niche pressure to the population

       7. Test for convergence

  4. Perform post-processing

If moga is used in a hybrid optimization method (which requires one optimal solution from each individual optimization method to be passed to the subsequent optimization method as its starting point), the solution in the Pareto set closest to the “utopia” point is given as the best solution. This solution is also reported in the Dakota output.

This “best” solution in the Pareto set has minimum distance from the utopia point. The utopia point is defined as the point of extreme (best) values for each objective function. For example, if the Pareto front is bounded by (1,100) and (90,2), then (1,2) is the utopia point. There will be a point in the Pareto set that has minimum L2-norm distance to this point, for example (10,10) may be such a point.

If moga is used in a method which may require passing multiple solutions to the next level (such as the surrogate_based_global method or hybrid methods), the orthogonal_distance postprocessor type may be used to specify minimum distances between solutions, winnowing the full Pareto front down to a subset that is passed to the next iteration.
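As a sketch, such a specification might look like the following, assuming two objectives and illustrative distance values:

    method
      moga
        postprocessor_type
          orthogonal_distance = 0.01 0.01   # minimum spacing per objective in the returned set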