Numerical Nonlinear Global Optimization

Introduction

Numerical algorithms for constrained nonlinear optimization can be broadly categorized into gradient-based methods and direct search methods. Gradient-based methods use first derivatives (gradients) or second derivatives (Hessians). Examples are the sequential quadratic programming (SQP) method, the augmented Lagrangian method, and the (nonlinear) interior point method. Direct search methods do not use derivative information. Examples are Nelder-Mead, genetic algorithms and differential evolution, and simulated annealing. Direct search methods tend to converge more slowly, but can be more tolerant of noise in the function and constraints.

Typically, algorithms only build up a local model of the problem. Furthermore, many such algorithms insist on a certain decrease of the objective function, or a decrease of a merit function that is a combination of the objective and constraints, to ensure convergence of the iterative process. Such algorithms will, if convergent, only find local optima, and are called local optimization algorithms. In the Wolfram Language local optimization problems can be solved using FindMinimum.

Global optimization algorithms, on the other hand, attempt to find the global optimum, typically by allowing decrease as well as increase of the objective/merit function. Such algorithms are usually computationally more expensive. Global optimization problems can be solved exactly using Minimize or numerically using NMinimize.

This solves a nonlinear programming problem using Minimize, which gives an exact solution:
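A minimal illustration, with an assumed objective and constraint (the particular problem is only an example):
    Minimize[{x^2 + y^2, x + y >= 1}, {x, y}]
    (* {1/2, {x -> 1/2, y -> 1/2}} *)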

This solves the same problem numerically. NMinimize returns a machine-number solution:
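Continuing with the same illustrative problem:
    NMinimize[{x^2 + y^2, x + y >= 1}, {x, y}]
    (* {0.5, {x -> 0.5, y -> 0.5}} *)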
FindMinimum numerically finds a local minimum. In this example the local minimum found is also a global minimum:
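A sketch with the same illustrative problem and an assumed starting point:
    FindMinimum[{x^2 + y^2, x + y >= 1}, {{x, 1}, {y, 1}}]
    (* {0.5, {x -> 0.5, y -> 0.5}} *)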

The NMinimize Function

NMinimize and NMaximize implement several algorithms for finding constrained global optima. The methods are flexible enough to cope with functions that are not differentiable or continuous and are not easily trapped by local optima.

Finding a global optimum can be arbitrarily difficult, even without constraints, and so the methods used may fail. It may frequently be useful to optimize the function several times with different starting conditions and take the best of the results.

This finds the maximum of a function:
This finds the minimum of a function subject to constraints:
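Sketches of both calls; the objectives and constraints here are assumed for illustration:
    NMaximize[1 - x^2 - y^4, {x, y}]
    (* maximum 1. at x -> 0., y -> 0. *)

    NMinimize[{x^2 + (y - 1)^2, x + y >= 1 && x^2 + y^2 <= 4}, {x, y}]
    (* minimum 0. at x -> 0., y -> 1., which satisfies both constraints *)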

The constraints to NMinimize and NMaximize may be either a list or a logical combination of equalities, inequalities, and domain specifications. Equalities and inequalities may be nonlinear. Specify a domain for a variable using Element, for example, Element[x, Integers] or x ∈ Integers. Variables must be either integers or real numbers, and will be assumed to be real numbers unless specified otherwise. Constraints are generally enforced by adding penalties when points leave the feasible region.

Constraints can contain logical operators like And, Or, and so on:
Here one of the variables is restricted to being an integer:
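Sketches of both forms, with assumed objectives:
    (* an Or constraint: the feasible region is the union of two half-planes *)
    NMinimize[{x^2 + y^2, (x <= -1 || x >= 1) && y >= x}, {x, y}]
    (* minimum 1. at x -> -1., y -> 0. *)

    (* an integer variable *)
    NMinimize[{(x - 1/3)^2 + (y - 1/3)^2, Element[x, Integers]}, {x, y}]
    (* minimum 1/9 at x -> 0, y -> 1/3 *)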

In order for NMinimize to work, it needs a rectangular initial region in which to start. This is similar to giving other numerical methods a starting point or starting points. The initial region is specified by giving each variable a finite upper and lower bound. This is done by including a <= x <= b in the constraints, or {x, a, b} in the variables. If both are given, the bounds in the variables are used for the initial region, and the constraints are just used as constraints. If no initial region is specified for a variable x, the default initial region of -1 to 1 is used. Different variables can have initial regions defined in different ways.

Here the initial region is taken from the variables. The problem is unconstrained:
Here the initial region is taken from the constraints:
Here the initial region for one variable is taken from the constraints, for a second variable from the variable specification, and for a third from the default. The problem is unconstrained in the second and third variables, but not in the first:
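A sketch of this mixed specification, with assumed bounds:
    (* x: initial region from the variable specification; y: from the constraints; z: default -1 to 1 *)
    NMinimize[{x^2 + y^2 + z^2, 3 <= y <= 5}, {{x, 10, 20}, y, z}]
    (* minimum 9. at x -> 0., y -> 3., z -> 0.; only y is actually constrained *)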
The following polynomial has more than one global minimum. Nelder-Mead finds one of the minima:
The other minimum can be found by using a different RandomSeed:
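For instance, with an assumed symmetric polynomial whose global minima are at x = -1 and x = 1, different seeds can land in different minima:
    NMinimize[(x^2 - 1)^2, x, Method -> {"NelderMead", "RandomSeed" -> 0}]
    NMinimize[(x^2 - 1)^2, x, Method -> {"NelderMead", "RandomSeed" -> 7}]
    (* each run returns 0. at either x -> -1. or x -> 1., depending on the seed *)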

NMinimize and NMaximize have several optimization methods available: Automatic, "DifferentialEvolution", "NelderMead", "RandomSearch", and "SimulatedAnnealing". The optimization method is controlled by the Method option, which either takes the method as a string, or takes a list whose first element is the method as a string and whose remaining elements are method-specific options. All method-specific option left-hand sides should also be given as strings.

The following function has a large number of local minima:
Use RandomSearch to find a minimum:
Use RandomSearch with more starting points to find the global minimum:
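A sketch using an assumed Rastrigin-style stand-in for such a function; "SearchPoints" controls how many local searches are started, and 250 is an arbitrary larger count:
    f[x_, y_] := 20 + x^2 + y^2 - 10 (Cos[2 Pi x] + Cos[2 Pi y]);  (* many local minima; global minimum 0 at the origin *)
    NMinimize[{f[x, y], -5 <= x <= 5 && -5 <= y <= 5}, {x, y}, Method -> "RandomSearch"]
    NMinimize[{f[x, y], -5 <= x <= 5 && -5 <= y <= 5}, {x, y},
      Method -> {"RandomSearch", "SearchPoints" -> 250}]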

With the default method, NMinimize picks which method to use based on the type of problem. If the objective function and constraints are linear, LinearOptimization is used. If there are integer variables, or if the head of the objective function is not a numeric function, differential evolution is used. For everything else, it uses Nelder-Mead, but if Nelder-Mead does poorly, it switches to differential evolution.

Because the methods used by NMinimize may not improve the result at every iteration, convergence is only checked after several iterations have occurred.

Numerical Algorithms for Constrained Global Optimization

Nelder-Mead

The Nelder-Mead method is a direct search method. For a function of n variables, the algorithm maintains a set of n+1 points forming the vertices of a polytope in n-dimensional space. This method is often termed the "simplex" method, which should not be confused with the well-known simplex method for linear programming.

At each iteration, the n+1 points x_1, x_2, ..., x_{n+1} form a polytope. The points are ordered so that f(x_1) <= f(x_2) <= ... <= f(x_{n+1}). A new point is then generated to replace the worst point x_{n+1}.

Let c be the centroid of the polytope consisting of the best n points, c = (x_1 + x_2 + ... + x_n)/n. A trial point x_t is generated by reflecting the worst point through the centroid, x_t = c + α (c - x_{n+1}), where α > 0 is a parameter.

If the new point x_t is neither a new worst point nor a new best point, f(x_1) <= f(x_t) <= f(x_n), x_t replaces x_{n+1}.

If the new point x_t is better than the best point, f(x_t) < f(x_1), the reflection is very successful and can be carried out further to x_e = c + β (x_t - c), where β > 1 is a parameter used to expand the polytope. If the expansion is successful, f(x_e) < f(x_t), x_e replaces x_{n+1}; otherwise the expansion failed, and x_t replaces x_{n+1}.

If the new point x_t is worse than the second worst point, f(x_t) > f(x_n), the polytope is assumed to be too large and needs to be contracted. A new trial point x_c is defined as

    x_c = c + γ (x_{n+1} - c)   if f(x_t) >= f(x_{n+1}),
    x_c = c + γ (x_t - c)       if f(x_t) < f(x_{n+1}),

where 0 < γ < 1 is a parameter. If f(x_c) < Min[f(x_{n+1}), f(x_t)], the contraction is successful, and x_c replaces x_{n+1}. Otherwise a further contraction is carried out.

The process is assumed to have converged if the difference between the best function values in the new and old polytope, as well as the distance between the new best point and the old best point, are less than the tolerances provided by AccuracyGoal and PrecisionGoal.

Strictly speaking, Nelder-Mead is not a true global optimization algorithm; however, in practice it tends to work reasonably well for problems that do not have many local minima.

option name          default value
"ContractRatio"      0.5            ratio used for contraction
"ExpandRatio"        2.0            ratio used for expansion
"InitialPoints"      Automatic      set of initial points
"PenaltyFunction"    Automatic      function applied to constraints to penalize invalid points
"PostProcess"        Automatic      whether to post-process using local search methods
"RandomSeed"         0              starting value for the random number generator
"ReflectRatio"       1.0            ratio used for reflection
"ShrinkRatio"        0.5            ratio used for shrinking
"Tolerance"          0.001          tolerance for accepting constraint violations

NelderMead specific options.

Here a function is minimized inside the unit disk using Nelder-Mead:
Here is a function with several local minima of different depths:
With the default parameters, Nelder-Mead is too easily trapped in a local minimum:
By using settings that are more aggressive and less likely to make the simplex smaller, the results are better:
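A sketch of such settings on an assumed multi-minima objective; the particular ratios are just one example of making the polytope updates more aggressive:
    g[x_, y_] := 20 + x^2 + y^2 - 10 (Cos[2 Pi x] + Cos[2 Pi y]);  (* assumed objective with many local minima *)
    NMinimize[{g[x, y], -5 <= x <= 5 && -5 <= y <= 5}, {x, y},
      Method -> {"NelderMead", "ReflectRatio" -> 2, "ContractRatio" -> 0.95, "ShrinkRatio" -> 0.95}]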

Differential Evolution

Differential evolution is a simple stochastic function minimizer.

The algorithm maintains a population of m points, {x_1, x_2, ..., x_m}, where typically m >> n, with n being the number of variables.

During each iteration of the algorithm, a new population of m points is generated. The jth new point is generated by picking three random points, x_u, x_v, and x_w, from the old population, and forming x_s = x_w + s (x_u - x_v), where s is a real scaling factor. Then a new point x_new is constructed from x_j and x_s by taking the ith coordinate from x_s with probability ρ and otherwise taking the coordinate from x_j. If f(x_new) < f(x_j), then x_new replaces x_j in the new population. The probability ρ is controlled by the "CrossProbability" option.
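
The update rule can be sketched directly in Wolfram Language. This is not NMinimize's internal implementation; deStep, the vector-valued objective f, the population size, iteration count, and parameter values are all assumptions made for this minimal unconstrained illustration:
    (* one generation of differential evolution: mutation, crossover, selection *)
    deStep[f_, pop_, s_, rho_] := Table[
       Module[{xj = pop[[j]], u, v, w, xs, xnew},
        {u, v, w} = RandomSample[Delete[pop, j], 3];                       (* three distinct points from the old population *)
        xs = w + s (u - v);                                                (* scaled difference vector added to a third point *)
        xnew = MapThread[If[RandomReal[] < rho, #1, #2] &, {xs, xj}];      (* coordinate-wise crossover with probability rho *)
        If[f[xnew] < f[xj], xnew, xj]],                                    (* keep the new point only if it improves *)
       {j, Length[pop]}];

    f[{x_, y_}] := (x^2 + y - 11)^2 + (x + y^2 - 7)^2;   (* Himmelblau's function: four global minima with value 0 *)
    pop = RandomReal[{-5, 5}, {20, 2}];                  (* 20 random starting points in two dimensions *)
    best = First[SortBy[Nest[deStep[f, #, 0.6, 0.5] &, pop, 200], f]];
    f[best]                                              (* typically very close to 0 *)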

The process is assumed to have converged if the difference between the best function values in the new and old populations, as well as the distance between the new best point and the old best point, are less than the tolerances provided by AccuracyGoal and PrecisionGoal.

The differential evolution method is computationally expensive, but is relatively robust and tends to work well for problems that have more local minima.

option name          default value
"CrossProbability"   0.5            probability that a gene is taken from x_i
"InitialPoints"      Automatic      set of initial points
"PenaltyFunction"    Automatic      function applied to constraints to penalize invalid points
"PostProcess"        Automatic      whether to post-process using local search methods
"RandomSeed"         0              starting value for the random number generator
"ScalingFactor"      0.6            scale applied to the difference vector in creating a mate
"SearchPoints"       Automatic      size of the population used for evolution
"Tolerance"          0.001          tolerance for accepting constraint violations

DifferentialEvolution specific options.

Here a function is minimized inside the unit disk using DifferentialEvolution:
The following constrained optimization problem has a global minimum of 7.667180068813135:
With the default settings for DifferentialEvolution, an unsatisfactory solution results:
By adjusting ScalingFactor, the results are much better. In this case, the increased ScalingFactor gives DifferentialEvolution better mobility with respect to the integer variables:
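A sketch of raising "ScalingFactor" on an assumed mixed-integer problem (the default value is 0.6, per the table above):
    NMinimize[{(x - 1/2)^2 + (y - 37/10)^2,
       Element[y, Integers] && -10 <= x <= 10 && -10 <= y <= 10}, {x, y},
      Method -> {"DifferentialEvolution", "ScalingFactor" -> 1}]
    (* minimum 0.09 at x -> 0.5, y -> 4 *)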

Simulated Annealing

Simulated annealing is a simple stochastic function minimizer. It is motivated from the physical process of annealing, where a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle to a lower energy state, thus becoming a tougher metal. Using optimization terminology, annealing allows the structure to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.

At each iteration, a new point, x_new, is generated in the neighborhood of the current point, x. The radius of the neighborhood decreases with each iteration. The best point found so far, x_best, is also tracked.

If f(x_new) <= f(x_best), x_new replaces x_best and x. Otherwise, x_new replaces x with probability e^{b(i, Δf, f_0)}. Here b is the function defined by BoltzmannExponent, i is the current iteration, Δf is the change in the objective function value, and f_0 is the value of the objective function from the previous iteration. The default function for b is -Δf log(1+i)/10.

Like the RandomSearch method, SimulatedAnnealing uses multiple starting points, and finds an optimum starting from each of them.

The default number of starting points, given by the option SearchPoints, is min(2 d, 50), where d is the number of variables.

For each starting point, this process is repeated until the maximum number of iterations is reached, the method converges to a point, or the method stays at the same point consecutively for the number of iterations given by LevelIterations.

option name          default value
"BoltzmannExponent"  Automatic      exponent of the probability function
"InitialPoints"      Automatic      set of initial points
"LevelIterations"    50             maximum number of iterations to stay at a given point
"PenaltyFunction"    Automatic      function applied to constraints to penalize invalid points
"PerturbationScale"  1.0            scale for the random jump
"PostProcess"        Automatic      whether to post-process using local search methods
"RandomSeed"         0              starting value for the random number generator
"SearchPoints"       Automatic      number of initial points
"Tolerance"          0.001          tolerance for accepting constraint violations

SimulatedAnnealing specific options.

Here a function in two variables is minimized using SimulatedAnnealing:
Here is a function with many local minima:
By default, the step size for SimulatedAnnealing is not large enough to escape from the local minima:
By increasing PerturbationScale, larger step sizes are taken to produce a much better solution:
Here BoltzmannExponent is set to use an exponential cooling function that gives faster convergence (note that the modified PerturbationScale is still being used as well):
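A sketch of both settings together on an assumed objective with many local minima; the cooling function shown is just one possible choice of b(i, Δf, f_0):
    h[x_, y_] := 20 + x^2 + y^2 - 10 (Cos[2 Pi x] + Cos[2 Pi y]);  (* assumed objective with many local minima *)
    NMinimize[{h[x, y], -10 <= x <= 10 && -10 <= y <= 10}, {x, y},
      Method -> {"SimulatedAnnealing", "PerturbationScale" -> 3,
        "BoltzmannExponent" -> Function[{i, df, f0}, -df/Exp[i/10]]}]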

Random Search

The random search algorithm works by generating a population of random starting points and using a local optimization method from each of the starting points to converge to a local minimum. The best local minimum found is taken as the solution.

The possible local search methods are Automatic and "InteriorPoint". The default method is Automatic, which uses FindMinimum with unconstrained methods applied to a system with penalty terms added for the constraints. When Method is set to "InteriorPoint", a nonlinear interior-point method is used.

The default number of starting points, given by the option SearchPoints, is min(10 d, 100), where d is the number of variables.

Convergence for RandomSearch is determined by convergence of the local method for each starting point.

RandomSearch is fast, but does not scale very well with the dimension of the search space. It also suffers from many of the same limitations as FindMinimum. It is not well suited for discrete problems and others where derivatives or secants give little useful information about the problem.

option name          default value
"InitialPoints"      Automatic      set of initial points
"Method"             Automatic      which method to use for minimization
"PenaltyFunction"    Automatic      function applied to constraints to penalize invalid points
"PostProcess"        Automatic      whether to post-process using local search methods
"RandomSeed"         0              starting value for the random number generator
"SearchPoints"       Automatic      number of points to use for starting local searches
"Tolerance"          0.001          tolerance for accepting constraint violations

RandomSearch specific options.

Here a function is minimized inside the unit disk using RandomSearch:
Here is a function with several local minima of different depths that is generally difficult to optimize:
With the default number of SearchPoints, sometimes the minimum is not found:
Using many more SearchPoints produces better answers:
Here points are generated on a grid for use as initial points:
This uses nonlinear interior point methods to find the minimum of a sum of squares:
For some classes of problems, limiting the number of SearchPoints can be much faster without affecting the quality of the solution:
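A sketch covering these variations, with assumed objectives; the grid of initial points and the search-point counts are arbitrary choices:
    r[x_, y_] := 20 + x^2 + y^2 - 10 (Cos[2 Pi x] + Cos[2 Pi y]);  (* assumed objective with many local minima *)

    (* initial points on a regular grid instead of random starting points *)
    gridPts = Flatten[Table[{i, j}, {i, -5, 5, 2}, {j, -5, 5, 2}], 1];
    NMinimize[{r[x, y], -5 <= x <= 5 && -5 <= y <= 5}, {x, y},
      Method -> {"RandomSearch", "InitialPoints" -> gridPts}]

    (* a nonlinear interior-point local method for a smooth sum-of-squares objective *)
    NMinimize[{(x - 2)^2 + (y - 3)^2 + (x + y - 4)^2, x >= 0 && y >= 0}, {x, y},
      Method -> {"RandomSearch", "Method" -> "InteriorPoint"}]

    (* fewer starting points can be much faster when the problem has few local minima *)
    NMinimize[{(x - 1)^2 + (y + 2)^2, x^2 + y^2 <= 9}, {x, y},
      Method -> {"RandomSearch", "SearchPoints" -> 5}]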