---
title: "NMaxValue"
language: "en"
type: "Symbol"
summary: "NMaxValue[f, x] gives the global maximum value of f with respect to x. NMaxValue[f, {x, y, ...}] gives the global maximum value of f with respect to x, y, .... NMaxValue[{f, cons}, {x, y, ...}] gives the global maximum value of f subject to the constraints cons. NMaxValue[..., x \\[Element] reg] constrains x to be in the region reg."
keywords: 
- constrained optimization
- cost function
- differential evolution
- extremization
- flexible polyhedron method
- global minimization
- goal functions
- integer programming
- linear programming
- minimization
- Nelder-Mead
- numerical minimization
- objective functions
- operations research
- optimization
- pay-off functions
- random search
- simulated annealing
- argmin
- minpos
- NelderMead
- DifferentialEvolution
- SimulatedAnnealing
- RandomSearch
- AMOEBA
- CONSTRAINED_MIN
- DFPMIN
- POWELL
- minimize
canonical_url: "https://reference.wolfram.com/language/ref/NMaxValue.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Optimization"
    link: "https://reference.wolfram.com/language/guide/Optimization.en.md"
  - 
    title: "Symbolic Vectors, Matrices and Arrays"
    link: "https://reference.wolfram.com/language/guide/SymbolicArrays.en.md"
  - 
    title: "Convex Optimization"
    link: "https://reference.wolfram.com/language/guide/ConvexOptimization.en.md"
related_functions: 
  - 
    title: "NMinimize"
    link: "https://reference.wolfram.com/language/ref/NMinimize.en.md"
  - 
    title: "Maximize"
    link: "https://reference.wolfram.com/language/ref/Maximize.en.md"
  - 
    title: "FindMinimum"
    link: "https://reference.wolfram.com/language/ref/FindMinimum.en.md"
  - 
    title: "NArgMin"
    link: "https://reference.wolfram.com/language/ref/NArgMin.en.md"
  - 
    title: "NMinValue"
    link: "https://reference.wolfram.com/language/ref/NMinValue.en.md"
  - 
    title: "FindFit"
    link: "https://reference.wolfram.com/language/ref/FindFit.en.md"
  - 
    title: "LinearOptimization"
    link: "https://reference.wolfram.com/language/ref/LinearOptimization.en.md"
  - 
    title: "ConvexOptimization"
    link: "https://reference.wolfram.com/language/ref/ConvexOptimization.en.md"
  - 
    title: "GeometricOptimization"
    link: "https://reference.wolfram.com/language/ref/GeometricOptimization.en.md"
  - 
    title: "LeastSquares"
    link: "https://reference.wolfram.com/language/ref/LeastSquares.en.md"
  - 
    title: "BayesianMinimization"
    link: "https://reference.wolfram.com/language/ref/BayesianMinimization.en.md"
  - 
    title: "RegionDistance"
    link: "https://reference.wolfram.com/language/ref/RegionDistance.en.md"
  - 
    title: "SpherePoints"
    link: "https://reference.wolfram.com/language/ref/SpherePoints.en.md"
  - 
    title: "NetTrain"
    link: "https://reference.wolfram.com/language/ref/NetTrain.en.md"
related_tutorials: 
  - 
    title: "Numerical Mathematics: Basic Operations"
    link: "https://reference.wolfram.com/language/tutorial/NumericalCalculations.en.md"
  - 
    title: "Numerical Optimization"
    link: "https://reference.wolfram.com/language/tutorial/NumericalOperationsOnFunctions.en.md#24524"
  - 
    title: "Numerical Nonlinear Global Optimization"
    link: "https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.en.md"
  - 
    title: "Constrained Optimization"
    link: "https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationOverview.en.md"
  - 
    title: "Unconstrained Optimization"
    link: "https://reference.wolfram.com/language/tutorial/UnconstrainedOptimizationOverview.en.md"
  - 
    title: "Implementation notes: Numerical and Related Functions"
    link: "https://reference.wolfram.com/language/tutorial/SomeNotesOnInternalImplementation.en.md#10453"
---
# NMaxValue

NMaxValue[f, x] gives the global maximum value of f with respect to x.

NMaxValue[f, {x, y, …}] gives the global maximum value of f with respect to x, y, …. 

NMaxValue[{f, cons}, {x, y, …}] gives the global maximum value of f subject to the constraints cons. 

NMaxValue[…, x∈reg] constrains x to be in the region reg.

## Details and Options

* ``NMaxValue`` is also known as global optimization (GO).

* ``NMaxValue`` always attempts to find a global maximum of ``f`` subject to the constraints given.

* ``NMaxValue`` is typically used to find the largest possible values given constraints. In different areas, this may be called the best strategy, best fit, best configuration and so on.


* If ``f`` is linear or concave and ``cons`` are linear or convex, the result given by ``NMaxValue`` will be the global maximum, over both real and integer values; otherwise, the result may sometimes only be a local maximum.

* If ``NMaxValue`` determines that the constraints cannot be satisfied, it returns ``-Infinity``.
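
  For example, maximizing over an empty feasible set (here $x\geq 1$ and $x\leq 0$) gives ``-Infinity``; a warning message is typically issued as well:

  ```wl
  In[1]:= NMaxValue[{x, x ≥ 1 && x ≤ 0}, x]

  Out[1]= -∞
  ```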

* ``NMaxValue`` supports a modeling language where the objective function ``f`` and constraints ``cons`` are given in terms of expressions depending on scalar or vector variables. ``f`` and ``cons`` are typically parsed into very efficient forms, but as long as ``f`` and the terms in ``cons`` give numerical values for numerical values of the variables, ``NMaxValue`` can often find a solution.
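
  For example, the objective can be a black-box function that only evaluates numerically. Here ``f`` is an illustrative helper defined with a ``?NumericQ`` pattern so that it evaluates only for explicit numerical arguments; the maximum of this sine-integral objective is attained near $x=\pi$:

  ```wl
  In[1]:= f[x_?NumericQ] := NIntegrate[Sinc[t], {t, 0, x}];
  NMaxValue[{f[x], 0 ≤ x ≤ 10}, x]

  Out[1]= 1.85194
  ```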

* The constraints ``cons`` can be any logical combination of:

|                                            |                                          |
| :----------------------------------------- | :--------------------------------------- |
| lhs == rhs                                 | equations                                |
| lhs > rhs, lhs ≥ rhs, lhs < rhs, lhs ≤ rhs | inequalities (LessEqual, …)              |
| lhs\[VectorGreater]rhs, lhs\[VectorGreaterEqual]rhs, lhs\[VectorLess]rhs, lhs\[VectorLessEqual]rhs         | vector inequalities (VectorLessEqual, …) |
| {x, y, …}∈rdom                             | region or domain specification           |

* ``NMaxValue[{f, cons}, x∈rdom]`` is effectively equivalent to ``NMaxValue[{f, cons && x∈rdom}, x]``.
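
  For example, the following two specifications give the same result:

  ```wl
  In[1]:= NMaxValue[{x + y, y ≤ 0}, {x, y}∈Disk[]]

  Out[1]= 1.

  In[2]:= NMaxValue[{x + y, y ≤ 0 && {x, y}∈Disk[]}, {x, y}]

  Out[2]= 1.
  ```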

* For ``x∈rdom``, the different coordinates can be referred to using ``Indexed[x, i]``.
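
  For example, maximize the difference of the coordinates of a point constrained to the unit square ``Rectangle[]``:

  ```wl
  In[1]:= NMaxValue[Indexed[x, 1] - Indexed[x, 2], x∈Rectangle[]]

  Out[1]= 1.
  ```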

* Possible domains ``rdom`` include:

|                       |                                                                                                |
| --------------------- | ---------------------------------------------------------------------------------------------- |
| Reals                 | real scalar variable                                                                           |
| Integers              | integer scalar variable                                                                        |
| Vectors[n, dom]       | vector variable in $\mathbb{R}^n$ or $\mathbb{Z}^n$                                            |
| Matrices[{m, n}, dom] | matrix variable in $\mathbb{R}^{m\times n}$ or $\mathbb{Z}^{m\times n}$                        |
| ℛ                     | vector variable restricted to the geometric region $\mathcal{R}$ |

* By default, all variables are assumed to be real.

* The following options can be given:

|                    |                  |                                                  |
| :----------------- | :--------------- | :----------------------------------------------- |
| AccuracyGoal       | Automatic        | number of digits of final accuracy sought        |
| EvaluationMonitor  | None             | expression to evaluate whenever f is evaluated   |
| MaxIterations      | Automatic        | maximum number of iterations to use              |
| Method             | Automatic        | method to use                                    |
| PrecisionGoal      | Automatic        | number of digits of final precision sought       |
| StepMonitor        | None             | expression to evaluate whenever a step is taken  |
| WorkingPrecision   | MachinePrecision | the precision used in internal computations      |

* The settings for ``AccuracyGoal`` and ``PrecisionGoal`` specify the number of digits to seek in both the position of the maximum and the value of the function at the maximum.

* ``NMaxValue`` continues until either of the goals specified by ``AccuracyGoal`` or ``PrecisionGoal`` is achieved.

* The methods for ``NMaxValue`` fall into two classes. The first class of guaranteed methods uses properties of the problem so that, when the method converges, the maximum found is guaranteed to be global. The second class of heuristic methods uses methods that may include multiple local searches, commonly adjusted by some stochasticity, to home in on a global maximum. These methods often do find the global maximum, but are not guaranteed to do so.

* Methods that are guaranteed to give a global maximum when they converge to a solution include:

|          |                                                       |
| -------- | ----------------------------------------------------- |
| "Convex" | use only convex methods                               |
| "MOSEK"  | use the commercial MOSEK library for convex problems  |
| "Gurobi" | use the commercial Gurobi library for convex problems |
| "Xpress" | use the commercial Xpress library for convex problems |

* Heuristic methods include:

|                         |                                                                         |
| ----------------------- | ----------------------------------------------------------------------- |
| "NelderMead"            | simplex method of Nelder and Mead                                       |
| "DifferentialEvolution" | use differential evolution                                              |
| "SimulatedAnnealing"    | use simulated annealing                                                 |
| "RandomSearch"          | use the best local maximum found from multiple random starting points   |
| "Couenne"               | use the Couenne library for non-convex mixed-integer nonlinear problems |

---

## Examples (57)

### Basic Examples (5)

Find the global maximum value of a univariate function:

```wl
In[1]:= NMaxValue[-2x ^ 2 - 3x + 5, x]

Out[1]= 6.125
```

---

Find the global maximum value of a multivariate function:

```wl
In[1]:= NMaxValue[1 - (x y - 3) ^ 2, {x, y}]

Out[1]= 1.
```

---

Find the global maximum value of a function subject to constraints:

```wl
In[1]:= NMaxValue[{x - 2y, x ^ 2 + y ^ 2 ≤ 1}, {x, y}]

Out[1]= 2.23607
```

---

Find the exact global maximum value of a function over a geometric region using ``MaxValue``:

```wl
In[1]:= MaxValue[x + y, {x, y}∈Disk[]]

Out[1]= Sqrt[2]
```

---

Find the global maximum value of a function over a geometric region:

```wl
In[1]:= NMaxValue[x + y, {x, y}∈Disk[]]

Out[1]= 1.41421
```

### Scope (34)

#### Basic Uses (12)

Maximize $x+2y$ subject to constraints $x^2+2y^2\leq 3,x+y==2,x\geq 1$ :

```wl
In[1]:= NMaxValue[{x + 2y, x^2 + 2y^2 ≤ 3, x + y == 2, x ≥ 1}, {x, y}]

Out[1]= 3.
```

---

Several linear inequality constraints can be expressed with ``VectorGreaterEqual`` :

```wl
In[1]:= NMaxValue[{-x - y, VectorGreaterEqual[{{x + 2y, x}, {3, -1}}]}, {x, y}]

Out[1]= -1.
```

Use esc`` v>= ``esc or ``\[VectorGreaterEqual]`` to enter the vector inequality sign ``\[VectorGreaterEqual]`` :

```wl
In[2]:= NMaxValue[{-x - y, {x + 2y, x}\[VectorGreaterEqual]{3, -1}}, {x, y}]

Out[2]= -1.
```

An equivalent form using scalar inequalities:

```wl
In[3]:= NMaxValue[{-x - y, x + 2y ≥ 3, x ≥ -1}, {x, y}]

Out[3]= -1.
```

---

Use a vector variable $v=\{x,y\}$:

```wl
In[1]:= NMaxValue[{{-1, -1}.v, {{1, 2}, {1, 0}}.v\[VectorGreaterEqual]{3, -1}}, v]

Out[1]= -1.
```

---

The inequality $a.x+b\succeq 0$ may not be the same as $a.x\succeq -b$ due to possible threading in $a.x+b$:

```wl
In[1]:= NMaxValue[{{-1, -1}.x, {{1, 2}, {1, 0}}.x + {-3, 1}\[VectorGreaterEqual]0}, x]

Out[1]= -3.

In[2]:= NMaxValue[{{-1, -1}.x, {{1, 2}, {1, 0}}.x\[VectorGreaterEqual]{3, -1}}, x]

Out[2]= -1.
```

To avoid unintended threading in $a.x+b$, use ``Inactive[Plus]``:

```wl
In[3]:= NMaxValue[{{-1, -1}.x, Inactive[Plus][{{1, 2}, {1, 0}}.x, {-3, 1}]\[VectorGreaterEqual]0}, x]

Out[3]= -1.
```

---

Use constant parameter equations to avoid unintended threading in $a.x+b$ :

```wl
In[1]:= parEqs = {a == {{1, 2}, {1, 0}}, b == {3, -1}};

In[2]:= NMaxValue[{{-1, -1}.x, {parEqs, a.x + b\[VectorGreaterEqual]0}}, x]

Out[2]= 1.
```

---

``VectorGreaterEqual`` represents a conic inequality with respect to the ``"NonNegativeCone"`` :

```wl
In[1]:= constraint = VectorGreaterEqual[{{{1, 2}, {1, 0}}.v, {3, -1}}, "NonNegativeCone"]

Out[1]= {{1, 2}, {1, 0}}.vUnderscript[\[VectorGreaterEqual], "NonNegativeCone"]{3, -1}
```

To explicitly specify the dimension of the cone, use ``{"NonNegativeCone", n}``:

```wl
In[2]:= constraint = VectorGreaterEqual[{{{1, 2}, {1, 0}}.v, {3, -1}}, {"NonNegativeCone", 2}]

Out[2]= {{1, 2}, {1, 0}}.vUnderscript[\[VectorGreaterEqual], {"NonNegativeCone", 2}]{3, -1}
```

Find the solution:

```wl
In[3]:= NMaxValue[{{-1, -1}.v, constraint}, v]

Out[3]= -1.
```

---

Maximize $x+2y$ subject to the constraint $x^2+y^2\leq 9$ :

```wl
In[1]:= NMaxValue[{x + 2y, x^2 + y^2 ≤ 9}, {x, y}]

Out[1]= 6.7082
```

Specify the constraint $x^2+y^2\leq 9$ using a conic inequality with ``"NormCone"``:

```wl
In[2]:= constraint = VectorGreaterEqual[{{x, y, 3}, 0}, {"NormCone", 3}]

Out[2]= {x, y, 3}Underscript[\[VectorGreaterEqual], {"NormCone", 3}]0
```

Find the solution:

```wl
In[3]:= NMaxValue[{x + 2y, constraint}, {x, y}]

Out[3]= 6.7082
```

---

Maximize the function $\sum _{i=1}^3x_i$ subject to the constraints $a.x\succeq b$, $x_1\geq 1$, $x\in \mathbb{R}^3$ :

```wl
In[1]:= {a, b} = {-{{1, 1, 0}, {0, 1, 1}, {1, 1, 1}}, {2, 2, -1}};
```

Use ``Indexed`` to access components of a vector variable, e.g. $x_1$ :

```wl
In[2]:= NMaxValue[{Total[x], a.x\[VectorGreaterEqual]b, Indexed[x, 1] ≥ 1}, x]

Out[2]= 1.
```

---

Use ``Vectors[n, dom]`` to specify the dimension and domain of a vector variable when it is ambiguous:

```wl
In[1]:=
NMaxValue[{Indexed[x, {1}] + 2Indexed[x, {2}], VectorGreaterEqual[{{{-Indexed[x, {1}], 1}, {1, -Indexed[x, {2}]}}, 0}, {"SemidefiniteCone", 2}]}, x]
```

NMaxValue::itdim: The dimensionality of variable x is not well specified.

```wl
Out[1]= NMaxValue[{Indexed[x, {1}] + 2 Indexed[x, {2}], {{-Indexed[x, {1}], 1}, {1, -Indexed[x, {2}]}}Underscript[\[VectorGreaterEqual], {"SemidefiniteCone", 2}]0}, x]

In[2]:=
NMaxValue[{Indexed[x, {1}] + 2Indexed[x, {2}], VectorGreaterEqual[{{{-Indexed[x, {1}], 1}, {1, -Indexed[x, {2}]}}, 0}, {"SemidefiniteCone", 2}]}, x∈Vectors[2, Reals]]

Out[2]= -2.82843
```

---

Specify non-negative constraints using ``NonNegativeReals`` ($\mathbb{R}_{\geq \, 0}$):

```wl
In[1]:= NMaxValue[{{1, 1}.x, VectorGreaterEqual[{{Indexed[x, {1}], Indexed[x, {2}], 1}, 0}, {"NormCone", 3}]}, x∈Vectors[2, NonNegativeReals]]

Out[1]= 1.41422
```

An equivalent form using vector inequality $x\succeq 0$:

```wl
In[2]:= NMaxValue[{{1, 1}.x, VectorGreaterEqual[{{Indexed[x, {1}], Indexed[x, {2}], 1}, 0}, {"NormCone", 3}], x\[VectorGreaterEqual]0}, x]

Out[2]= 1.41422
```

---

Specify non-positive constraints using ``NonPositiveReals`` ($\mathbb{R}_{\leq \, 0}$):

```wl
In[1]:= NMaxValue[{Total[v], {1, 2}.v\[VectorLessEqual]-3}, v∈Vectors[2, NonPositiveReals]]

Out[1]= -1.5
```

An equivalent form using vector inequalities:

```wl
In[2]:= NMaxValue[{Total[v], {1, 2}.v\[VectorLessEqual]-3, v\[VectorLessEqual]0}, v]

Out[2]= -1.5
```

---

``Or`` constraints can be specified:

```wl
In[1]:= NMaxValue[{x + y, x ^ 2 + y ^ 2 ≤ 1 || (x + 2) ^ 2 + (y + 2) ^ 2 ≤ 1}, {x, y}]//Quiet

Out[1]= 1.41421
```

#### Domain Constraints (4)

Specify integer domain constraints using ``Integers``:

```wl
In[1]:= NMaxValue[{x + y, x + 2y ≤ 3, x ≤ 1}, {x, y∈Integers}]

Out[1]= 2.
```

---

Specify integer domain constraints on vector variables using ``Vectors[n, Integers]`` :

```wl
In[1]:=
NMaxValue[{Total[x], a == {{1, 2}, {1, 0}}, b == {-3, 2}, 
	a.x\[VectorLessEqual]b}, x∈Vectors[2, Integers]]

Out[1]= -1.
```

---

Specify non-negative integer domain constraints using ``NonNegativeIntegers`` ($\mathbb{Z}_{\geq \, 0}$):

```wl
In[1]:= NMaxValue[{x + 2y, x^2 + 2y^2 ≤ 3, x + y == 2, x∈NonNegativeIntegers}, {x, y}]

Out[1]= 3.
```

---

Specify non-positive integer domain constraints using ``NonPositiveIntegers`` ($\mathbb{Z}_{\leq \, 0}$):

```wl
In[1]:= NMaxValue[{x - y, x + 2y ≥ 3, x∈NonPositiveIntegers}, {x, y}]

Out[1]= -1.5
```

#### Region Constraints (3)

Find the maximum value of a linear function of a vector at $x$ in $\mathbb{R}^3$ with $\| x\| \leq 1$ :

```wl
In[1]:= NMaxValue[x.{1, 2, 3}, x∈Ball[]]

Out[1]= 3.74165
```

---

Find the maximum value over a region:

```wl
In[1]:=
t = RotationTransform[{{0, 0, 1}, {1, 1, 1}}];
ℛ = TransformedRegion[Ellipsoid[{0, 0, 0}, {1, 2, 3}], t];

In[2]:= NMaxValue[z, {x, y, z}∈ℛ]

Out[2]= 2.16025
```

---

Find the maximum possible distance between two points constrained to be in two different regions:

```wl
In[1]:=
Subscript[ℛ, 1] = Disk[];
Subscript[ℛ, 2] = Line[{{-2, 0}, {0, 2}}];

In[2]:= NMaxValue[EuclideanDistance[{x, y}, {u, v}], {{x, y}∈Subscript[ℛ, 1], {u, v}∈Subscript[ℛ, 2]}]

Out[2]= 3.
```

#### Linear Problems (5)

With linear objectives and constraints, when a maximum is found it is global:

```wl
In[1]:= NMaxValue[{x + y, 3x + 2y ≤ 7 && x + 2y ≤ 6 && x ≥ 0 && y ≥ 0}, {x, y}]

Out[1]= 3.25
```

---

The constraints can be equality and inequality constraints:

```wl
In[1]:= NMaxValue[{y - x, x + y + z == 1 / 2, x - 2z == 1, 2x - y ≥ 1}, {x, y, z}]

Out[1]= -0.428571
```

---

Use ``Equal`` to express several equality constraints at once:

```wl
In[1]:= NMaxValue[{y - x, {x + y, x - y} == {1 / 2, 1}}, {x, y}]

Out[1]= -1.
```

An equivalent form using several scalar equalities:

```wl
In[2]:= NMaxValue[{y - x, x + y == 1 / 2, x - y == 1}, {x, y}]

Out[2]= -1.
```

---

Use ``VectorGreaterEqual`` to express several ``GreaterEqual`` inequality constraints at once:

```wl
In[1]:= NMaxValue[{x + y, VectorGreaterEqual[{{x - 2y, -x + y}, {3, 2}}]}, {x, y}]

Out[1]= -12.
```

Use esc`` v>= ``esc to enter the vector inequality in a compact form:

```wl
In[2]:= NMaxValue[{x + y, {x - 2y, -x + y}\[VectorGreaterEqual]{3, 2}}, {x, y}]

Out[2]= -12.
```

An equivalent form using scalar inequalities:

```wl
In[3]:= NMaxValue[{x + y, {x - 2y ≥ 3, -x + y ≥ 2}}, {x, y}]

Out[3]= -12.
```

---

Use ``Interval`` to specify bounds on a variable:

```wl
In[1]:= NMaxValue[{x + y, x + 2y ≥ 3}, {x∈Interval[{-1, 2}], y∈Interval[{-1, 1}]}]

Out[1]= 3.
```

#### Convex Problems (4)

Use ``"NonNegativeCone"`` to specify linear functions of the form $a.x\succeq b$:

```wl
In[1]:=
NMaxValue[{Total[x], 
	VectorGreaterEqual[{{{-1, -1 / 2}, {1, -1 / 2}, {0, 1}}.x, {-1 / 2, -1 / 2, -1 / 2}}, {"NonNegativeCone", 3}]}, x]

Out[1]= 1.
```

Use esc`` v>= ``esc to enter the vector inequality in a compact form:

```wl
In[2]:= NMaxValue[{Total[x], {{-1, -(1/2)}, {1, -(1/2)}, {0, 1}}.x\[VectorGreaterEqual]-(1/2)}, x]

Out[2]= 1.
```

---

Find the maximum value of a concave quadratic function subject to a set of convex quadratic constraints:

```wl
In[1]:= res = NMaxValue[{-((x - 1)^2 + y^2), x^2 + (y^2/2) ≤ (1/2), 2x^2 ≤ 3y}, {x, y}]

Out[1]= -0.198003
```

Plot the region:

```wl
In[2]:= Plot3D[-((x - 1)^2 + y^2), {x, -1, 1}, {y, 0, 1}, ...]

Out[2]= [image]
```

---

Find the maximum value of $-(10 x+11y)$ such that $\begin{pmatrix} x & 1 \\ 1 & y \end{pmatrix}$ is positive semidefinite:

```wl
In[1]:=
NMaxValue[{-(10x + 11y), VectorGreaterEqual[{{{x, 1}, {1, y}}, 0}, {"SemidefiniteCone", 2}]}, {x, y}]

Out[1]= -20.9762
```

Show the plot of the objective function:

```wl
In[2]:= Plot3D[-(10x + 11y), {x, 0, 5}, {y, 0, 5}, ...]

Out[2]= [image]
```

---

Find the maximum value of a concave objective function $\text{Log}[x+y]$ such that $\begin{pmatrix} x+y & 1 \\ 1 & x-y \end{pmatrix}$ is positive semidefinite and $1\leq x\leq 10$, $-1\leq y\leq 1$ :

```wl
In[1]:=
res = NMaxValue[{Log[x + y], VectorGreaterEqual[{{{x + y, 1}, {1, x - y}}, 0}, {"SemidefiniteCone", 2}], 1 ≤ x ≤ 10, -1 ≤ y ≤ 1}, {x, y}]

Out[1]= 2.3979
```

Plot the region and the maximizing point:

```wl
In[2]:= Show[Plot3D[Log[x + y], {x, 1, 10}, {y, -1, 1}, ...], Graphics3D[{Red, PointSize[0.05], Point[{10, 1, res}]}]]

Out[2]= [image]
```

#### Transformable to Convex (3)

Maximize the quasi-concave function $x y$ subject to inequality and norm constraints. The objective is quasi-concave because it is a product of two non-negative affine functions:

```wl
In[1]:= NMaxValue[{x y, x ≥ 0, y ≥ 0, Norm[{x - 1, y - 2}] ≤ 1}, {x, y}]

Out[1]= 4.68174
```

The maximization is solved by minimizing the negative of the objective, $-x y$, which is quasi-convex. Quasi-convex problems can be solved as parametric convex optimization problems for the parameter $\alpha$ :

```wl
In[2]:=
pfun = ParametricConvexOptimization[0, {-x y ≤ α, x ≥ 0, y ≥ 0, 
	Norm[{x - 1, y - 2}] ≤ 1}, {x, y}, α, "PrimalMinimizerRules"]

Out[2]= ParametricFunction[<>]
```

Plot the objective as a function of the level-set $\alpha$ :

```wl
In[3]:=
fun[α_ ? NumericQ] := -x * y /. pfun[α];
Plot[fun[α], {α, -4.68, 0}, PlotRange -> All]

Out[3]= [image]
```

For a level-set value in the interval $[-4.682,-4.68]$, the smallest objective is found:

```wl
In[4]:= pfun[-4.68]

Out[4]= {x -> 1.81699, y -> 2.57617}
```

The problem becomes infeasible when the level-set value is increased:

```wl
In[5]:= pfun[-4.69]
```

ParametricConvexOptimization::nsolc: There are no points that satisfy the constraints.

```wl
Out[5]= {x -> Indeterminate, y -> Indeterminate}
```

---

Maximize $x^2-y^2$ subject to the constraint $\|\{x,y\}\|\leq 2$. The objective is not convex but can be represented by a difference of convex functions $f(x,y)-g(x,y)$, where $f$ and $g$ are convex:

```wl
In[1]:= res = NMaxValue[{x ^ 2 - y ^ 2, Norm[{x, y}] <= 2}, {x, y}]

Out[1]= 4.
```

Plot the region and the maximizing point:

```wl
In[2]:=
Show[Plot3D[x ^ 2 - y ^ 2, {x, -2, 2}, {y, -2, 2}, ...], Graphics3D[
	{PointSize[0.04], Point[{2, 0, res}]}]]

Out[2]= [image]
```

---

Maximize $x^2+y^2+x+y$ subject to the constraints $1\leq \|\{x,y\}\|\leq 2$. The constraint $1\leq \|\{x,y\}\|$ is not convex but can be represented by a difference of convex constraint $f(x,y)-g(x,y)\leq 0$ where $f$ and $g$ are convex functions:

```wl
In[1]:= res = NMaxValue[{x ^ 2 + y ^ 2 + x + y, 1 <= Norm[{x, y}] <= 2}, {x, y}]

Out[1]= 6.82843
```

Plot the region and the maximizing point:

```wl
In[2]:=
Show[Plot3D[x ^ 2 + y ^ 2 + x + y, {x, -2, 2}, {y, -2, 2}, ...], Graphics3D[
	{PointSize[0.04], Point[{Sqrt[2], Sqrt[2], res}]}]]

Out[2]= [image]
```

#### General Problems (3)

Maximize a linear objective subject to nonlinear constraints:

```wl
In[1]:= res = NMaxValue[{x + y, Sin[2x] + Cos[3y] ≤ 1, Norm[{x, y}] ≤ 2}, {x, y}]

Out[1]= 2.82843
```

Plot it:

```wl
In[2]:= Show[Plot3D[x + y, {x, -2, 2}, {y, -2, 2}, ...], Graphics3D[{Blue, PointSize[0.03], Point[{1.414, 1.414, res}]}]]

Out[2]= [image]
```

---

Maximize a nonlinear objective subject to linear constraints:

```wl
In[1]:= res = NMaxValue[{Sin[2x] + Cos[x], -2 ≤ x ≤ 3}, x]

Out[1]= 1.76017
```

Plot the objective and the maximizing point:

```wl
In[2]:= Plot[Sin[2x] + Cos[x], {x, -2, 3}, ...]

Out[2]= [image]
```

---

Maximize a nonlinear objective subject to nonlinear constraints:

```wl
In[1]:= res = NMaxValue[{Sin[x^2 + y], Sin[2x] + Cos[3y] ≤ 1, Norm[{x^2, y}] ≤ 2}, {x, y}]

Out[1]= 1.
```

Plot it:

```wl
In[2]:= Show[Plot3D[Sin[x^2 + y], {x, -2, 2}, {y, -2, 2}, ...], Graphics3D[{Blue, PointSize[0.03], Point[{0.719, 1.054, res}]}]]

Out[2]= [image]
```

### Options (7)

#### AccuracyGoal & PrecisionGoal (2)

This enforces convergence criteria $\left\|x_k-x^*\right\| \leq  \max \left(10^{-9},10^{-8}\left\|x_k\right\|\right)$ and $\left\|\nabla f\left(x_k\right)\right\|\leq 10^{-9}$ :

```wl
In[1]:= NMaxValue[-Sin[Tan[x] / 2], x, AccuracyGoal -> 9, PrecisionGoal -> 8]

Out[1]= 1.
```

---

This enforces convergence criteria $\left\|x_k-x^*\right\| \leq  \max \left(10^{-20},10^{-18}\left\|x_k\right\|\right)$ and $\left\|\nabla f\left(x_k\right)\right\|\leq 10^{-20}$, which is not achievable with the default machine-precision computation:

```wl
In[1]:= NMaxValue[-Sin[Tan[x] / 2], x, AccuracyGoal -> 20, PrecisionGoal -> 18]
```

NMaxValue::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations.

```wl
Out[1]= 1.
```

Setting a high ``WorkingPrecision`` makes the process convergent:

```wl
In[2]:= NMaxValue[-Sin[Tan[x] / 2], x, AccuracyGoal -> 20, PrecisionGoal -> 18, WorkingPrecision -> 40]

Out[2]= 1.000000000000000000000000000000000000000
```

#### EvaluationMonitor (1)

Record all the points evaluated during the solution process of a function with a ring of maxima:

```wl
In[1]:=
f[x_, y_] := -(x ^ 2 + y ^ 2 - 16) ^ 2;
{sol, pts} = Reap[
	NMaxValue[f[x, y], {{x, -5, 5}, {y, -5, 5}},   Method -> "DifferentialEvolution", EvaluationMonitor :> Sow[{x, y}]]];
```

Plot all the visited points that are close in objective function value to the final solution:

```wl
In[2]:=
ContourPlot[f[x, y], {x, -5, 5}, {y, -5, 5}, 
	Epilog -> Map[Point, Cases[First[pts], x_ /; Abs[f@@x - sol] ≤ .05]], Contours -> -4Range[0, 10] ^ 2]

Out[2]= [image]
```

#### Method (2)

Some methods may give suboptimal results for certain problems:

```wl
In[1]:= objective[x_] := Sin[.4 x ^ 2] + .7Cos[.6 x] x; NMaxValue[{objective[x], -11 < x < 11}, {x}, Method -> #]& /@ {"NelderMead", "SimulatedAnnealing"}

Out[1]= {4.79377, 4.79377}
```

The automatically chosen method gives the optimal solution for this problem:

```wl
In[2]:= NMaxValue[{objective[x], -11 < x < 11}, {x}]

Out[2]= 8.41677
```

The automatic method choice for this nonconvex problem is method ``"Couenne"`` :

```wl
In[3]:= maxVal = NMaxValue[{objective[x], -11 < x < 11}, {x}, Method -> "Couenne"]

Out[3]= 8.41677
```

Plot the objective function along with the global maximum value:

```wl
In[4]:= Plot[{objective[x], maxVal}, {x, -11, 11}]

Out[4]= [image]
```

---

Use method ``"NelderMead"`` for problems with many variables when speed is essential:

```wl
In[1]:= n = 50;AbsoluteTiming[NMaxValue[Sum[-i * (x[i] - i) ^ 2 + Sin[x[i]], {i, 1, n}], Table[x[i], {i, 1, n}], Method -> "NelderMead"]]

Out[1]= {0.0556807, 0.373089}
```

#### StepMonitor (1)

Steps taken by ``NMaxValue`` in finding the maximum of a function:

```wl
In[1]:=
pts = Reap[NMaxValue[-(1 - x) ^ 2 - 100(-x ^ 2 + y) ^ 2, {x, y}, Method -> "NelderMead", StepMonitor :> Sow[{x, y}]]][[2, 1]];
pts  = Join[{{-1.2, 1}}, pts];

In[2]:= ContourPlot[-(1 - x) ^ 2 - 100(-x ^ 2 + y) ^ 2, {x, -1.3, 1.5}, {y, -1.5, 1.4}, Epilog -> {Arrow[pts], Point[pts]}, Contours -> Table[-10 ^ -i, {i, -2, 10}], ColorFunction -> (Hue[0.5 * (Log[10, -#] + 11) / 13]&), ColorFunctionScaling -> False]

Out[2]= [image]
```

#### WorkingPrecision (1)

With the working precision set to $20$, by default ``AccuracyGoal`` and ``PrecisionGoal`` are set to $10$, half the working precision:

```wl
In[1]:= NMaxValue[Cos[x ^ 2 - 3 y] + Sin[x ^ 2 + y ^ 2], {x, y}, WorkingPrecision -> 20]

Out[1]= 2.0000000000000000000
```

### Applications (4)

#### Geometry Problems (2)

Find the maximum $r$ such that the triangle and ellipse still intersect:

```wl
In[1]:=
Subscript[ℛ, 1] = Triangle[{{0, 0}, {1, 0}, {0, 1}}];
Subscript[ℛ, 2] = Circle[{1, 1}, {2r, r}];

In[2]:= Subscript[r, opt] = NMaxValue[{r, {x, y}∈Subscript[ℛ, 1] && {x, y}∈Subscript[ℛ, 2]}, {r, x, y}]

Out[2]= 1.11803
```

Plot the resulting regions:

```wl
In[3]:= Graphics[{Blue, Subscript[ℛ, 1], Subscript[ℛ, 2] /. r -> Subscript[r, opt]}]

Out[3]= [image]
```

---

Find the largest radius $r$ for $n$ non-overlapping circles and their centers that can be contained in a $[-1,1] \times  [-1,1]$ square. The box constraint can be represented as $-1\leq \left\|c_i\right\|_{\infty }+r\leq 1,\ i=1,\ldots,n$ :

```wl
In[1]:=
n = 3;
boxConstraint = Table[-1 <= Norm[c[i], Infinity] + r <= 1, {i, n}];
```

The circles must not overlap:

```wl
In[2]:=
nonOverlapConstraint = Table[Norm[c[i] - c[j]] >= 2r, 
	{i, 1, n}, {j, i + 1, n}];
```

Collect the variables:

```wl
In[3]:= vars = Append[Table[Element[c[i], Vectors[2, Reals]], {i, n}], Element[r, Reals]];
```

Find the maximum radius $r_{\max }$ :

```wl
In[4]:= NMaxValue[{r, boxConstraint, nonOverlapConstraint, r >= 0}, vars]

Out[4]= 0.508666
```

#### Investment Problems (1)

Find the maximum return on investment possible by allocating \$250,000 of capital to purchase two stocks and a bond. Let $x_1,x_2$ be the amount to invest in the two stocks and let $x_3$ be the amount to invest in the bond:

```wl
In[1]:= capitalConstraint = Subscript[x, 1] + Subscript[x, 2] + Subscript[x, 3] ≤ 250000;
```

The amount invested in the utilities stock cannot be more than \$40,000:

```wl
In[2]:= utilityStockConstraint = 0 ≤ Subscript[x, 1] ≤ 40000;
```

The amount invested in the bond must be at least \$70,000:

```wl
In[3]:= bondConstraint = Subscript[x, 3] ≥ 70000;
```

The total amount invested in the two stocks must be at least half the total amount invested:

```wl
In[4]:= investmentConstraint = Subscript[x, 1] + Subscript[x, 2] ≥ (Subscript[x, 1] + Subscript[x, 2] + Subscript[x, 3]) / 2;
```

The stocks pay an annual dividend of 9% and 4%, respectively. The bond pays a dividend of 5%. The total return on investment is:

```wl
In[5]:= totalProfit = (0.09Subscript[x, 1] + 0.04Subscript[x, 2] + 0.05Subscript[x, 3]);
```

The cost of executing the transactions is $\sum _{i=1}^{3}\left|x_i\right|^{0.5}$ and must not exceed \$1000:

```wl
In[6]:= costConstraint = Sum[Abs[Subscript[x, i]]^0.5, {i, 3}] ≤ 1000;
```

The maximum profit attainable with the specified constraints is:

```wl
In[7]:=
NMaxValue[{totalProfit, 
	capitalConstraint, utilityStockConstraint, bondConstraint, investmentConstraint, costConstraint}, {Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}]

Out[7]= 13250.
```

#### Iterated Optimization (1)

Find the value for the parameter $\alpha$ associated with ellipse $x^2/\alpha ^2+\alpha ^2y^2\leq 1$ such that the maximum value of $(x-1)(y-2)$ is minimized:

```wl
In[1]:=
pfun[α_ ? NumericQ] := NMaxValue[{(x - 1)(y - 2), 
	 Norm[{x / α, α y}] ≤ 1}, {x, y}]
```

Show the distance as a function of $\alpha$ :

```wl
In[2]:= Plot[pfun[α], {α, 0.1, 2}, PlotPoints -> 2, PlotRange -> All]

Out[2]= [image]
```

Find optimal parameter $\alpha$ that minimizes the maximum value of the objective :

```wl
In[3]:= FindMinimum[{pfun[α], 0.1 ≤ α ≤ 1}, {α, .5}, AccuracyGoal -> 3]

Out[3]= {4.5, {α -> 0.706994}}
```

### Properties & Relations (7)

``NMaximize`` gives the maximum value and rules for the maximizing values of the variables:

```wl
In[1]:= NMaximize[{y - x.x , Norm[x + y] ≤ 1}, {Element[x, Vectors[2, Reals]], y}]

Out[1]= {0.832107, {x -> {-0.25, -0.25}, y -> 0.957107}}
```

``NArgMax`` gives a list of the maximizing values:

```wl
In[2]:= NArgMax[{y - x.x , Norm[x + y] ≤ 1}, {Element[x, Vectors[2, Reals]], y}]

Out[2]= {{-0.25, -0.25}, 0.957107}
```

``NMaxValue`` gives only the maximum value:

```wl
In[3]:= NMaxValue[{y - x.x , Norm[x + y] ≤ 1}, {Element[x, Vectors[2, Reals]], y}]

Out[3]= 0.832107
```

---

Maximizing a function ``f`` is equivalent to minimizing ``-f`` :

```wl
In[1]:= NMinValue[{x ^ 2 + y ^ 2, x + 2 y  ≥ 3}, {x, y}]

Out[1]= 1.8

In[2]:= -NMaxValue[{-(x ^ 2 + y ^ 2), x + 2 y  ≥ 3}, {x, y}]

Out[2]= 1.8
```

---

For convex problems, ``ConvexOptimization`` may be used to obtain additional solution properties:

```wl
In[1]:= NMaxValue[{-(x ^ 2 + y ^ 2), x + 2 y  ≥ 3}, {x, y}]

Out[1]= -1.8

In[2]:= ConvexOptimization[x ^ 2 + y ^ 2, x + 2 y  ≥ 3, {x, y}, "PrimalMinimumValue"]

Out[2]= 1.8
```

Get the dual solution:

```wl
In[3]:= ConvexOptimization[x ^ 2 + y ^ 2, x + 2 y  ≥ 3, {x, y}, "DualMaximizer"]

Out[3]= {{1.2}}
```

---

For convex problems with parameters, using ``ParametricConvexOptimization`` gives a ``ParametricFunction``:

```wl
In[1]:= pfun = ParametricConvexOptimization[-(x + 2 y), (x - α) ^ 2 + (y - β) ^ 2 ≤ 1, {x, y}, {α, β}, "PrimalMinimumValue"]

Out[1]= ParametricFunction[<>]
```

The ``ParametricFunction`` may be evaluated for values of the parameters:

```wl
In[2]:= {pfun[0., 0.], pfun[.5, 1.], pfun[2., 1.]}

Out[2]= {-2.23606, -4.73607, -6.23607}
```

Define a function for the parametric problem using ``NMaxValue``:

```wl
In[3]:=
fun[α_?NumericQ, β_?NumericQ] := NMaxValue[{x + 2 y, (x - α) ^ 2 + (y - β) ^ 2 ≤ 1}, {x, y}];
{fun[0., 0.], fun[.5, 1.], fun[2., 1.]}

Out[3]= {2.23607, 4.73607, 6.23607}
```

Compare the speed of the two approaches:

```wl
In[4]:=
testData = RandomReal[{0, 4}, {1000, 2}];
{First[AbsoluteTiming[pfun @@@ testData]], First[AbsoluteTiming[fun @@@ testData]]}

Out[4]= {1.27059, 2.09324}
```

Derivatives of the ``ParametricFunction`` can also be computed:

```wl
In[5]:= D[pfun[α, β], {{α, β}}] /. {α -> 2., β -> 1.}

Out[5]= {-1.00001, -2.}
```
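The gradient can be sanity-checked against a central finite difference of ``pfun`` (a sketch; the step size ``h`` is an arbitrary choice, and the result should agree with the derivative above to roughly ``h^2``):

```wl
In[6]:= With[{h = 10.^-4},
 {(pfun[2. + h, 1.] - pfun[2. - h, 1.])/(2 h),
  (pfun[2., 1. + h] - pfun[2., 1. - h])/(2 h)}]
```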

---

For convex problems with parametric constraints, ``RobustConvexOptimization`` finds an optimum that works for all possible values of the parameters:

```wl
In[1]:=
{rMinValue, rOptimizer} = RobustConvexOptimization[-(2x + 3y), x + y ≤ α && x - y ≥ β && -1 ≤ α ≤ 1 && 0 ≤ β ≤ 1, {x, y}, {α, β}, {"PrimalMinimumValue", "PrimalMinimizerRules"}];
rMaxValue = -rMinValue;
{rMaxValue, rOptimizer}

Out[1]= {-3., {x -> 0., y -> -1.}}
```

``NMaxValue`` may find a larger maximum value for particular values of the parameters:

```wl
In[2]:= NMaxValue[{2x + 3y, x + y ≤ α && x - y ≥ β && α == 0 && β == 0}, {x, y}]

Out[2]= 0.
```

The maximizer that gives this value does not satisfy the constraints for all allowed values of $\alpha$ and $\beta$:

```wl
In[3]:= maximizer = Last[NMaximize[{2x + 3y, x + y ≤ α && x - y ≥ β && α == 0 && β == 0}, {x, y}]]

Out[3]= {x -> 0., y -> 0.}

In[4]:= (x + y ≤ α && x - y ≥ β) /. maximizer /. {α -> 1, β -> 1}

Out[4]= False
```

The maximum value found for particular values of the parameters is greater than or equal to the robust maximum:

```wl
In[5]:=
fun[α_, β_] /; (-1 ≤ α ≤ 1 && 0 ≤ β ≤ 1) := 
	NMaxValue[{2x + 3y, x + y ≤ α && x - y ≥ β}, {x, y}];

In[6]:=
testValues = RandomVariate[UniformDistribution[{{-1, 1}, {0, 1}}], 100];
AllTrue[testValues, (fun @@ #) ≥ rMaxValue &]

Out[6]= True
```

---

``NMaxValue`` can solve linear programming problems:

```wl
In[1]:= NMaxValue[{2x + 3y - z, 1 ≤ x + y + z ≤ 2 && 1 ≤ x - y + z ≤ 2 && x - y - z == 3}, {x, y, z}]

Out[1]= 7.5
```

``LinearProgramming`` can be used to solve the same problem given in matrix notation:

```wl
In[2]:=
c = {2., 3., -1.};
m = {{1, 1, 1}, {1, 1, 1}, {1, -1, 1}, {1, -1, 1}, {1, -1, -1}};
b = {{1, 1}, {2, -1}, {1, 1}, {2, -1}, {3, 0}};

In[3]:= c.LinearProgramming[-c, m, b, -Infinity]

Out[3]= 7.5
```
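``LinearOptimization`` also accepts the constraints directly in symbolic form; negating both the objective and the resulting minimum recovers the maximum (a sketch using the "PrimalMinimumValue" solution property):

```wl
In[4]:= -LinearOptimization[-(2x + 3y - z), 
  1 ≤ x + y + z ≤ 2 && 1 ≤ x - y + z ≤ 2 && x - y - z == 3, {x, y, z}, "PrimalMinimumValue"]

Out[4]= 7.5
```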

---

Use ``RegionBounds`` to compute the bounding box:

```wl
In[1]:=
f = x^4 + 3. x^2 y + 2. x^2 y^2 - y^3 + y^4 ≤ 0;
ℛ = ImplicitRegion@@{f, {x, y}}

Out[1]= ImplicitRegion[x^4 + 3. x^2 y + 2. x^2 y^2 - y^3 + y^4 ≤ 0, {x, y}]

In[2]:= RegionBounds[ℛ]

Out[2]= {{-0.880086, 0.880086}, {-0.5625, 1.}}
```

Use ``NMaxValue`` and ``NMinValue`` to compute the same bounds:

```wl
In[3]:= {{x1, x2}, {y1, y2}} = {NMinValue[#, {x, y} ∈ ℛ], NMaxValue[#, {x, y} ∈ ℛ]} & /@ {x, y}

Out[3]= {{-0.880086, 0.880086}, {-0.5625, 1.}}

In[4]:= Show[{Graphics[{LightBlue, Rectangle[{x1, y1}, {x2, y2}]}], RegionPlot[f, {x, x1 - 1, x2 + 1}, {y, y1 - 1, y2 + 1}]}]

Out[4]= [image]
```

## See Also

* [`NMinimize`](https://reference.wolfram.com/language/ref/NMinimize.en.md)
* [`Maximize`](https://reference.wolfram.com/language/ref/Maximize.en.md)
* [`FindMinimum`](https://reference.wolfram.com/language/ref/FindMinimum.en.md)
* [`NArgMin`](https://reference.wolfram.com/language/ref/NArgMin.en.md)
* [`NMinValue`](https://reference.wolfram.com/language/ref/NMinValue.en.md)
* [`FindFit`](https://reference.wolfram.com/language/ref/FindFit.en.md)
* [`LinearOptimization`](https://reference.wolfram.com/language/ref/LinearOptimization.en.md)
* [`ConvexOptimization`](https://reference.wolfram.com/language/ref/ConvexOptimization.en.md)
* [`GeometricOptimization`](https://reference.wolfram.com/language/ref/GeometricOptimization.en.md)
* [`LeastSquares`](https://reference.wolfram.com/language/ref/LeastSquares.en.md)
* [`BayesianMinimization`](https://reference.wolfram.com/language/ref/BayesianMinimization.en.md)
* [`RegionDistance`](https://reference.wolfram.com/language/ref/RegionDistance.en.md)
* [`SpherePoints`](https://reference.wolfram.com/language/ref/SpherePoints.en.md)
* [`NetTrain`](https://reference.wolfram.com/language/ref/NetTrain.en.md)

## Tech Notes

* [Numerical Mathematics: Basic Operations](https://reference.wolfram.com/language/tutorial/NumericalCalculations.en.md)
* [Numerical Optimization](https://reference.wolfram.com/language/tutorial/NumericalOperationsOnFunctions.en.md#24524)
* [Numerical Nonlinear Global Optimization](https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.en.md)
* [Constrained Optimization](https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationOverview.en.md)
* [Unconstrained Optimization](https://reference.wolfram.com/language/tutorial/UnconstrainedOptimizationOverview.en.md)
* [Implementation notes: Numerical and Related Functions](https://reference.wolfram.com/language/tutorial/SomeNotesOnInternalImplementation.en.md#10453)

## Related Guides

* [`Optimization`](https://reference.wolfram.com/language/guide/Optimization.en.md)
* [Symbolic Vectors, Matrices and Arrays](https://reference.wolfram.com/language/guide/SymbolicArrays.en.md)
* [Convex Optimization](https://reference.wolfram.com/language/guide/ConvexOptimization.en.md)

## History

* [Introduced in 2008 (7.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn70.en.md) \| [Updated in 2014 (10.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn100.en.md) ▪ [2021 (12.3)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn123.en.md) ▪ [2021 (13.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn130.en.md) ▪ [2022 (13.2)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn132.en.md) ▪ [2024 (14.1)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn141.en.md)