ConvexOptimization
ConvexOptimization[f, cons, vars]
finds values of variables vars that minimize the convex objective function f subject to convex constraints cons.
Details and Options




- Convex optimization is global nonlinear optimization for convex functions with convex constraints. For convex problems, the global solution can be found.
- Convex optimization subsumes many other forms of optimization, including linear optimization, linear-fractional optimization, quadratic optimization, second-order cone optimization, semidefinite optimization and conic optimization.
- If g is concave, ConvexOptimization[-g, cons, vars] will maximize g.
- Convex optimization finds x^* that solves the following problem:
    minimize f(x) subject to the constraints cons(x), where f(x) is convex and the constraints cons(x) define a convex set.
- Equality constraints of the form g(x)==0 may be included in cons.
- Mixed-integer convex optimization finds x^* and y^* that solve the problem:
    minimize f(x,y) subject to the constraints cons(x,y), where x is a vector of real variables and y is a vector of integer variables.
- When the objective function is real valued, ConvexOptimization solves problems with x ∈ ℂ^n by internally converting to real variables x = x_re + I x_im, where x_re ∈ ℝ^n and x_im ∈ ℝ^n.
- The variable specification vars should be a list with elements giving variables in one of the following forms:
    v                        variable with name and dimensions inferred
    v∈Reals                  real scalar variable
    v∈Integers               integer scalar variable
    v∈Complexes              complex scalar variable
    v∈ℛ                      vector variable restricted to the geometric region ℛ
    v∈Vectors[n,dom]         vector variable in ℝ^n, ℤ^n or ℂ^n
    v∈Matrices[{m,n},dom]    matrix variable in ℝ^(m×n), ℤ^(m×n) or ℂ^(m×n)
- ConvexOptimization automatically does transformations necessary to find an efficient method to solve the minimization problem.
- The primal minimization problem as solved has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints.
- The possible solution properties "prop" include:
    "PrimalMinimizer"          a list of variable values that minimizes the objective function
    "PrimalMinimizerRules"     values for the variables vars={v1,…} that minimize the objective
    "PrimalMinimizerVector"    the vector that minimizes the objective
    "PrimalMinimumValue"       the minimum value
    "DualMaximizer"            the vectors that maximize the dual problem
    "DualMaximumValue"         the dual maximum value
    "DualityGap"               the difference between the dual and primal optimal values
    "Slack"                    vectors that convert inequality constraints to equality
    {"prop1","prop2",…}        several solution properties
- The following options may be given:
    MaxIterations       Automatic           maximum number of iterations to use
    Method              Automatic           the method to use
    PerformanceGoal     $PerformanceGoal    aspects of performance to try to optimize
    Tolerance           Automatic           the tolerance to use for internal comparisons
    WorkingPrecision    MachinePrecision    precision to use in internal computations
- The option Method→method may be used to specify the method to use. Available methods include:
    Automatic    choose the method automatically
    solver       transform the problem, if possible, to use solver to solve the problem
    "SCS"        SCS splitting conic solver
    "CSDP"       CSDP semidefinite optimization solver
    "DSDP"       DSDP semidefinite optimization solver
    "MOSEK"      commercial MOSEK convex optimization solver
    "Gurobi"     commercial Gurobi linear and quadratic optimization solver
    "Xpress"     commercial Xpress linear and quadratic optimization solver
- Method→solver may be used to specify that a particular solver is used so that the dual formulation corresponds to the formulation documented for solver. Possible solvers are LinearOptimization, LinearFractionalOptimization, QuadraticOptimization, SecondOrderConeOptimization, SemidefiniteOptimization, ConicOptimization and GeometricOptimization.
Examples
Basic Examples (2)  Summary of the most common use cases
Scope (28)  Survey of the scope of standard use cases
Basic Uses (12)
Minimize subject to the constraints
and
:

https://wolfram.com/xid/0dcz7l4qtz8i-fo97gu

Several linear inequality constraints can be expressed with VectorGreaterEqual:

https://wolfram.com/xid/0dcz7l4qtz8i-h02qvb
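For illustration, a minimal sketch with assumed objective and bounds (not the documented cell above), writing the inequalities x≥1 and y≥2 as one vector inequality:

ConvexOptimization[x + y, {x, y} \[VectorGreaterEqual] {1, 2}, {x, y}]
(* {x -> 1., y -> 2.} *)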

Use the alias v>= or \[VectorGreaterEqual] to enter the vector inequality sign ⪰:

https://wolfram.com/xid/0dcz7l4qtz8i-p8fxz5

An equivalent form using scalar inequalities:

https://wolfram.com/xid/0dcz7l4qtz8i-hs91sh


https://wolfram.com/xid/0dcz7l4qtz8i-f9fqjq

The inequality may not be the same as
due to possible threading in
:

https://wolfram.com/xid/0dcz7l4qtz8i-hdkys4


https://wolfram.com/xid/0dcz7l4qtz8i-d6veff

To avoid unintended threading in the vector sum, use Inactive[Plus]:

https://wolfram.com/xid/0dcz7l4qtz8i-dkn4r2
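A minimal sketch with an assumed constant vector a (not the documented cell): Inactive[Plus] keeps a + x as a vector sum instead of threading the constants over the symbolic vector variable x:

a = {1, 2};
ConvexOptimization[Norm[x], Inactive[Plus][a, x] \[VectorGreaterEqual] {3, 4}, {x ∈ Vectors[2, Reals]}]
(* {x -> {2., 2.}} *)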

Use constant parameter equations to avoid unintended threading in :

https://wolfram.com/xid/0dcz7l4qtz8i-jsqk1f

https://wolfram.com/xid/0dcz7l4qtz8i-mhl6dh

VectorGreaterEqual represents a conic inequality with respect to the "NonNegativeCone":

https://wolfram.com/xid/0dcz7l4qtz8i-ku7v9e

To explicitly specify the dimension of the cone, use {"NonNegativeCone",n}:

https://wolfram.com/xid/0dcz7l4qtz8i-be5yp1


https://wolfram.com/xid/0dcz7l4qtz8i-5y4b0

Minimize subject to the constraint
:

https://wolfram.com/xid/0dcz7l4qtz8i-giwve

Specify the constraint using a conic inequality with "NormCone":

https://wolfram.com/xid/0dcz7l4qtz8i-hsxeub


https://wolfram.com/xid/0dcz7l4qtz8i-ixp48

Minimize subject to the positive semidefinite matrix constraint
:

https://wolfram.com/xid/0dcz7l4qtz8i-950bl


https://wolfram.com/xid/0dcz7l4qtz8i-c3htk2

Use a vector variable and Indexed[x,i] to specify individual components:

https://wolfram.com/xid/0dcz7l4qtz8i-blyqmo

Use Vectors[n,Reals] to specify the dimension of a vector variable when it may be ambiguous:

https://wolfram.com/xid/0dcz7l4qtz8i-2gn6p



https://wolfram.com/xid/0dcz7l4qtz8i-bnr68z

Specify non-negative constraints using NonNegativeReals:

https://wolfram.com/xid/0dcz7l4qtz8i-7hh8v
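A minimal sketch with an assumed quadratic objective: the domain constraint replaces explicit x≥0 and y≥0 inequalities:

ConvexOptimization[(x - 1)^2 + (y + 2)^2, {x, y} ∈ NonNegativeReals, {x, y}]
(* {x -> 1., y -> 0.} *)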

An equivalent form using the vector inequality ⪰:

https://wolfram.com/xid/0dcz7l4qtz8i-cc6orq

Maximize the area of a rectangle with perimeter at most 1 and height at most half the width:

https://wolfram.com/xid/0dcz7l4qtz8i-f00fus


When the width and height are positive, the problem can be solved by GeometricOptimization methods:

https://wolfram.com/xid/0dcz7l4qtz8i-u870o

Using method GeometricOptimization implicitly assumes positivity:

https://wolfram.com/xid/0dcz7l4qtz8i-go0zwy

Integer Variables (4)
Specify integer variables using Integers:

https://wolfram.com/xid/0dcz7l4qtz8i-otipab
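A minimal sketch with an assumed linear objective (not the documented cell): x stays real while n is declared an integer variable:

ConvexOptimization[2 x + 3 n, {x + n >= 5/2, 0 <= x <= 1, 0 <= n <= 4}, {x, n ∈ Integers}]
(* {x -> 0.5, n -> 2} *)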

Specify integer domain constraints on vector variables using Vectors[n,Integers]:

https://wolfram.com/xid/0dcz7l4qtz8i-hfsdi1

Specify non-negative integer domain constraints using NonNegativeIntegers:

https://wolfram.com/xid/0dcz7l4qtz8i-dox9ud

Specify non-positive integer domain constraints using NonPositiveIntegers:

https://wolfram.com/xid/0dcz7l4qtz8i-d0jwnq

Complex Variables (8)
Specify complex variables using Complexes:

https://wolfram.com/xid/0dcz7l4qtz8i-5o1ku
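A minimal sketch with an assumed objective and bound (not the documented cell): a complex scalar z with the real-valued convex objective Abs[z - (1 + I)]:

ConvexOptimization[Abs[z - (1 + I)], {Abs[z] <= 1}, {z ∈ Complexes}]
(* z approximately 0.707107 + 0.707107 I *)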

Minimize a real objective with complex variables and complex constraints
:

https://wolfram.com/xid/0dcz7l4qtz8i-cm37qt

Let . Expanding out the constraints into real components gives:

https://wolfram.com/xid/0dcz7l4qtz8i-dew3xk


https://wolfram.com/xid/0dcz7l4qtz8i-dktxeb

https://wolfram.com/xid/0dcz7l4qtz8i-nespyn

Solve the problem with real-valued objective and complex variables and constraints:

https://wolfram.com/xid/0dcz7l4qtz8i-ehropo

Solve the same problem with real variables and constraints:

https://wolfram.com/xid/0dcz7l4qtz8i-g3y54n

Use a quadratic objective with Hermitian matrix
and real-valued variables:

https://wolfram.com/xid/0dcz7l4qtz8i-ffbsnf

Use objective (1/2)Inactive[Dot][Conjugate[x],q,x] with a Hermitian matrix and complex variables:

https://wolfram.com/xid/0dcz7l4qtz8i-b753at

Use a quadratic constraint with Hermitian matrix
and real-valued variables:

https://wolfram.com/xid/0dcz7l4qtz8i-iopcnw

Use constraint (1/2)Inactive[Dot][Conjugate[x],q,x]≤d with a Hermitian matrix and complex variables:

https://wolfram.com/xid/0dcz7l4qtz8i-ctjwv2

Find the Hermitian matrix with minimum 2-norm (largest singular value) such that the matrix is positive semidefinite:

https://wolfram.com/xid/0dcz7l4qtz8i-fb9puk

The minimum for the largest singular value is:

https://wolfram.com/xid/0dcz7l4qtz8i-d5kav

Use a linear matrix inequality constraint with Hermitian or real symmetric matrices:

https://wolfram.com/xid/0dcz7l4qtz8i-1gsvl
The variables in linear matrix inequalities need to be real for the sum to remain Hermitian:

https://wolfram.com/xid/0dcz7l4qtz8i-c15nwi

Primal Model Properties (1)
Minimize over the intersection of a triangle
and a disk
:

https://wolfram.com/xid/0dcz7l4qtz8i-k0n6zq

https://wolfram.com/xid/0dcz7l4qtz8i-cgj2sa

Get the primal minimizer as a vector:

https://wolfram.com/xid/0dcz7l4qtz8i-bo4eb7


https://wolfram.com/xid/0dcz7l4qtz8i-ltg9mo


https://wolfram.com/xid/0dcz7l4qtz8i-cs0dy5

Dual Model Properties (3)

https://wolfram.com/xid/0dcz7l4qtz8i-gek79l

https://wolfram.com/xid/0dcz7l4qtz8i-yqer6

The dual problem is to maximize subject to
:

https://wolfram.com/xid/0dcz7l4qtz8i-fobwz9

The primal minimum value and the dual maximum value coincide because of strong duality:

https://wolfram.com/xid/0dcz7l4qtz8i-f5hvud

That is the same as having a duality gap of zero. In general, at optimal points:

https://wolfram.com/xid/0dcz7l4qtz8i-w7chou

Get the dual maximum value and dual maximizer directly using solution properties:

https://wolfram.com/xid/0dcz7l4qtz8i-c41cmg


https://wolfram.com/xid/0dcz7l4qtz8i-bfgnnt
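A minimal sketch with an assumed problem: several solution properties can be requested at once as a list in the fourth argument:

ConvexOptimization[x^2 + y^2, {x + y >= 1}, {x, y}, {"PrimalMinimumValue", "DualMaximumValue", "DualityGap"}]
(* {0.5, 0.5, gap near 0.} *)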

The "DualMaximizer" can be obtained with:

https://wolfram.com/xid/0dcz7l4qtz8i-bp0963

The dual maximizer vector partitions match the number and dimensions of the dual cones:

https://wolfram.com/xid/0dcz7l4qtz8i-bthcm2

To get the dual format for a particular problem-type solver, specify it as a method option:

https://wolfram.com/xid/0dcz7l4qtz8i-c7q1y4


https://wolfram.com/xid/0dcz7l4qtz8i-e13sjq

Options (13)  Common values & functionality for each option
Method (8)
"SCS" is a splitting conic solver method:

https://wolfram.com/xid/0dcz7l4qtz8i-zah0y3
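A minimal sketch with an assumed problem (not the documented cell), selecting the solver explicitly:

ConvexOptimization[Norm[{x, y}], {x + 2 y >= 3}, {x, y}, Method -> "SCS"]
(* x approximately 0.6, y approximately 1.2 *)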

"CSDP" is an interior point method for semidefinite problems:

https://wolfram.com/xid/0dcz7l4qtz8i-d6m06w

"DSDP" is an alternative interior point method for semidefinite problems:

https://wolfram.com/xid/0dcz7l4qtz8i-f2x17u

"IPOPT" is an interior point method for nonlinear problems:

https://wolfram.com/xid/0dcz7l4qtz8i-f5btsf

Different methods have different default tolerances, which affects the accuracy and precision:

https://wolfram.com/xid/0dcz7l4qtz8i-73rp7
Compute exact and approximate solutions:

https://wolfram.com/xid/0dcz7l4qtz8i-ck8mlf

https://wolfram.com/xid/0dcz7l4qtz8i-fltgk5

https://wolfram.com/xid/0dcz7l4qtz8i-drob0i

https://wolfram.com/xid/0dcz7l4qtz8i-bmlerf

https://wolfram.com/xid/0dcz7l4qtz8i-e25su7
"SCS" has a default tolerance of :

https://wolfram.com/xid/0dcz7l4qtz8i-70r9f

"CSDP", "DSDP" and "IPOPT" have default tolerances of :

https://wolfram.com/xid/0dcz7l4qtz8i-bejajo


https://wolfram.com/xid/0dcz7l4qtz8i-ugib


https://wolfram.com/xid/0dcz7l4qtz8i-bitksp

When method "SCS" is specified, it is called with the SCS library default tolerance of 10-3:

https://wolfram.com/xid/0dcz7l4qtz8i-hekt3l

With default options, this problem is solved by method "SCS" with tolerance 10^-6:

https://wolfram.com/xid/0dcz7l4qtz8i-ei1f1


https://wolfram.com/xid/0dcz7l4qtz8i-h5vmmf

Use methods "CSDP" or "DSDP" for constraints that are converted to semidefinite constraints:

https://wolfram.com/xid/0dcz7l4qtz8i-v4mgs

https://wolfram.com/xid/0dcz7l4qtz8i-c32gq5

https://wolfram.com/xid/0dcz7l4qtz8i-kdi4ev
Solve the problem using method "CSDP":

https://wolfram.com/xid/0dcz7l4qtz8i-2s4oil

Solve the problem using method "DSDP":

https://wolfram.com/xid/0dcz7l4qtz8i-be9hsi

Use method "IPOPT" to obtain accurate solutions when "CSDP" and "DSDP" are not applicable:

https://wolfram.com/xid/0dcz7l4qtz8i-es1m2h

https://wolfram.com/xid/0dcz7l4qtz8i-fbxo8
"IPOPT" produces more accurate results than "SCS", but is typically much slower:

https://wolfram.com/xid/0dcz7l4qtz8i-jvpj3v

Compare timing with method "SCS":

https://wolfram.com/xid/0dcz7l4qtz8i-34lvbn

PerformanceGoal (1)
The default value of the option PerformanceGoal is $PerformanceGoal:

https://wolfram.com/xid/0dcz7l4qtz8i-ehn5oy

Use PerformanceGoal"Quality" to get a more accurate result:

https://wolfram.com/xid/0dcz7l4qtz8i-dgovrf


https://wolfram.com/xid/0dcz7l4qtz8i-hqvof9
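A minimal sketch with an assumed problem showing the option:

ConvexOptimization[Norm[{x, y}], {x + 2 y >= 3}, {x, y}, PerformanceGoal -> "Quality"]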

Use PerformanceGoal"Speed" to get a result faster, but at the cost of quality:

https://wolfram.com/xid/0dcz7l4qtz8i-bxnhor

https://wolfram.com/xid/0dcz7l4qtz8i-gibk5q


https://wolfram.com/xid/0dcz7l4qtz8i-i5or46

The "Speed" goal gives a less accurate result:

https://wolfram.com/xid/0dcz7l4qtz8i-ev4vd2

Tolerance (2)
A smaller Tolerance setting gives a more precise result:

https://wolfram.com/xid/0dcz7l4qtz8i-ed5tm
Compute the exact minimum value with Minimize:

https://wolfram.com/xid/0dcz7l4qtz8i-fg7x7y

Compute the error in the minimum value with different Tolerance settings:

https://wolfram.com/xid/0dcz7l4qtz8i-nrbx9s

Visualize the change in minimum value error with respect to tolerance:

https://wolfram.com/xid/0dcz7l4qtz8i-lrb7st

A smaller Tolerance setting gives a more precise answer, but may take longer to compute:

https://wolfram.com/xid/0dcz7l4qtz8i-ce2zt8

https://wolfram.com/xid/0dcz7l4qtz8i-bs1gyt


https://wolfram.com/xid/0dcz7l4qtz8i-lzaad9

The tighter tolerance gives a more precise answer:

https://wolfram.com/xid/0dcz7l4qtz8i-4ge5q0
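As a minimal sketch with an assumed problem, a tighter tolerance can be passed directly:

ConvexOptimization[Norm[{x, y}], {x + 2 y >= 3}, {x, y}, Tolerance -> 10^-8]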

WorkingPrecision (2)
The default working precision is MachinePrecision:

https://wolfram.com/xid/0dcz7l4qtz8i-d5pae

Using WorkingPrecision→Infinity will give an exact solution if possible:

https://wolfram.com/xid/0dcz7l4qtz8i-ba737c
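A minimal sketch with an assumed linear problem, for which an exact solution is possible:

ConvexOptimization[x + y, {x >= 1/3, y >= 2/3}, {x, y}, WorkingPrecision -> Infinity]
(* {x -> 1/3, y -> 2/3} *)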

WorkingPrecision other than MachinePrecision and ∞ will try to use a method with extended precision support:

https://wolfram.com/xid/0dcz7l4qtz8i-duxrvu

Using WorkingPrecision→Automatic will try to use the precision of the input problem:

https://wolfram.com/xid/0dcz7l4qtz8i-gem90h


https://wolfram.com/xid/0dcz7l4qtz8i-enbzty

Solve a problem with a quadratic objective using 24-digit precision:

https://wolfram.com/xid/0dcz7l4qtz8i-c238vl

There is currently no method that solves problems with quadratic objectives using exact arithmetic. When the requested precision is not supported, the computation uses machine numbers:

https://wolfram.com/xid/0dcz7l4qtz8i-kmlez4


Applications (30)  Sample problems that can be solved with this function
Basic Modeling Transformations (11)
Maximize subject to
. Solve a maximization problem by negating the objective function:

https://wolfram.com/xid/0dcz7l4qtz8i-7aogom

Negate the primal minimum value to get the corresponding maximal value:

https://wolfram.com/xid/0dcz7l4qtz8i-9hulab

Minimize subject to
. Since the constraint
is not convex, use a semidefinite constraint to make the convexity explicit:

https://wolfram.com/xid/0dcz7l4qtz8i-pe99r

A matrix is positive semidefinite if and only if the determinants of all upper-left submatrices are non-negative:

https://wolfram.com/xid/0dcz7l4qtz8i-xe08f


https://wolfram.com/xid/0dcz7l4qtz8i-92z2a
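As a sketch of the technique with an illustrative constraint (x y >= 1 with x, y >= 0, not necessarily the constraint above): the hyperbolic constraint is equivalent to requiring the matrix {{x, 1}, {1, y}} to be positive semidefinite:

ConvexOptimization[x + y, VectorGreaterEqual[{{{x, 1}, {1, y}}, 0}, {"SemidefiniteCone", 2}], {x, y}]
(* {x -> 1., y -> 1.} *)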

Minimize subject to
, assuming
when
. Using the auxiliary variable
, the objective is to minimize
such that
:

https://wolfram.com/xid/0dcz7l4qtz8i-f9iak7

https://wolfram.com/xid/0dcz7l4qtz8i-pxo8p

A Schur complement condition says that if , a block matrix
iff
. Therefore,
iff
. Use Inactive[Plus] for constructing the constraints to avoid threading:

https://wolfram.com/xid/0dcz7l4qtz8i-ctx6t

Minimize over an ellipse centered at
:

https://wolfram.com/xid/0dcz7l4qtz8i-e1u82u

The epigraph transformation can be used to construct a problem with a linear objective and additional variable and constraint:

https://wolfram.com/xid/0dcz7l4qtz8i-ccn1bg

In this form, the problem can be solved directly with ConicOptimization:

https://wolfram.com/xid/0dcz7l4qtz8i-fk0hae
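As an illustration of the epigraph idea with an assumed objective x^2 + 1 over x >= 2 (not the problem above): introduce t and minimize t subject to the objective being at most t:

ConvexOptimization[t, {x^2 + 1 <= t, x >= 2}, {x, t}]
(* {x -> 2., t -> 5.} *)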

Minimize , where
is a nondecreasing function, by instead minimizing
. The primal minimizer
will remain the same for both problems. Consider minimizing
subject to
:

https://wolfram.com/xid/0dcz7l4qtz8i-l7wv9y

https://wolfram.com/xid/0dcz7l4qtz8i-llr50x

The minimum value for can be obtained by applying
to the minimum value of
:

https://wolfram.com/xid/0dcz7l4qtz8i-fztpat

ConvexOptimization will automatically do this transformation:

https://wolfram.com/xid/0dcz7l4qtz8i-h8afse

Find that minimizes the largest eigenvalue of a symmetric matrix that depends linearly on the decision variables
,
. The problem can be formulated as a linear matrix inequality since
is equivalent to
, where
is the
eigenvalue of
. Define the linear matrix function
:

https://wolfram.com/xid/0dcz7l4qtz8i-gijetd

https://wolfram.com/xid/0dcz7l4qtz8i-cfsdrn

A real symmetric matrix can be diagonalized with an orthogonal matrix
so
. Hence
iff
. Since any
, taking
,
, hence
iff
. Numerically simulate to show that these formulations are equivalent:

https://wolfram.com/xid/0dcz7l4qtz8i-bxagk3


https://wolfram.com/xid/0dcz7l4qtz8i-l9zvaw

https://wolfram.com/xid/0dcz7l4qtz8i-bai8wg


https://wolfram.com/xid/0dcz7l4qtz8i-4xe7n

Run a Monte Carlo simulation to check the plausibility of the result:

https://wolfram.com/xid/0dcz7l4qtz8i-c3z8kg

Find that maximizes the smallest eigenvalue of a symmetric matrix
that depends linearly on the decision variables
. Define the linear matrix function
:

https://wolfram.com/xid/0dcz7l4qtz8i-bzpzqv
The problem can be formulated as linear matrix inequality, since is equivalent to
where
is the
eigenvalue of
. To maximize
, minimize
:

https://wolfram.com/xid/0dcz7l4qtz8i-drcmis

Run a Monte Carlo simulation to check the plausibility of the result:

https://wolfram.com/xid/0dcz7l4qtz8i-fa3jc1

Find that minimizes the difference between the largest and the smallest eigenvalues of a symmetric matrix
that depends linearly on the decision variables
. Define the linear matrix function
:

https://wolfram.com/xid/0dcz7l4qtz8i-hfi7wb
The problem can be formulated as a linear matrix inequality, since is equivalent to
, where
is the
eigenvalue of
. Solve the resulting problem:

https://wolfram.com/xid/0dcz7l4qtz8i-59z3j8

In this case, the minimum and maximum eigenvalues coincide and the difference is 0:

https://wolfram.com/xid/0dcz7l4qtz8i-bjmjxk

Minimize the largest (by absolute value) eigenvalue of a symmetric matrix that depends linearly on the decision variables
:

https://wolfram.com/xid/0dcz7l4qtz8i-busck0
The largest eigenvalue satisfies The largest (by absolute value) negative eigenvalue of
is the largest eigenvalue of
and satisfies
:

https://wolfram.com/xid/0dcz7l4qtz8i-okb17


https://wolfram.com/xid/0dcz7l4qtz8i-czkh9j

Find that minimizes the largest singular value
of a symmetric matrix
that depends linearly on the decision variables
:

https://wolfram.com/xid/0dcz7l4qtz8i-hxn206
The largest singular value of
is the square root of the largest eigenvalue of
, and from a preceding example it satisfies
, or equivalently
:

https://wolfram.com/xid/0dcz7l4qtz8i-5kdhnd


https://wolfram.com/xid/0dcz7l4qtz8i-jymvm2

For quadratic sets , which include ellipsoids, quadratic cones and paraboloids, determine whether
, where
are symmetric matrices,
are vectors and
scalars:

https://wolfram.com/xid/0dcz7l4qtz8i-lahmkn

https://wolfram.com/xid/0dcz7l4qtz8i-cyznvq

Assuming that the sets are full dimensional, the S-procedure says that
iff there exists some non-negative number
such that
Visually see that there exists a non-negative
:

https://wolfram.com/xid/0dcz7l4qtz8i-tvxzei

Use 0 for an objective function since feasibility is a concern. Since λ≥0, it follows that :

https://wolfram.com/xid/0dcz7l4qtz8i-b7jg0b

Geometry Problems (8)
Minimize the length of the diagonal of a rectangle of area 4 such that the width plus three times the height is less than 7:

https://wolfram.com/xid/0dcz7l4qtz8i-lmf0sv

Find the minimum distance between two disks of radius 1 centered at and
. Let
be a point on disk 1. Let
be a point on disk 2. The objective is to minimize
subject to constraints
:

https://wolfram.com/xid/0dcz7l4qtz8i-x0rc5a

Visualize the positions of the two points:

https://wolfram.com/xid/0dcz7l4qtz8i-u44093

The distance between the points is:

https://wolfram.com/xid/0dcz7l4qtz8i-p9wkco

Find the half-lengths of the principal axes that maximize the volume of an ellipsoid with a surface area of at most 1:

https://wolfram.com/xid/0dcz7l4qtz8i-d31gm2
The surface area can be approximated by:

https://wolfram.com/xid/0dcz7l4qtz8i-f3cmkb
Maximize the volume by minimizing its reciprocal:

https://wolfram.com/xid/0dcz7l4qtz8i-cye8aq

This is the sphere. Including additional constraints on the axes lengths changes this:

https://wolfram.com/xid/0dcz7l4qtz8i-b3qp7d


https://wolfram.com/xid/0dcz7l4qtz8i-v24f6

Find the radius and center
of a minimal enclosing ball that encompasses a given region:

https://wolfram.com/xid/0dcz7l4qtz8i-1xi3kb
Minimize the radius subject to the constraints
:

https://wolfram.com/xid/0dcz7l4qtz8i-fmr9yi

https://wolfram.com/xid/0dcz7l4qtz8i-van10s


https://wolfram.com/xid/0dcz7l4qtz8i-uhd7v9
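A minimal sketch with assumed sample points (not the region above), writing the center as explicit scalars {c1, c2} so that p - {c1, c2} subtracts componentwise:

pts = {{0, 0}, {2, 0}, {1, 2}};
ConvexOptimization[r, Table[Norm[p - {c1, c2}] <= r, {p, pts}], {r, c1, c2}]
(* {r -> 1.25, c1 -> 1., c2 -> 0.75} *)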

The minimal enclosing ball can be found efficiently using BoundingRegion:

https://wolfram.com/xid/0dcz7l4qtz8i-1dvvd5

Find the analytic center of a convex polygon. The analytic center is a point that maximizes the product of distances to the constraints:

https://wolfram.com/xid/0dcz7l4qtz8i-b4h1vk
Each segment of the convex polygon can be represented as intersections of half-planes . Extract the linear inequalities:

https://wolfram.com/xid/0dcz7l4qtz8i-pfce4d
The objective is to maximize . Taking
and negating the objective, the transformed objective is
:

https://wolfram.com/xid/0dcz7l4qtz8i-8uh8pi
Using auxiliary variable , the transformed objective is
subject to the constraint
:

https://wolfram.com/xid/0dcz7l4qtz8i-i6xstb

Visualize the location of the center:

https://wolfram.com/xid/0dcz7l4qtz8i-o16bx1

Test whether an ellipsoid is a subset of another ellipsoid of the form :

https://wolfram.com/xid/0dcz7l4qtz8i-wthokb
Using the S-procedure, it can be shown that ellipse 2 is a subset of ellipse 1 iff :

https://wolfram.com/xid/0dcz7l4qtz8i-kbhlhz
Check if the condition is satisfied:

https://wolfram.com/xid/0dcz7l4qtz8i-rjwbyg

Convert the ellipsoids into explicit form and confirm that ellipse 2 is within ellipse 1:

https://wolfram.com/xid/0dcz7l4qtz8i-cifcl8

https://wolfram.com/xid/0dcz7l4qtz8i-298g57

Move ellipsoid 2 such that it overlaps with ellipsoid 1:

https://wolfram.com/xid/0dcz7l4qtz8i-23sh7h
A test now shows that the problem is infeasible, indicating that ellipsoid 2 is not a subset of ellipsoid 1:

https://wolfram.com/xid/0dcz7l4qtz8i-jmu9yu


https://wolfram.com/xid/0dcz7l4qtz8i-6s3luj

Find the maximum-area ellipse parametrized as that can be fitted into a convex polygon:

https://wolfram.com/xid/0dcz7l4qtz8i-3tg2i1
Each segment of the convex polygon can be represented as intersections of half-planes . Extract the linear inequalities:

https://wolfram.com/xid/0dcz7l4qtz8i-vq06yg
Applying the parametrization to the half-planes gives . The term
. Thus, the constraints are
:

https://wolfram.com/xid/0dcz7l4qtz8i-8qvwem
Minimizing the area is equivalent to minimizing , which is equivalent to minimizing
:

https://wolfram.com/xid/0dcz7l4qtz8i-ixiqu4

Convert the parametrized ellipse into the explicit form as :

https://wolfram.com/xid/0dcz7l4qtz8i-ndd737


https://wolfram.com/xid/0dcz7l4qtz8i-uhg6ug

Find the smallest ellipsoid parametrized as that encompasses a set of points in 3D by minimizing the volume:

https://wolfram.com/xid/0dcz7l4qtz8i-pjykdi
For each point , the constraint
must be satisfied:

https://wolfram.com/xid/0dcz7l4qtz8i-nc9kzf
Minimizing the volume is equivalent to minimizing , which is equivalent to minimizing
:

https://wolfram.com/xid/0dcz7l4qtz8i-kshbnc

Convert the parametrized ellipse into the explicit form :

https://wolfram.com/xid/0dcz7l4qtz8i-ez4q1q


https://wolfram.com/xid/0dcz7l4qtz8i-48kw5a

A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:

https://wolfram.com/xid/0dcz7l4qtz8i-7j1lcp


https://wolfram.com/xid/0dcz7l4qtz8i-e4z5ec

Data-Fitting Problems (4)
Minimize subject to the constraints
for a given matrix a and vector b:

https://wolfram.com/xid/0dcz7l4qtz8i-sqawoj

https://wolfram.com/xid/0dcz7l4qtz8i-cen26w
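A minimal sketch with small assumed data and an illustrative non-negativity constraint (the documented a, b and constraints are not shown above):

a = {{1, 0}, {0, 1}, {1, 1}}; b = {2, -1, 1};
ConvexOptimization[Norm[a.{x1, x2} - b], {x1, x2} \[VectorGreaterEqual] 0, {x1, x2}]
(* {x1 -> 1.5, x2 -> 0.} *)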

Fit a cubic curve to discrete data such that the first and last points of the data lie on the curve:

https://wolfram.com/xid/0dcz7l4qtz8i-znbwtk
Construct the matrix using DesignMatrix:

https://wolfram.com/xid/0dcz7l4qtz8i-7u326f
Define the constraint so that the first and last points must lie on the curve:

https://wolfram.com/xid/0dcz7l4qtz8i-6h1xyq
Find the coefficients by minimizing
:

https://wolfram.com/xid/0dcz7l4qtz8i-4jv0wc


https://wolfram.com/xid/0dcz7l4qtz8i-elazro

Find a fit to nonlinear discrete data that is less sensitive to outliers by minimizing:

https://wolfram.com/xid/0dcz7l4qtz8i-5edia8
Fit the data using the bases . The interpolating function will be
:

https://wolfram.com/xid/0dcz7l4qtz8i-0el18p

https://wolfram.com/xid/0dcz7l4qtz8i-b64kk9


https://wolfram.com/xid/0dcz7l4qtz8i-smzt60

Compare the interpolating function with the reference function:

https://wolfram.com/xid/0dcz7l4qtz8i-c1na99

Find an regularized fit to complex data by minimizing
for a complex
:

https://wolfram.com/xid/0dcz7l4qtz8i-cyinhk
Construct the matrix using DesignMatrix, for the basis
:

https://wolfram.com/xid/0dcz7l4qtz8i-kn9a90

https://wolfram.com/xid/0dcz7l4qtz8i-einq6s

Let be the fit defined as a function of the real and imaginary components of
:

https://wolfram.com/xid/0dcz7l4qtz8i-od26r
Visualize the result for the real component of :

https://wolfram.com/xid/0dcz7l4qtz8i-i8tg0b

Visualize the results for the imaginary component of :

https://wolfram.com/xid/0dcz7l4qtz8i-f8d31g

Sum-of-Squares Representation (1)
Represent a given polynomial in terms of the sum-of-squares polynomial
:

https://wolfram.com/xid/0dcz7l4qtz8i-38ildp
The objective is to find such that
, where
is a vector of monomials:

https://wolfram.com/xid/0dcz7l4qtz8i-sdlyvv
Construct the symmetric matrix :

https://wolfram.com/xid/0dcz7l4qtz8i-g4rofr
Find the polynomial coefficients of and
and make sure they are equal:

https://wolfram.com/xid/0dcz7l4qtz8i-5u259f


https://wolfram.com/xid/0dcz7l4qtz8i-n40txk

The quadratic term , where
is a lower-triangular matrix obtained from the Cholesky decomposition of
:

https://wolfram.com/xid/0dcz7l4qtz8i-82j1q

Compare the sum-of-squares polynomial to the given polynomial:

https://wolfram.com/xid/0dcz7l4qtz8i-1s4mjs

Classification Problems (3)
Find a line that separates two groups of points
and
:

https://wolfram.com/xid/0dcz7l4qtz8i-mq8mpw
For separation, set 1 must satisfy and set 2 must satisfy
:

https://wolfram.com/xid/0dcz7l4qtz8i-wz5f21
The objective is to minimize , which gives twice the thickness between
and
:

https://wolfram.com/xid/0dcz7l4qtz8i-huxo4m


https://wolfram.com/xid/0dcz7l4qtz8i-mm9axc


https://wolfram.com/xid/0dcz7l4qtz8i-qddxuk
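A minimal sketch with assumed point sets (the datasets above are not shown): a maximum-margin separating line w.p + b is found by minimizing Norm[w] subject to the two margin constraints:

set1 = {{2, 2}, {3, 1}, {3, 3}};
set2 = {{-1, 0}, {0, -1}, {-1, -2}};
ConvexOptimization[Norm[w],
 Join[Table[w.p + b >= 1, {p, set1}], Table[w.p + b <= -1, {p, set2}]],
 {w ∈ Vectors[2, Reals], b}]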

Find a quadratic polynomial that separates two groups of 3D points and
:

https://wolfram.com/xid/0dcz7l4qtz8i-0kz712

Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:

https://wolfram.com/xid/0dcz7l4qtz8i-r437k9
For separation, set 1 must satisfy and set 2 must satisfy
:

https://wolfram.com/xid/0dcz7l4qtz8i-602x47
Find the separating polynomial by minimizing :

https://wolfram.com/xid/0dcz7l4qtz8i-iwk3o

The polynomial separating the two groups of points is:

https://wolfram.com/xid/0dcz7l4qtz8i-1vrepp

Plot the polynomial separating the two datasets:

https://wolfram.com/xid/0dcz7l4qtz8i-v2nioo

Separate a given set of points into different groups. This is done by finding the centers
for each group by minimizing
, where
is a given local kernel and
is a given penalty parameter:

https://wolfram.com/xid/0dcz7l4qtz8i-faena0
The kernel is a k-nearest-neighbor function: its value is 1 when a point is among the k nearest neighbors and 0 otherwise. For this problem, a fixed number of nearest neighbors is selected:

https://wolfram.com/xid/0dcz7l4qtz8i-g2rd9z

https://wolfram.com/xid/0dcz7l4qtz8i-he6ucu

https://wolfram.com/xid/0dcz7l4qtz8i-kcaa0u
For each data point, there exists a corresponding center. Data belonging to the same group will have the same center value:

https://wolfram.com/xid/0dcz7l4qtz8i-dl9y86

Extract and plot the grouped points:

https://wolfram.com/xid/0dcz7l4qtz8i-feehz1

Facility Location Problems (1)
Find the positions of various cell towers and the range
needed to serve clients located at
:

https://wolfram.com/xid/0dcz7l4qtz8i-432sb2
Each cell tower consumes power proportional to its range, which is given by . The objective is to minimize the power consumption:

https://wolfram.com/xid/0dcz7l4qtz8i-qnmsp6
Let be a decision variable indicating that
if client
is covered by cell tower
:

https://wolfram.com/xid/0dcz7l4qtz8i-yup7tp
Each cell tower must be located such that its range covers some of the clients:

https://wolfram.com/xid/0dcz7l4qtz8i-nw7y9g
Each cell tower can cover multiple clients:

https://wolfram.com/xid/0dcz7l4qtz8i-nb1u4e
Each cell tower has a minimum and maximum coverage:

https://wolfram.com/xid/0dcz7l4qtz8i-lbqs1t

https://wolfram.com/xid/0dcz7l4qtz8i-kz0tgg
Find the cell tower positions and their ranges:

https://wolfram.com/xid/0dcz7l4qtz8i-d9gi87

Extract cell tower position and range:

https://wolfram.com/xid/0dcz7l4qtz8i-yangc4
Visualize the positions and ranges of the towers with respect to client locations:

https://wolfram.com/xid/0dcz7l4qtz8i-7bdoho

Portfolio Optimization (1)
Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:

https://wolfram.com/xid/0dcz7l4qtz8i-kut0cq
The return is given by , where
is a vector of expected return value of each individual stock:

https://wolfram.com/xid/0dcz7l4qtz8i-2z1lpt
The risk is given by ;
is a risk-aversion parameter and
:

https://wolfram.com/xid/0dcz7l4qtz8i-get9k6
The objective is to maximize return while minimizing risk for a specified risk-aversion parameter:

https://wolfram.com/xid/0dcz7l4qtz8i-61n2dz
The effect on market prices of stocks due to the buying and selling of stocks is modeled by , which is modeled by a power cone using the epigraph transformation:

https://wolfram.com/xid/0dcz7l4qtz8i-1k0oif
The weights must all be greater than 0 and the weights plus market impact costs must add to 1:

https://wolfram.com/xid/0dcz7l4qtz8i-vgdxhc
Compute the returns and corresponding risk for a range of risk-aversion parameters:

https://wolfram.com/xid/0dcz7l4qtz8i-4zi5kn
The optimal over a range of
gives an upper-bound envelope on the tradeoff between return and risk:

https://wolfram.com/xid/0dcz7l4qtz8i-3deaei

Compute the weights for a specified number of risk-aversion parameters:

https://wolfram.com/xid/0dcz7l4qtz8i-scqccb
By accounting for the market costs, a diversified portfolio can be obtained for low risk aversion, but when the risk aversion is high, the market impact cost dominates, due to purchasing a less diversified stock:

https://wolfram.com/xid/0dcz7l4qtz8i-18xvc1

Image Processing (1)
Recover a corrupted image by finding an image that is closest under the total variation norm:

https://wolfram.com/xid/0dcz7l4qtz8i-0szo9

Create a corrupted image by randomly deleting 40% of the data points.

https://wolfram.com/xid/0dcz7l4qtz8i-b3v1nu

The objective is to minimize , where
is the image data:

https://wolfram.com/xid/0dcz7l4qtz8i-bmz6mt
Assume that any nonzero data points are uncorrupted. For these positions, set
:

https://wolfram.com/xid/0dcz7l4qtz8i-ga5ygg
Find the solution and show the restored image:

https://wolfram.com/xid/0dcz7l4qtz8i-nn9ycy

https://wolfram.com/xid/0dcz7l4qtz8i-kchhxt

Text
Wolfram Research (2020), ConvexOptimization, Wolfram Language function, https://reference.wolfram.com/language/ref/ConvexOptimization.html.
CMS
Wolfram Language. 2020. "ConvexOptimization." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/ConvexOptimization.html.
APA
Wolfram Language. (2020). ConvexOptimization. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/ConvexOptimization.html
BibTeX
@misc{reference.wolfram_2025_convexoptimization, author="Wolfram Research", title="{ConvexOptimization}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/ConvexOptimization.html}", note=[Accessed: 19-May-2025]}
BibLaTeX
@online{reference.wolfram_2025_convexoptimization, organization={Wolfram Research}, title={ConvexOptimization}, year={2020}, url={https://reference.wolfram.com/language/ref/ConvexOptimization.html}, note=[Accessed: 19-May-2025]}