QuadraticOptimization

QuadraticOptimization[f,cons,vars]
finds values of variables vars that minimize the quadratic objective f subject to linear constraints cons.
QuadraticOptimization[{q,c},{a,b}]
finds a vector x that minimizes the quadratic objective 1/2 x.q.x+c.x subject to the linear inequality constraints a.x+b⪰0.
QuadraticOptimization[…,{dom1,dom2,…}]
takes xi to be in the domain domi, where domi is Integers or Reals.
Details and Options




- Quadratic optimization is also known as quadratic programming (QP), mixed-integer quadratic programming (MIQP) or linearly constrained quadratic optimization.
- Quadratic optimization is typically used in problems such as parameter fitting, portfolio optimization and geometric distance problems.
- Quadratic optimization is a convex optimization problem that can be solved globally and efficiently with real, integer or complex variables.
- Quadratic optimization finds x∈ℝ^n that solves the primal problem: »
-
minimize 1/2 x.q.x+c.x
subject to constraints a.x+b⪰0, aeq.x=beq
where q∈S_+^n
- The space S_+^n consists of n×n symmetric positive semidefinite matrices.
- Mixed-integer quadratic optimization finds x∈ℝ^r and y∈ℤ^s that solve the problem:
-
minimize 1/2 (x,y).q.(x,y)+c.(x,y)
subject to constraints a.(x,y)+b⪰0, aeq.(x,y)=beq
- When the objective function is real valued, QuadraticOptimization solves problems with x∈ℂ^n by internally converting to the real variables v=(v1,v2), where x=v1+i v2, v1∈ℝ^n and v2∈ℝ^n.
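The primal problem above can be cross-checked outside the Wolfram system. Below is a minimal Python sketch with SciPy on small illustrative data (`scipy.optimize.minimize` is a general constrained solver, not a dedicated QP method, but it suffices for this toy instance):

```python
import numpy as np
from scipy.optimize import minimize

# Primal QP: minimize 1/2 x.q.x + c.x  subject to  a.x + b >= 0
# (same conventions as the problem statement above; illustrative data)
q = 2 * np.eye(2)           # quadratic objective matrix (positive definite)
c = np.zeros(2)             # linear objective vector
a = np.array([[1.0, 1.0]])  # one inequality constraint row
b = np.array([-1.0])        # a.x + b >= 0  <=>  x1 + x2 >= 1

objective = lambda x: 0.5 * x @ q @ x + c @ x
constraints = [{"type": "ineq", "fun": lambda x: a @ x + b}]

res = minimize(objective, x0=np.zeros(2), constraints=constraints)
print(res.x, res.fun)  # minimizer ~ [0.5, 0.5], minimum value ~ 0.5
```

By symmetry, the minimizer splits the constraint equally between the two variables, and the inequality is active at the optimum.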
- The variable specification vars should be a list with elements giving variables in one of the following forms:
-
v    variable with name v and dimensions inferred
v∈Reals    real scalar variable
v∈Integers    integer scalar variable
v∈Complexes    complex scalar variable
v∈ℛ    vector variable restricted to the geometric region ℛ
v∈Vectors[n,dom]    vector variable in ℝ^n, ℤ^n or ℂ^n
v∈Matrices[{m,n},dom]    matrix variable in ℝ^(m×n), ℤ^(m×n) or ℂ^(m×n)
- The constraints cons can be specified by:
-
LessEqual    scalar inequality a.x≤b
GreaterEqual    scalar inequality a.x≥b
VectorLessEqual    vector inequality a.x⪯b
VectorGreaterEqual    vector inequality a.x⪰b
Equal    scalar or vector equality a.x=b
Element    convex domain or region element
- With QuadraticOptimization[f,cons,vars], parameter equations of the form par==val, where par is not in vars and val is numerical or an array with numerical values, may be included in the constraints to define parameters used in f or cons.
- The objective function may be specified in the following ways:
-
q    the quadratic objective 1/2 x.q.x
{q,c}    the quadratic objective 1/2 x.q.x+c.x
- In the factored form, q=fᵀ.f and 1/2 x.q.x=1/2 ‖f.x‖².
- The primal minimization problem has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints.
- The Lagrangian dual problem for quadratic optimization with objective 1/2 x.q.x+c.x is given by: »
-
maximize -1/2 x.q.x-b.λ+beq.ν
subject to constraints q.x+c=aᵀ.λ+aeqᵀ.ν, λ⪰0
- With a factored quadratic objective 1/2 ‖f.x‖²+c.x, the dual problem may also be expressed as:
-
maximize -1/2 y.y-b.λ+beq.ν
subject to constraints fᵀ.y+c=aᵀ.λ+aeqᵀ.ν, λ⪰0
- The relationship between the factored dual vector y and the unfactored dual vector x is y=f.x.
- For quadratic optimization, strong duality holds if q is positive semidefinite. This means that if there is a solution to the primal minimization problem, then there is a solution to the dual maximization problem, and the dual maximum value is equal to the primal minimum value.
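Strong duality can be verified numerically in the equality-constrained case, where one linear solve of the KKT system yields both the primal minimizer and the dual multiplier. A sketch in Python with NumPy, on small illustrative data:

```python
import numpy as np

# minimize 1/2 x.q.x + c.x  subject to  aeq.x == beq
q = 2 * np.eye(2)
c = np.array([0.0, 0.0])
aeq = np.array([[1.0, 1.0]])
beq = np.array([1.0])

# KKT conditions: q.x + c = aeq^T.nu  and  aeq.x = beq
n, m = 2, 1
kkt = np.block([[q, -aeq.T], [aeq, np.zeros((m, m))]])
rhs = np.concatenate([-c, beq])
sol = np.linalg.solve(kkt, rhs)
x, nu = sol[:n], sol[n:]

primal = 0.5 * x @ q @ x + c @ x
# dual objective -1/2 x(nu).q.x(nu) + beq.nu, with x(nu) from stationarity
xnu = np.linalg.solve(q, aeq.T @ nu - c)
dual = -0.5 * xnu @ q @ xnu + beq @ nu
print(primal, dual)  # both 0.5: zero duality gap
```

The primal and dual optimal values coincide, as strong duality guarantees for positive semidefinite q.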
- The possible solution properties "prop" include:
-
"PrimalMinimizer"    a list of variable values that minimizes the objective function
"PrimalMinimizerRules"    values for the variables vars={v1,…} that minimize the objective
"PrimalMinimizerVector"    the vector x that minimizes the objective
"PrimalMinimumValue"    the minimum value
"DualMaximizer"    the vectors {λ,ν} that maximize the dual problem
"DualMaximumValue"    the dual maximum value
"DualityGap"    the difference between the dual and primal optimal values (0 because of strong duality)
"Slack"    the constraint slack vector
"ConstraintSensitivity"    sensitivity of the minimum value to constraint perturbations
"ObjectiveMatrix"    the quadratic objective matrix q
"ObjectiveVector"    the linear objective vector c
"FactoredObjectiveMatrix"    the matrix f in the factored objective form
"FactoredObjectiveVector"    the vector in the factored objective form
"LinearInequalityConstraints"    the linear inequality constraint matrix and vector {a,b}
"LinearEqualityConstraints"    the linear equality constraint matrix and vector {aeq,beq}
{"prop1","prop2",…}    several solution properties
- The dual maximizer component x is a function of λ and ν, given by q.x=aᵀ.λ+aeqᵀ.ν-c.
- The following options may be given:
-
MaxIterations    Automatic    maximum number of iterations to use
Method    Automatic    the method to use
PerformanceGoal    $PerformanceGoal    aspects of performance to try to optimize
Tolerance    Automatic    the tolerance to use for internal comparisons
WorkingPrecision    MachinePrecision    precision to use in internal computations
- The option Method->method may be used to specify the method to use. Available methods include:
-
Automatic    choose the method automatically
"COIN"    COIN quadratic programming solver
"SCS"    SCS splitting conic solver
"OSQP"    OSQP operator splitting solver for quadratic problems
"CSDP"    CSDP semidefinite optimization solver
"DSDP"    DSDP semidefinite optimization solver
"PolyhedralApproximation"    objective epigraph is approximated using polyhedra
"MOSEK"    commercial MOSEK convex optimization solver
"Gurobi"    commercial Gurobi linear and quadratic optimization solver
"Xpress"    commercial Xpress linear and quadratic optimization solver

Examples
Basic Examples (3): Summary of the most common use cases
Minimize subject to the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-r8yog9

The optimal point lies in a region defined by the constraints and where is smallest:

https://wolfram.com/xid/0j0i5s3wwuef1e-ov3ue4

Minimize subject to the equality constraint
and the inequality constraints
:

https://wolfram.com/xid/0j0i5s3wwuef1e-7q1h2q

Define objective as and constraints as
and
:

https://wolfram.com/xid/0j0i5s3wwuef1e-hdwsyr
Solve using matrix-vector inputs:

https://wolfram.com/xid/0j0i5s3wwuef1e-qazqjl

The optimal point lies where a level curve of is tangent to the equality constraint:

https://wolfram.com/xid/0j0i5s3wwuef1e-11ofzu

Minimize subject to the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-c8skou

Use the equivalent matrix-vector representation:

https://wolfram.com/xid/0j0i5s3wwuef1e-1prapq

Scope (26): Survey of the scope of standard use cases
Basic Uses (8)
Minimize subject to the constraints
:

https://wolfram.com/xid/0j0i5s3wwuef1e-fo97gu

Get the minimizing value using solution property "PrimalMinimumValue":

https://wolfram.com/xid/0j0i5s3wwuef1e-db51xm

Minimize subject to the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-gncq3g

Get the minimizing value and minimizing vector using solution property:

https://wolfram.com/xid/0j0i5s3wwuef1e-1y51df

Minimize subject to the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-4sq4te

Define the objective as and constraints as
:

https://wolfram.com/xid/0j0i5s3wwuef1e-qo36ju
Solve using matrix-vector inputs:

https://wolfram.com/xid/0j0i5s3wwuef1e-2rgqst

Minimize subject to the equality constraint
and the inequality constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-zwo3ll

Define objective as and constraints as
and
:

https://wolfram.com/xid/0j0i5s3wwuef1e-0rc1gi
Solve using matrix-vector inputs:

https://wolfram.com/xid/0j0i5s3wwuef1e-1l2kf5

Minimize subject to the constraints
:

https://wolfram.com/xid/0j0i5s3wwuef1e-yn7nj9

Define the objective as and constraints as
:

https://wolfram.com/xid/0j0i5s3wwuef1e-y3cm7z
Solve using matrix-vector inputs:

https://wolfram.com/xid/0j0i5s3wwuef1e-kcemhw

Minimize subject to the constraints
:

https://wolfram.com/xid/0j0i5s3wwuef1e-85lzkf

Specify constraints using VectorGreaterEqual (⪰) and VectorLessEqual (⪯):

https://wolfram.com/xid/0j0i5s3wwuef1e-mevlkk

Minimize subject to
. Use a vector variable and constant parameter equations:

https://wolfram.com/xid/0j0i5s3wwuef1e-wwv0dw

https://wolfram.com/xid/0j0i5s3wwuef1e-oepozw

Minimize subject to
. Use NonNegativeReals to specify the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-2pmfa5

https://wolfram.com/xid/0j0i5s3wwuef1e-osgnne

Integer Variables (4)
Specify integer domain constraints using Integers:

https://wolfram.com/xid/0j0i5s3wwuef1e-j6re13

Specify integer domain constraints on vector variables using Vectors[n,Integers]:

https://wolfram.com/xid/0j0i5s3wwuef1e-6fd7mg

https://wolfram.com/xid/0j0i5s3wwuef1e-tgmixy

Specify non-negative integer domain constraints using NonNegativeIntegers:

https://wolfram.com/xid/0j0i5s3wwuef1e-cv843v

https://wolfram.com/xid/0j0i5s3wwuef1e-is7wsk

Specify non-positive integer constraints using NonPositiveIntegers:

https://wolfram.com/xid/0j0i5s3wwuef1e-6tecbj

https://wolfram.com/xid/0j0i5s3wwuef1e-wc57sf

Complex Variables (3)
Specify complex variables using Complexes:

https://wolfram.com/xid/0j0i5s3wwuef1e-hqkcf

Use a Hermitian matrix in the objective
with real-valued variables:

https://wolfram.com/xid/0j0i5s3wwuef1e-iopcnw

Use a Hermitian matrix in the objective
and complex variables:

https://wolfram.com/xid/0j0i5s3wwuef1e-b753at

Primal Model Properties (4)
Minimize subject to the constraint
:

https://wolfram.com/xid/0j0i5s3wwuef1e-eatofq

Get vector output using "PrimalMinimizer":

https://wolfram.com/xid/0j0i5s3wwuef1e-ee5nrp

Get the rule-based result using "PrimalMinimizerRules":

https://wolfram.com/xid/0j0i5s3wwuef1e-qivqkk

Get the minimizing value of the optimization using "PrimalMinimumValue":

https://wolfram.com/xid/0j0i5s3wwuef1e-e6i8mf

Obtain the primal minimum value using symbolic inputs:

https://wolfram.com/xid/0j0i5s3wwuef1e-5rgllp

https://wolfram.com/xid/0j0i5s3wwuef1e-8t4kbz

Extract the matrix-vector inputs of the optimization problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-h1awgd
Solve using the matrix-vector form:

https://wolfram.com/xid/0j0i5s3wwuef1e-c2q70l

Recover the symbolic primal value by adding the objective constant:

https://wolfram.com/xid/0j0i5s3wwuef1e-baxzs5


https://wolfram.com/xid/0j0i5s3wwuef1e-fymgk8

Find the slack for inequalities and equalities associated with a minimization problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-qznccv

https://wolfram.com/xid/0j0i5s3wwuef1e-clgvyk

Get the minimum value and the constraint matrices:

https://wolfram.com/xid/0j0i5s3wwuef1e-dlpmme
The slack for inequality constraints is a vector
such that
:

https://wolfram.com/xid/0j0i5s3wwuef1e-omohh9

The slack for the equality constraints is a vector
such that
:

https://wolfram.com/xid/0j0i5s3wwuef1e-6ce0wf

The equality slack is typically a zero vector:

https://wolfram.com/xid/0j0i5s3wwuef1e-nzy9b1

Dual Model Properties (3)

https://wolfram.com/xid/0j0i5s3wwuef1e-69tkxe

https://wolfram.com/xid/0j0i5s3wwuef1e-ioh3sv

The dual problem is to maximize subject to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-zq8y68

https://wolfram.com/xid/0j0i5s3wwuef1e-ppmwpj

The primal minimum value and the dual maximum value coincide because of strong duality:

https://wolfram.com/xid/0j0i5s3wwuef1e-odh1v0

So it has a duality gap of zero. In general, at optimal points:

https://wolfram.com/xid/0j0i5s3wwuef1e-bd8o6k

Construct the dual problem using constraint matrices extracted from the primal problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-yboa1r

https://wolfram.com/xid/0j0i5s3wwuef1e-yf8oy5

Extract the objective and constraint matrices and vectors:

https://wolfram.com/xid/0j0i5s3wwuef1e-xx3fa
The dual problem is to maximize subject to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-bv6gxf

Get the dual maximum value directly using solution properties:

https://wolfram.com/xid/0j0i5s3wwuef1e-fkh0hn

Get the dual maximizer directly using solution properties:

https://wolfram.com/xid/0j0i5s3wwuef1e-qiys6c

Sensitivity Properties (4)
Use "ConstraintSensitivity" to find the change in optimal value due to constraint relaxations:

https://wolfram.com/xid/0j0i5s3wwuef1e-e4vks
The first vector is inequality sensitivity and the second is equality sensitivity:

https://wolfram.com/xid/0j0i5s3wwuef1e-da0bee

Consider new constraints where
is the relaxation:

https://wolfram.com/xid/0j0i5s3wwuef1e-kjywq6
The approximate new optimal value is given by:

https://wolfram.com/xid/0j0i5s3wwuef1e-334tm

Compare to directly solving the relaxed problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-edvgma

Each sensitivity is associated with an inequality or equality constraint:

https://wolfram.com/xid/0j0i5s3wwuef1e-kawg90


https://wolfram.com/xid/0j0i5s3wwuef1e-v6akge
The inequality constraints and their associated sensitivity:

https://wolfram.com/xid/0j0i5s3wwuef1e-jdhr22

The equality constraints and their associated sensitivity:

https://wolfram.com/xid/0j0i5s3wwuef1e-bj3ghv

The change in optimal value due to constraint relaxation is proportional to the sensitivity value:

https://wolfram.com/xid/0j0i5s3wwuef1e-lqvweu
Compute the minimal value and constraint sensitivity:

https://wolfram.com/xid/0j0i5s3wwuef1e-gskuxg

A zero sensitivity will not change the optimal value if the constraint is relaxed:

https://wolfram.com/xid/0j0i5s3wwuef1e-5e19w


https://wolfram.com/xid/0j0i5s3wwuef1e-pmmhrs

A negative sensitivity will decrease the optimal value:

https://wolfram.com/xid/0j0i5s3wwuef1e-dr65nc


https://wolfram.com/xid/0j0i5s3wwuef1e-wf7cfi

A positive sensitivity will increase the optimal value:

https://wolfram.com/xid/0j0i5s3wwuef1e-bqtuvf


https://wolfram.com/xid/0j0i5s3wwuef1e-snki22

The "ConstraintSensitivity" is related to the dual maximizer of the problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-1rlkrd

The inequality sensitivity satisfies
, where
is the dual inequality maximizer:

https://wolfram.com/xid/0j0i5s3wwuef1e-did423

The equality sensitivity satisfies
, where
is the dual equality maximizer:

https://wolfram.com/xid/0j0i5s3wwuef1e-bbxdug

Options (12): Common values & functionality for each option
Method (5)
The method "COIN" uses the COIN library:

https://wolfram.com/xid/0j0i5s3wwuef1e-cvd60c

https://wolfram.com/xid/0j0i5s3wwuef1e-2gjdsy

"CSDP" and "DSDP" reduce to semidefinite optimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-7lkpj2


https://wolfram.com/xid/0j0i5s3wwuef1e-f4npl

"SCS" reduces to conic optimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-eit19p

"PolyhedralApproximation" approximates the objective using linear constraints:

https://wolfram.com/xid/0j0i5s3wwuef1e-rzalw9

For least-squares-type quadratic problems, "CSDP" and "DSDP" will be slower than "COIN" or "SCS":

https://wolfram.com/xid/0j0i5s3wwuef1e-3osylz
Solve the least-squares problem using method "COIN":

https://wolfram.com/xid/0j0i5s3wwuef1e-z4t80v

Solve the problem using method "CSDP":

https://wolfram.com/xid/0j0i5s3wwuef1e-d3ebar


https://wolfram.com/xid/0j0i5s3wwuef1e-zmittt

For least-squares-type quadratic problems, "CSDP", "DSDP" and "PolyhedralApproximation" will be slower than "COIN" or "SCS":

https://wolfram.com/xid/0j0i5s3wwuef1e-cm0p4a
Solve the least-squares problem using method "COIN":

https://wolfram.com/xid/0j0i5s3wwuef1e-g5plqm


https://wolfram.com/xid/0j0i5s3wwuef1e-6mu8c

Solve the problem using method "CSDP":

https://wolfram.com/xid/0j0i5s3wwuef1e-5hjpe1


https://wolfram.com/xid/0j0i5s3wwuef1e-flywyr

Solve the problem using method "PolyhedralApproximation":

https://wolfram.com/xid/0j0i5s3wwuef1e-cwzdk2


https://wolfram.com/xid/0j0i5s3wwuef1e-o438o4

Different methods may give different results for problems with more than one optimal solution:

https://wolfram.com/xid/0j0i5s3wwuef1e-px1t03


https://wolfram.com/xid/0j0i5s3wwuef1e-1ndsgg


https://wolfram.com/xid/0j0i5s3wwuef1e-1c0hav

Minimizing a concave function can be done only using Method->"COIN":

https://wolfram.com/xid/0j0i5s3wwuef1e-iqfpj3

Other methods cannot be used because they require the factorization of the objective matrix:

https://wolfram.com/xid/0j0i5s3wwuef1e-d27uji


PerformanceGoal (1)
Get more accurate results at the cost of higher computation time with "Quality" setting:

https://wolfram.com/xid/0j0i5s3wwuef1e-xmzxo2

https://wolfram.com/xid/0j0i5s3wwuef1e-0m7pdv

Use "Speed" to get results quicker but at the cost of quality:

https://wolfram.com/xid/0j0i5s3wwuef1e-5q9rfc


https://wolfram.com/xid/0j0i5s3wwuef1e-parqyf

Tolerance (2)
A smaller Tolerance setting gives a more precise result:

https://wolfram.com/xid/0j0i5s3wwuef1e-yi6vnd
Find the error between computed and exact minimum value using different Tolerance settings:

https://wolfram.com/xid/0j0i5s3wwuef1e-wu4mi

Visualize the change in minimum value error with respect to tolerance:

https://wolfram.com/xid/0j0i5s3wwuef1e-lp6f28

A smaller Tolerance setting gives a more precise answer, but typically takes longer to compute:

https://wolfram.com/xid/0j0i5s3wwuef1e-uhck4d
A smaller tolerance takes longer:

https://wolfram.com/xid/0j0i5s3wwuef1e-3b6ho3


https://wolfram.com/xid/0j0i5s3wwuef1e-7hr9uc

The tighter tolerance gives a more precise answer:

https://wolfram.com/xid/0j0i5s3wwuef1e-jdk8fq

WorkingPrecision (4)
MachinePrecision is the default for the WorkingPrecision option in QuadraticOptimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-ej2j10


https://wolfram.com/xid/0j0i5s3wwuef1e-iage7n

With WorkingPrecision->Automatic, QuadraticOptimization infers the precision to use from input:

https://wolfram.com/xid/0j0i5s3wwuef1e-odlu94

QuadraticOptimization can compute results using arbitrary-precision numbers:

https://wolfram.com/xid/0j0i5s3wwuef1e-y3ddld

If the specified precision is less than the precision of the input arguments, a message is issued:

https://wolfram.com/xid/0j0i5s3wwuef1e-m32mv5



If a high-precision result cannot be computed, a message is issued and a MachinePrecision result is returned:

https://wolfram.com/xid/0j0i5s3wwuef1e-it6m2h



Applications (29): Sample problems that can be solved with this function
Basic Modeling Transformations (7)
Maximize subject to
. Solve the maximization problem by negating the objective function:

https://wolfram.com/xid/0j0i5s3wwuef1e-w453p3

Negate the primal minimum value to get the corresponding maximum value:

https://wolfram.com/xid/0j0i5s3wwuef1e-t12s65

Minimize by converting the objective function into
:

https://wolfram.com/xid/0j0i5s3wwuef1e-tl1r0t
Construct the objective function by expanding :

https://wolfram.com/xid/0j0i5s3wwuef1e-30yfvt
Since QuadraticOptimization minimizes 1/2 x.q.x+c.x, the matrix q is multiplied by 2:

https://wolfram.com/xid/0j0i5s3wwuef1e-l9gmsk

The minimal value of the original function is recovered as :

https://wolfram.com/xid/0j0i5s3wwuef1e-wsuk9b

QuadraticOptimization directly performs this transformation. Construct the objective function using Inactive to avoid threading:

https://wolfram.com/xid/0j0i5s3wwuef1e-ugddn1

Minimize subject to the constraints
. Transform the objective function to
and solve the problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-in50ye

Recover the original minimum value using transformation :

https://wolfram.com/xid/0j0i5s3wwuef1e-22vcyg


https://wolfram.com/xid/0j0i5s3wwuef1e-h12e8l

Minimize subject to the constraints
. The constraint can be transformed to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-4ayviq

Minimize subject to the constraints
. The constraint
can be interpreted as
. Solve the problem with each constraint:

https://wolfram.com/xid/0j0i5s3wwuef1e-s7o6u8


https://wolfram.com/xid/0j0i5s3wwuef1e-8mvbyb

The optimal solution is the minimum of the two solutions:

https://wolfram.com/xid/0j0i5s3wwuef1e-hjoz42

Minimize subject to
, where
is a non-decreasing function, by instead minimizing
. The primal minimizer
will remain the same for both problems. Consider minimizing
subject to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-iplhrk

https://wolfram.com/xid/0j0i5s3wwuef1e-qbqu1n

The true minimum value can be obtained by applying the function to the primal minimum value:

https://wolfram.com/xid/0j0i5s3wwuef1e-pkk8xy


https://wolfram.com/xid/0j0i5s3wwuef1e-l7wv9y

https://wolfram.com/xid/0j0i5s3wwuef1e-llr50x

Since , the solution is the true solution only if the primal minimum value is greater than 0. The true minimum value can be obtained by applying the function
to the primal minimum value:

https://wolfram.com/xid/0j0i5s3wwuef1e-fztpat

Data-Fitting Problems (7)
Find a linear fit to discrete data by minimizing :

https://wolfram.com/xid/0j0i5s3wwuef1e-f32jej

Construct the factored quadratic matrix using DesignMatrix:

https://wolfram.com/xid/0j0i5s3wwuef1e-v8calj
Find the coefficients of the line:

https://wolfram.com/xid/0j0i5s3wwuef1e-h6szxp


https://wolfram.com/xid/0j0i5s3wwuef1e-r5iwru
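The same construction can be sketched outside the Wolfram system: expanding the least-squares objective ‖m.β-y‖² gives a quadratic form with q=2 mᵀ.m and c=-2 mᵀ.y, so the QP minimizer coincides with the normal-equation (least-squares) solution. A Python sketch with made-up data:

```python
import numpy as np

# Made-up data; fit y ~ b0 + b1*x
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.9, 5.1, 7.0])

m = np.column_stack([np.ones_like(xs), xs])  # design matrix [1, x]

# ||m.beta - y||^2 = 1/2 beta.q.beta + c.beta + y.y  with:
q = 2 * m.T @ m
c = -2 * m.T @ ys

# The unconstrained QP minimizer solves q.beta = -c (the normal equations)
beta_qp = np.linalg.solve(q, -c)
beta_ls = np.linalg.lstsq(m, ys, rcond=None)[0]
print(beta_qp, beta_ls)  # identical coefficients
```

Both routes recover the same intercept and slope, which is why fitting problems reduce naturally to quadratic optimization.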

Find a quadratic fit to discrete data by minimizing :

https://wolfram.com/xid/0j0i5s3wwuef1e-buoc8y

Construct the factored quadratic matrix using DesignMatrix:

https://wolfram.com/xid/0j0i5s3wwuef1e-ij3bl
Find the coefficients of the quadratic curve:

https://wolfram.com/xid/0j0i5s3wwuef1e-7bxdzz


https://wolfram.com/xid/0j0i5s3wwuef1e-xz5fvm

Fit a quadratic curve to discrete data such that the first and last points of the data lie on the curve:

https://wolfram.com/xid/0j0i5s3wwuef1e-znbwtk

Construct the factored quadratic matrix using DesignMatrix:

https://wolfram.com/xid/0j0i5s3wwuef1e-7u326f
The two equality constraints are:

https://wolfram.com/xid/0j0i5s3wwuef1e-375n0k
Find the coefficients of the line:

https://wolfram.com/xid/0j0i5s3wwuef1e-kjy532


https://wolfram.com/xid/0j0i5s3wwuef1e-elazro

Find an interpolating function to noisy data using bases :

https://wolfram.com/xid/0j0i5s3wwuef1e-ujbkq1

The interpolating function will be :

https://wolfram.com/xid/0j0i5s3wwuef1e-c8dotz
Find the coefficients of the interpolating function:

https://wolfram.com/xid/0j0i5s3wwuef1e-osam7t


https://wolfram.com/xid/0j0i5s3wwuef1e-wco61m

Minimize subject to the constraints
:

https://wolfram.com/xid/0j0i5s3wwuef1e-sqawoj

https://wolfram.com/xid/0j0i5s3wwuef1e-yitrad

Compare with the unconstrained minimum of :

https://wolfram.com/xid/0j0i5s3wwuef1e-mn4zhk

Cardinality constrained least squares: minimize such that
has at most
nonzero elements:

https://wolfram.com/xid/0j0i5s3wwuef1e-kqoej1
Let be a decision vector such that if
, then
is nonzero. The decision constraints are:

https://wolfram.com/xid/0j0i5s3wwuef1e-ui45e5
To model constraint when
, choose a large constant
such that
:

https://wolfram.com/xid/0j0i5s3wwuef1e-fk9stg
Solve the cardinality constrained least-squares problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-17mj2m

The subset selection can also be done more efficiently with Fit using regularization. First, find the range of regularization parameters that uses at most
basis functions:

https://wolfram.com/xid/0j0i5s3wwuef1e-4iq2k

Find the nonzero terms in the regularized fit:

https://wolfram.com/xid/0j0i5s3wwuef1e-dhf90x

Find the fit with just these basis terms:

https://wolfram.com/xid/0j0i5s3wwuef1e-ixqa7x

Find the best subset of functions from a candidate set of functions to approximate given data:

https://wolfram.com/xid/0j0i5s3wwuef1e-t36hh6
The approximating function will be :

https://wolfram.com/xid/0j0i5s3wwuef1e-7h4ybn
A maximum of 5 basis functions is to be used in the final approximation:

https://wolfram.com/xid/0j0i5s3wwuef1e-g9joon
The coefficients associated with functions that are not chosen must be zero:

https://wolfram.com/xid/0j0i5s3wwuef1e-bt3aoy
Find the best subset of functions:

https://wolfram.com/xid/0j0i5s3wwuef1e-h89rpt

Compare the resulting approximation with the given data:

https://wolfram.com/xid/0j0i5s3wwuef1e-7u0smb

Classification Problems (2)
Find a plane that separates two groups of 3D points
and
:

https://wolfram.com/xid/0j0i5s3wwuef1e-eb19hk

https://wolfram.com/xid/0j0i5s3wwuef1e-r7tgge

For separation, set 1 must satisfy w.p+b≥1 and set 2 must satisfy w.p+b≤-1. Find the hyperplane by minimizing 1/2 ‖w‖²:

https://wolfram.com/xid/0j0i5s3wwuef1e-30xuxn

The distance between the planes w.p+b=1 and w.p+b=-1 is 2/‖w‖:

https://wolfram.com/xid/0j0i5s3wwuef1e-6g9gpy

The plane separating the two groups of points is:

https://wolfram.com/xid/0j0i5s3wwuef1e-mhfrj4

Plot the plane separating the two datasets:

https://wolfram.com/xid/0j0i5s3wwuef1e-gkifg
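The same maximum-margin idea can be sketched in Python with SciPy on made-up 2D points (the 3D case is identical): minimizing 1/2 ‖w‖² subject to w.p+b≥1 on one set and w.p+b≤-1 on the other maximizes the gap 2/‖w‖ between the two supporting planes.

```python
import numpy as np
from scipy.optimize import minimize

set1 = np.array([[2.0, 2.0], [3.0, 3.0]])   # must satisfy w.p + b >= 1
set2 = np.array([[0.0, 0.0], [-1.0, 0.0]])  # must satisfy w.p + b <= -1

def objective(v):            # v = (w1, w2, b); minimize 1/2 ||w||^2
    return 0.5 * (v[0] ** 2 + v[1] ** 2)

constraints = [
    {"type": "ineq", "fun": lambda v: set1 @ v[:2] + v[2] - 1},
    {"type": "ineq", "fun": lambda v: -(set2 @ v[:2] + v[2]) - 1},
]
res = minimize(objective, x0=np.zeros(3), constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b)  # separating line w.p + b == 0
```

The optimal plane passes midway between the closest points of the two sets, with both margin constraints active there.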

Find a quadratic polynomial that separates two groups of 3D points and
:

https://wolfram.com/xid/0j0i5s3wwuef1e-0kz712

Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:

https://wolfram.com/xid/0j0i5s3wwuef1e-r437k9
For separation, set 1 must satisfy and set 2 must satisfy
. Find the separating surface by minimizing
:

https://wolfram.com/xid/0j0i5s3wwuef1e-rzge5a

The polynomial separating the two groups of points is:

https://wolfram.com/xid/0j0i5s3wwuef1e-1vrepp

Plot the polynomial separating the two datasets:

https://wolfram.com/xid/0j0i5s3wwuef1e-v2nioo

Geometric Problems (3)
Find a point closest to the point
that lies on the planes
and
:

https://wolfram.com/xid/0j0i5s3wwuef1e-urz7a9
Find the point closest to by minimizing
. Use Inactive Plus when constructing the objective:

https://wolfram.com/xid/0j0i5s3wwuef1e-i77t1p


https://wolfram.com/xid/0j0i5s3wwuef1e-er8rrl

Find the distance between two convex polyhedra:

https://wolfram.com/xid/0j0i5s3wwuef1e-687o33

https://wolfram.com/xid/0j0i5s3wwuef1e-8xh5dj

Show the nearest points with the line connecting them:

https://wolfram.com/xid/0j0i5s3wwuef1e-7w3nfo

Find the radius and center
of a minimal enclosing ball that encompasses a given region:

https://wolfram.com/xid/0j0i5s3wwuef1e-1xi3kb
The original minimization problem is to minimize subject to
. The dual of this problem is to maximize
subject to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-k2359q
Solve the dual maximization problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-xqidvw

The center of the minimal enclosing ball is :

https://wolfram.com/xid/0j0i5s3wwuef1e-bdddfw

The radius of the minimal enclosing ball is Sqrt of the maximum value:

https://wolfram.com/xid/0j0i5s3wwuef1e-5v0nv9


https://wolfram.com/xid/0j0i5s3wwuef1e-5k4m91

The minimal enclosing ball can be found efficiently using BoundingRegion:

https://wolfram.com/xid/0j0i5s3wwuef1e-1dvvd5

Investment Problems (3)
Find the number of stocks to buy from four stocks, such that a minimum $1000 dividend is received and risk is minimized. The expected return value and the covariance matrix associated with the stocks are:

https://wolfram.com/xid/0j0i5s3wwuef1e-mo41fq
The unit price for the four stocks is $1. Each stock can be allocated a maximum of $2500:

https://wolfram.com/xid/0j0i5s3wwuef1e-upnsb3
The investment must yield a minimum of $1000:

https://wolfram.com/xid/0j0i5s3wwuef1e-9enf9g
A negative number of stocks cannot be bought:

https://wolfram.com/xid/0j0i5s3wwuef1e-3ymyhu
The total amount to spend on each stock is found by minimizing the risk given by :

https://wolfram.com/xid/0j0i5s3wwuef1e-ib8vjh

The total investment to get a minimum of $1000 is:

https://wolfram.com/xid/0j0i5s3wwuef1e-q1hefe
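The risk-minimization setup can be sketched in Python with SciPy; the return vector and covariance matrix below are hypothetical stand-ins for the notebook's data, with amounts expressed in thousands of dollars:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: expected dividend per dollar invested in each of the
# four stocks, and a symmetric positive definite covariance (risk) matrix.
r = np.array([0.08, 0.10, 0.12, 0.15])
sigma = np.diag([0.02, 0.05, 0.08, 0.12]) + 0.01

risk = lambda x: x @ sigma @ x                           # risk to minimize
cons = [{"type": "ineq", "fun": lambda x: r @ x - 1.0}]  # dividend >= 1.0 (k$)
bounds = [(0.0, 2.5)] * 4                                # at most 2.5 (k$) each

res = minimize(risk, x0=np.full(4, 1.0), bounds=bounds, constraints=cons)
print(res.x, r @ res.x)  # allocation and its expected dividend
```

At the optimum the dividend constraint is active: buying more than needed only adds risk.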

Find the number of stocks to buy from four stocks, with an option to short-sell such that a minimum dividend of $1000 is received and the overall risk is minimized:

https://wolfram.com/xid/0j0i5s3wwuef1e-ivredy
The capital constraint and return on investment constraints are:

https://wolfram.com/xid/0j0i5s3wwuef1e-tqkgpn
A short-sale option allows a stock to be sold without owning it. The optimal number of stocks to buy or short-sell is found by minimizing the risk given by the objective:

https://wolfram.com/xid/0j0i5s3wwuef1e-mmowbq

The second stock can be short-sold. The total investment to get a minimum of $1000 due to the short-selling is:

https://wolfram.com/xid/0j0i5s3wwuef1e-xu4dxk

Without short-selling, the initial investment will be significantly greater:

https://wolfram.com/xid/0j0i5s3wwuef1e-u5sjbh

Find the best combination of six stocks to invest in out of a possible 20 candidate stocks, so as to maximize return while minimizing risk:

https://wolfram.com/xid/0j0i5s3wwuef1e-ht7vts
Let be the percentage of total investment made in stock
. The return is given by
, where
is a vector of the expected return value of each individual stock:

https://wolfram.com/xid/0j0i5s3wwuef1e-cm5iu
Let be a decision vector such that if
, then that stock is bought. Six stocks have to be chosen:

https://wolfram.com/xid/0j0i5s3wwuef1e-e0cxlu
The percentage of investment must be greater than 0 and must add to 1:

https://wolfram.com/xid/0j0i5s3wwuef1e-f7dzdq
Find the optimal combination of stocks that minimizes the risk given by and maximizes return:

https://wolfram.com/xid/0j0i5s3wwuef1e-u37cde
The optimal combination of stocks is:

https://wolfram.com/xid/0j0i5s3wwuef1e-h3uskt

The percentages of investment to put into the respective stocks are:

https://wolfram.com/xid/0j0i5s3wwuef1e-9ptfb7

Portfolio Optimization (1)
Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:

https://wolfram.com/xid/0j0i5s3wwuef1e-kut0cq
Let be the percentage of total investment made in stock
. The return is given by
, where
is a vector of expected return value of each individual stock:

https://wolfram.com/xid/0j0i5s3wwuef1e-2z1lpt
The risk is given by , and
is a risk-aversion parameter:

https://wolfram.com/xid/0j0i5s3wwuef1e-get9k6
The objective is to maximize return while minimizing risk for a specified risk-aversion parameter :

https://wolfram.com/xid/0j0i5s3wwuef1e-61n2dz
The percentage of investment must be greater than 0 and must add to 1:

https://wolfram.com/xid/0j0i5s3wwuef1e-vgdxhc
Compute the returns and corresponding risk for a range of risk-aversion parameters:

https://wolfram.com/xid/0j0i5s3wwuef1e-7wefwc
The optimal over a range of
gives an upper-bound envelope on the tradeoff between return and risk:

https://wolfram.com/xid/0j0i5s3wwuef1e-bnbuge

Compute the percentage of investment for a specified number of risk-aversion parameters:

https://wolfram.com/xid/0j0i5s3wwuef1e-ke669a
Increasing the risk-aversion parameter leads to stock diversification to reduce the risk:

https://wolfram.com/xid/0j0i5s3wwuef1e-4lr4vy

Increasing the risk-aversion parameter leads to a reduced expected return on investment:

https://wolfram.com/xid/0j0i5s3wwuef1e-uhbget

Trajectory Optimization Problems (2)
Minimize subject to
. The minimizing function integral can be approximated using the trapezoidal rule. The discretized objective function will be
:

https://wolfram.com/xid/0j0i5s3wwuef1e-qd2680
The constraint can be discretized using finite differences:

https://wolfram.com/xid/0j0i5s3wwuef1e-da08ge
The constraints can be represented using the Indexed function:

https://wolfram.com/xid/0j0i5s3wwuef1e-9jdpkq
The constraints can be discretized using finite differences, and only the first and last rows are used:

https://wolfram.com/xid/0j0i5s3wwuef1e-plkprf
Solve the discretized trajectory problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-191vca

Convert the discretized result into an InterpolatingFunction:

https://wolfram.com/xid/0j0i5s3wwuef1e-et012k
Compare the result with the analytic solution:

https://wolfram.com/xid/0j0i5s3wwuef1e-q900pk

Find the shortest path between two points while avoiding obstacles. Specify the obstacle:

https://wolfram.com/xid/0j0i5s3wwuef1e-1bblif
Extract the half-spaces that form the convex obstacle:

https://wolfram.com/xid/0j0i5s3wwuef1e-emvvkg
Specify the start and end points of the path:

https://wolfram.com/xid/0j0i5s3wwuef1e-fhodm2
The path can be discretized using points. Let
represent the position vector:

https://wolfram.com/xid/0j0i5s3wwuef1e-owypfs
The objective is to minimize . Let
. The objective is transformed to
:

https://wolfram.com/xid/0j0i5s3wwuef1e-gwwelr
Specify the end point constraints:

https://wolfram.com/xid/0j0i5s3wwuef1e-inywna
The distance between any two subsequent points should not be too large:

https://wolfram.com/xid/0j0i5s3wwuef1e-7y6188
A point is outside the object if at least one element of
is less than zero. To enforce this constraint, let
be a decision vector and
be the
element of
such that
, then
and
is large enough such that
:

https://wolfram.com/xid/0j0i5s3wwuef1e-ykr4qm
Find the minimum distance path around the obstacle:

https://wolfram.com/xid/0j0i5s3wwuef1e-tvt1pd

https://wolfram.com/xid/0j0i5s3wwuef1e-qnwnww

To avoid potential crossings at the edges, the region can be inflated and the problem solved again:

https://wolfram.com/xid/0j0i5s3wwuef1e-0njwsf
Get the new constraints for avoiding the obstacles:

https://wolfram.com/xid/0j0i5s3wwuef1e-uoayku

https://wolfram.com/xid/0j0i5s3wwuef1e-xx3mdi
Extract and display the new path:

https://wolfram.com/xid/0j0i5s3wwuef1e-hbptou

Optimal Control Problems (2)

https://wolfram.com/xid/0j0i5s3wwuef1e-cht3cu
The integral can be discretized using the trapezoidal method:

https://wolfram.com/xid/0j0i5s3wwuef1e-7tms3f

https://wolfram.com/xid/0j0i5s3wwuef1e-p02ppi
The time derivative in the constraint is discretized using finite differences:

https://wolfram.com/xid/0j0i5s3wwuef1e-no40t5
The end-condition constraints can be specified using Indexed:

https://wolfram.com/xid/0j0i5s3wwuef1e-6j23oq
Solve the discretized problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-mq52up
Convert the discretized result into an InterpolatingFunction:

https://wolfram.com/xid/0j0i5s3wwuef1e-b8y59x

https://wolfram.com/xid/0j0i5s3wwuef1e-rvuj4l


https://wolfram.com/xid/0j0i5s3wwuef1e-udvoru

Minimize the objective subject to a differential equation constraint, end conditions and bounds on the control variable:

https://wolfram.com/xid/0j0i5s3wwuef1e-h4i50g
The integral can be discretized using the trapezoidal method:

https://wolfram.com/xid/0j0i5s3wwuef1e-l43mib

https://wolfram.com/xid/0j0i5s3wwuef1e-hxcd9v
The time derivative in the constraint is discretized using finite differences:

https://wolfram.com/xid/0j0i5s3wwuef1e-za6yv4
The end-condition constraints can be specified using Indexed:

https://wolfram.com/xid/0j0i5s3wwuef1e-3ck2bn
The constraint on the control variable is:

https://wolfram.com/xid/0j0i5s3wwuef1e-saoil9
Solve the discretized problem:

https://wolfram.com/xid/0j0i5s3wwuef1e-lem1yj
Convert the discretized result into an InterpolatingFunction:

https://wolfram.com/xid/0j0i5s3wwuef1e-mwhgqi
The control variable is now restricted between the specified lower bound and 5:

https://wolfram.com/xid/0j0i5s3wwuef1e-fg8rxj


https://wolfram.com/xid/0j0i5s3wwuef1e-4bu13q

Sequential Quadratic Optimization (2)
Minimize a nonlinear function subject to nonlinear constraints. The minimization can be done by approximating the objective with a quadratic model built from its gradient and Hessian, and the constraints by their linearizations. This leads to a quadratic minimization subproblem that can be solved iteratively. Consider a case with a nonlinear objective and a nonlinear constraint:

https://wolfram.com/xid/0j0i5s3wwuef1e-papad1
The gradient and Hessian of the minimizing function are:

https://wolfram.com/xid/0j0i5s3wwuef1e-5xgm0v
The gradient of the constraints is:

https://wolfram.com/xid/0j0i5s3wwuef1e-v5ha8x
The subproblem is to find the step by minimizing the quadratic model subject to the linearized constraints:

https://wolfram.com/xid/0j0i5s3wwuef1e-db44j1
Iterate starting with an initial guess. The next iterate is obtained by taking a step along the subproblem solution, scaled by a step length chosen so that the constraints remain satisfied:

https://wolfram.com/xid/0j0i5s3wwuef1e-o2o4am
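The iteration can be sketched in NumPy on a small equality-constrained example (an illustration, not the documented code cell; the problem, the fixed step length 0.5, and the helper name are assumptions made here for concreteness):

```python
import numpy as np

# SQP sketch: minimize f(x) = x0^2 + x1^2 subject to c(x) = x0*x1 - 1 = 0.
# Each iteration solves the KKT system of the quadratic subproblem
#   minimize grad(f).d + d.H.d/2   subject to   grad(c).d + c = 0,
# then takes a damped step x <- x + alpha*d (alpha fixed at 0.5 here;
# the documented example chooses the step length adaptively).
def sqp_step(x):
    g = 2 * x                                  # gradient of f
    H = 2 * np.eye(2)                          # Hessian of f
    c = x[0] * x[1] - 1                        # constraint value
    J = np.array([x[1], x[0]])                 # constraint gradient
    K = np.block([[H, J[:, None]], [J[None, :], np.zeros((1, 1))]])
    return np.linalg.solve(K, np.r_[-g, -c])[:2]

x = np.array([2.0, 0.5])
for _ in range(60):
    x = x + 0.5 * sqp_step(x)
# The iterates approach the constrained minimizer (1, 1).
assert np.allclose(x, [1.0, 1.0], atol=1e-6)
```

Each subproblem here is a small quadratic optimization with a linear equality constraint, which is why its KKT conditions reduce to one linear solve.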

Visualize the convergence of the result. The final result is the green point:

https://wolfram.com/xid/0j0i5s3wwuef1e-8vimlt


https://wolfram.com/xid/0j0i5s3wwuef1e-msks02
The gradient and Hessian of the minimizing function are:

https://wolfram.com/xid/0j0i5s3wwuef1e-yv3stl
The gradient of the constraints is:

https://wolfram.com/xid/0j0i5s3wwuef1e-lux6l0
The subproblem is to minimize the quadratic model subject to the linearized constraints:

https://wolfram.com/xid/0j0i5s3wwuef1e-jwzjbn
Iterate starting with an initial guess:

https://wolfram.com/xid/0j0i5s3wwuef1e-xj29lt

Visualize the convergence of the result. The final result is the green point:

https://wolfram.com/xid/0j0i5s3wwuef1e-y3znv7

Properties & Relations (9)
Properties of the function, and connections to other functions
QuadraticOptimization gives the global minimum of the objective function:

https://wolfram.com/xid/0j0i5s3wwuef1e-jo7wi2

Visualize the objective function:

https://wolfram.com/xid/0j0i5s3wwuef1e-550srr

The minimizer can be in the interior or at the boundary of the feasible region:

https://wolfram.com/xid/0j0i5s3wwuef1e-3bxdfx

Minimize gives global exact results for quadratic optimization problems:

https://wolfram.com/xid/0j0i5s3wwuef1e-vlbapq

NMinimize can be used to obtain approximate results using global methods:

https://wolfram.com/xid/0j0i5s3wwuef1e-9quyrf

FindMinimum can be used to obtain approximate results using local methods:

https://wolfram.com/xid/0j0i5s3wwuef1e-3ucfdt

LinearOptimization is a special case of QuadraticOptimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-ff0sko


https://wolfram.com/xid/0j0i5s3wwuef1e-3gi3in

In matrix-vector form, the quadratic term is set to 0:

https://wolfram.com/xid/0j0i5s3wwuef1e-hf45ps

SecondOrderConeOptimization is a generalization of QuadraticOptimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-vey7ss

https://wolfram.com/xid/0j0i5s3wwuef1e-ik1xs3

Use an auxiliary variable t, and minimize t with the additional constraint that the quadratic objective is at most t:

https://wolfram.com/xid/0j0i5s3wwuef1e-iiyn0s
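The auxiliary-variable reformulation rests on the standard fact that a convex quadratic bound is representable as a second-order cone constraint. With a factorization Q = L Lᵀ, one textbook identity (stated here for reference, not taken from the documented cell) is:

```latex
\tfrac{1}{2}\,x^{\mathsf T} Q x + c^{\mathsf T} x \le t
\quad\Longleftrightarrow\quad
\left\|
\begin{pmatrix}
2\,L^{\mathsf T} x \\
2\,(t - c^{\mathsf T} x) - 1
\end{pmatrix}
\right\|_2 \le 2\,(t - c^{\mathsf T} x) + 1 ,
```

which follows from the identity (s+1)² - (s-1)² = 4s applied with s = 2(t - cᵀx), since ‖2Lᵀx‖² = 4 xᵀQx.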

SemidefiniteOptimization is a generalization of QuadraticOptimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-2jmqnf

https://wolfram.com/xid/0j0i5s3wwuef1e-ld7uqm

Use an auxiliary variable t, and minimize t with the additional constraint that the quadratic objective is at most t:

https://wolfram.com/xid/0j0i5s3wwuef1e-fmpsev

ConicOptimization is a generalization of QuadraticOptimization:

https://wolfram.com/xid/0j0i5s3wwuef1e-jjgpir

https://wolfram.com/xid/0j0i5s3wwuef1e-z5w02s

Use an auxiliary variable t, and minimize t with the additional constraint that the quadratic objective is at most t:

https://wolfram.com/xid/0j0i5s3wwuef1e-4odka

Possible Issues (6)
Common pitfalls and unexpected behavior
Constraints specified using strict inequalities may not be satisfied for certain methods:

https://wolfram.com/xid/0j0i5s3wwuef1e-52nqk0
The reason is that QuadraticOptimization solves the relaxed problem with non-strict inequalities:

https://wolfram.com/xid/0j0i5s3wwuef1e-qniz17
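The underlying issue can be seen in a dependency-free sketch (an illustration, not the documented cell): with a strict constraint the infimum need not be attained at any feasible point, so a solver returns the boundary point of the relaxed problem.

```python
# For f(x) = x^2 with the strict constraint x > 1, the infimum is 1,
# but it is attained only at the excluded boundary point x = 1.
xs = [1 + 10.0**-k for k in range(1, 8)]   # feasible points approaching 1
vals = [x * x for x in xs]
assert all(v > 1 for v in vals)            # every feasible value exceeds 1
assert min(vals) - 1 < 1e-6                # yet values get arbitrarily close
```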

The minimum value of an empty set or infeasible problem is defined to be ∞:

https://wolfram.com/xid/0j0i5s3wwuef1e-breti6


The minimizer is Indeterminate:

https://wolfram.com/xid/0j0i5s3wwuef1e-uacgr


The minimum value for an unbounded set or unbounded problem is -∞:

https://wolfram.com/xid/0j0i5s3wwuef1e-hv52az


The minimizer is Indeterminate:

https://wolfram.com/xid/0j0i5s3wwuef1e-6t11l3


Certain solution properties are not available for symbolic input:

https://wolfram.com/xid/0j0i5s3wwuef1e-han5rw

https://wolfram.com/xid/0j0i5s3wwuef1e-wf88ss


https://wolfram.com/xid/0j0i5s3wwuef1e-jka95d

Dual-related solution properties for mixed-integer problems may not be available:

https://wolfram.com/xid/0j0i5s3wwuef1e-4qteew

Constraints with complex values need to be specified using vector inequalities:

https://wolfram.com/xid/0j0i5s3wwuef1e-gw60b6

Just using GreaterEqual will not work:

https://wolfram.com/xid/0j0i5s3wwuef1e-n9o3



Text
Wolfram Research (2019), QuadraticOptimization, Wolfram Language function, https://reference.wolfram.com/language/ref/QuadraticOptimization.html (updated 2020).
CMS
Wolfram Language. 2019. "QuadraticOptimization." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2020. https://reference.wolfram.com/language/ref/QuadraticOptimization.html.
APA
Wolfram Language. (2019). QuadraticOptimization. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/QuadraticOptimization.html
BibTeX
@misc{reference.wolfram_2025_quadraticoptimization, author="Wolfram Research", title="{QuadraticOptimization}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/QuadraticOptimization.html}", note={Accessed: 03-May-2025}}
BibLaTeX
@online{reference.wolfram_2025_quadraticoptimization, organization={Wolfram Research}, title={QuadraticOptimization}, year={2020}, url={https://reference.wolfram.com/language/ref/QuadraticOptimization.html}, note={Accessed: 03-May-2025}}