---
title: "LinearFractionalOptimization"
language: "en"
type: "Symbol"
summary: "LinearFractionalOptimization[f, cons, vars] finds values of variables vars that minimize the linear fractional objective f subject to linear constraints cons. LinearFractionalOptimization[{\\[Alpha], \\[Beta], \\[Gamma], \\[Delta]}, \\ {a, b}] finds a vector x that minimizes the linear fractional function (\\[Alpha] . x + \\[Beta])/(\\[Gamma] . x + \\[Delta]) subject to the linear inequality constraints a . x + b \\[SucceedsEqual] 0. LinearFractionalOptimization[{\\[Alpha], \\[Beta], \\[Gamma], \\[Delta]}, \\ {a, b}, {aeq, beq}] includes the linear equality constraints aeq . x + beq == 0. LinearFractionalOptimization[{\\[Alpha], \\[Beta], \\[Gamma], \\[Delta]}, \\ ..., {dom1, dom2, ...}] takes xi to be in the domain domi, where domi is Integers or Reals. LinearFractionalOptimization[...,  prop] specifies what solution property  prop should be returned."
keywords: 
- LFP
- LFO
- linear fractional programming
- linear fractional optimization
- convex optimization
- convex programming
canonical_url: "https://reference.wolfram.com/language/ref/LinearFractionalOptimization.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Convex Optimization"
    link: "https://reference.wolfram.com/language/guide/ConvexOptimization.en.md"
  - 
    title: "Optimization"
    link: "https://reference.wolfram.com/language/guide/Optimization.en.md"
  - 
    title: "Symbolic Vectors, Matrices and Arrays"
    link: "https://reference.wolfram.com/language/guide/SymbolicArrays.en.md"
related_functions: 
  - 
    title: "LinearOptimization"
    link: "https://reference.wolfram.com/language/ref/LinearOptimization.en.md"
  - 
    title: "QuadraticOptimization"
    link: "https://reference.wolfram.com/language/ref/QuadraticOptimization.en.md"
  - 
    title: "FindMinimum"
    link: "https://reference.wolfram.com/language/ref/FindMinimum.en.md"
  - 
    title: "NMinimize"
    link: "https://reference.wolfram.com/language/ref/NMinimize.en.md"
  - 
    title: "Minimize"
    link: "https://reference.wolfram.com/language/ref/Minimize.en.md"
related_tutorials: 
  - 
    title: "Optimization Method Framework"
    link: "https://reference.wolfram.com/language/OptimizationMethodFramework/tutorial/OptimizationMethodFramework.en.md"
---
# LinearFractionalOptimization

LinearFractionalOptimization[f, cons, vars] finds values of variables vars that minimize the linear fractional objective f subject to linear constraints cons.

LinearFractionalOptimization[{α, β, γ, δ}, {a, b}] finds a vector $x$ that minimizes the linear fractional function $(\alpha .x+\beta )/(\gamma .x+\delta )$ subject to the linear inequality constraints $a.x+b\succeq 0$. 

LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {aeq, beq}] includes the linear equality constraints $a_{eq}.x+b_{eq}=0$.

LinearFractionalOptimization[{α, β, γ, δ}, …, {dom1, dom2, …}] takes $x_i$ to be in the domain domi, where domi is Integers or Reals. 

LinearFractionalOptimization[…, "prop"] specifies what solution property "prop" should be returned.

## Details and Options

* Linear fractional optimization is also known as linear fractional programming (LFP) and, when integer variables are present, mixed-integer linear fractional programming (MILFP).

* Linear fractional optimization is a convex optimization problem that can be solved globally and efficiently with real, integer or complex variables.

* Linear fractional optimization finds $x\in \mathbb{R}^n$ that solves the primal problem:

|     |     |
| --- | --- |
| minimize | $(\alpha .x+\beta )/(\gamma .x+\delta )$ |
| subject to constraints | $a.x+b\succeq 0,\ a_{eq}.x+b_{eq}=0$ |
| where | $\beta ,\delta \in \mathbb{R},\ \alpha ,\gamma \in \mathbb{R}^{n},\ a\in \mathbb{R}^{k\times n},\ b\in \mathbb{R}^{k},\ a_{eq}\in \mathbb{R}^{l\times n},\ b_{eq}\in \mathbb{R}^{l}$ |
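* Background on why such problems are tractable (standard material, not stated on this page): the Charnes–Cooper transformation. Assuming the denominator $\gamma .x+\delta$ is positive on the feasible set, substituting $t=1/(\gamma .x+\delta )$ and $y=t\,x$ turns the fractional program into an ordinary linear program:

|     |     |
| --- | --- |
| minimize | $\alpha .y+\beta \,t$ |
| subject to constraints | $a.y+b\,t\succeq 0,\ a_{eq}.y+b_{eq}\,t=0,\ \gamma .y+\delta \,t=1,\ t\geq 0$ |
| where | the minimizer of the original problem is recovered as $x^*=y^*/t^*$ |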

* Mixed-integer linear fractional optimization finds $x\in \mathbb{R}^{n_r}$ and $y\in \mathbb{Z}^{n_i}$ that solve the problem:

|     |     |
| --- | --- |
| minimize | $\left(\alpha _r.x+\alpha _i.y+\beta \right)/\left(\gamma _r.x+\gamma _i.y+\delta \right)$ |
| subject to constraints | $a_r.x+a_i.y+b\succeq 0,\ a_{eq_r}.x+a_{eq_i}.y+b_{eq}=0$ |
| where | $\beta ,\delta \in \mathbb{R},\ \alpha _r,\gamma _r\in \mathbb{R}^{n_r},\ \alpha _i,\gamma _i\in \mathbb{R}^{n_i},\ a_r\in \mathbb{R}^{k\times n_r},\ a_i\in \mathbb{R}^{k\times n_i},\ b\in \mathbb{R}^{k},\ a_{eq_r}\in \mathbb{R}^{l\times n_r},\ a_{eq_i}\in \mathbb{R}^{l\times n_i},\ b_{eq}\in \mathbb{R}^{l}$ |

* When the objective function is real valued, ``LinearFractionalOptimization`` solves problems with $x\in \mathbb{C}^n$ by internally converting to real variables $x= x_r+i x_i$, where $x_r\in \mathbb{R}^n$ and $x_i\in \mathbb{R}^n$.


* The variable specification ``vars`` should be a list with elements giving variables in one of the following forms:

|     |     |
| --- | --- |
| v   | variable with name $v$ and dimensions inferred |
| v∈Reals | real scalar variable |
| v∈Integers | integer scalar variable |
| v∈Complexes | complex scalar variable |
| v∈ℛ | vector variable restricted to the geometric region $\mathcal{R}$ |
| v∈Vectors[n, dom] | vector variable in $\mathbb{R}^n$ or $\mathbb{Z}^n$ |
| v∈Matrices[{m, n}, dom] | matrix variable in $\mathbb{R}^{m\times n}$ or $\mathbb{Z}^{m\times n}$ |

* The constraints ``cons`` can be specified by:

|                    |                                                           |                                 |
| ------------------ | --------------------------------------------------------- | ------------------------------- |
| LessEqual          | $a.x+b\leq 0$               | scalar inequality               |
| GreaterEqual       | $a.x+b\geq 0$               | scalar inequality               |
| VectorLessEqual    | $a.x+b\preceq 0$            | vector inequality               |
| VectorGreaterEqual | $a.x+b\succeq 0$            | vector inequality               |
| Equal              | $a.x+b=0$                   | scalar or vector equality       |
| Element            | $x \in \text{\textit{dom}}$ | convex domain or region element |

* With ``LinearFractionalOptimization[f, cons, vars]``, parameter equations of the form ``par == val``, where ``par`` is not in ``vars`` and ``val`` is numerical or an array with numerical values, may be included in the constraints to define parameters used in ``f`` or ``cons``.
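A minimal sketch of this (the parameter name ``amat`` is arbitrary, introduced only for illustration):

```wl
(* amat is defined by a parameter equation inside the constraints and then used in them. *)
LinearFractionalOptimization[(x + y - 5)/(3 x + 2 y + 15),
  {amat == {{3, 4}, {1, 0}}, amat . {x, y} \[VectorGreaterEqual] {12, 2}}, {x, y}]
(* equivalent to stating the constraints 3 x + 4 y >= 12 and x >= 2 directly *)
```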

* The primal minimization problem has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints.

* The Lagrangian dual problem for linear fractional optimization is given by:

|     |     |
| --- | --- |
| maximize | $\zeta$ |
| subject to constraints | $\delta \zeta +b.\lambda +b_{eq}.\nu \leq \beta ,\ \zeta \gamma +a^{\mathsf{T}}.\lambda +a_{eq}^{\mathsf{T}}.\nu =\alpha ,\ \lambda \succeq 0$ |
| where | $\zeta ,\beta ,\delta \in \mathbb{R},\ \alpha ,\gamma \in \mathbb{R}^{n},\ a\in \mathbb{R}^{k\times n},\ b,\lambda \in \mathbb{R}^{k},\ a_{eq}\in \mathbb{R}^{l\times n},\ b_{eq},\nu \in \mathbb{R}^{l}$ |

* For linear fractional optimization, strong duality always holds, meaning that if there is a solution to the primal minimization problem, then there is a solution to the dual maximization problem, and the dual maximum value is equal to the primal minimum value.
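Strong duality can be checked directly by requesting both optimal values at once (a sketch on a small problem; both property names are documented on this page):

```wl
(* For a feasible linear fractional program the primal minimum and dual maximum agree. *)
LinearFractionalOptimization[(x + y - 5)/(3 x + 2 y + 15),
  {3 x + 4 y >= 12, x >= 2}, {x, y},
  {"PrimalMinimumValue", "DualMaximumValue"}]
(* the two returned values are equal *)
```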

* The possible solution properties ``"prop"`` include:

"PrimalMinimizer"	{\!\(
\*SubsuperscriptBox[\(v\), \(1\), \(\*\)], \[Ellipsis]\)}	a list of variable values that minimize the objective function
      	"PrimalMinimizerRules"	{Subscript[v, 1]->Subsuperscript[v, 1, \*],\[Ellipsis]}	values for the variables vars={Subscript[v, 1],\[Ellipsis]} that minimize f
      	"PrimalMinimizerVector"	x^\*	the vector that minimizes (\[Alpha].x+\[Beta])/(\[Gamma].x+\[Delta])
      	"PrimalMinimumValue"	(\[Alpha].x^\*+\[Beta])/(\[Gamma].x^\*+\[Delta])	the minimum value f^\* 
      	"DualMaximizer"	{\[Zeta]^\*,\[Lambda]^\*,\[Nu]^\*}	the vectors that maximize \[Zeta]
      	"DualMaximumValue"	\[Zeta]^\*	the dual maximum value
      	"DualityGap"	\[Zeta]^\*- c.x^\*	the difference between the dual and primal optimal values (0 because of strong duality)
      	"Slack"	{a.x^\*+b,Subscript[a, eq].x^\*+Subscript[b, eq]}	the constraint slack vector
      	"ConstraintSensitivity"	{\[PartialD]f^\*/\[PartialD]b,\[PartialD]f^\*/\[PartialD]Subscript[b, eq]}	sensitivity of f^\* to constraint perturbations
      	"LinearFractionalObjectiveCoefficients"	{\[Alpha],\[Beta],\[Gamma],\[Delta]}	the coefficients in f
      	"LinearInequalityConstraints"	{a,b}	the linear inequality constraint matrix and vector
      	"LinearEqualityConstraints"	{Subscript[a, eq],Subscript[b, eq]}	the linear equality constraint matrix and vector
      	{"Subscript[prop, 1]","Subscript[prop, 2]",\[Ellipsis]}	 	several solution properties

* The dual maximum value $\zeta$ is related to $\lambda$ and $\nu$ through $\zeta \gamma +a^{\mathsf{T}}.\lambda +a_{eq}^{\mathsf{T}}.\nu =\alpha$.

* The following options may be given:

|                   |                   |                                               |
| ----------------- | ----------------- | --------------------------------------------- |
| MaxIterations     | Automatic         | maximum number of iterations to use           |
| Method            | Automatic         | the method to use                             |
| PerformanceGoal   | \$PerformanceGoal | aspects of performance to try to optimize     |
| Tolerance         | Automatic         | the tolerance to use for internal comparisons |
| WorkingPrecision  | Automatic         | precision to use in internal computations     |

* The option ``Method -> method`` may be used to specify the method to use. Available methods include:

|                  |                                                          |
| ---------------- | -------------------------------------------------------- |
| Automatic        | choose the method automatically                          |
| "Simplex"        | simplex method                                           |
| "RevisedSimplex" | revised simplex method                                   |
| "InteriorPoint"  | interior point method (machine precision only)           |
| "CLP"            | COIN library linear programming (machine precision only) |
| "MOSEK"          | commercial MOSEK linear optimization solver              |
| "Gurobi"         | commercial Gurobi linear optimization solver             |
| "Xpress"         | commercial Xpress linear optimization solver             |

* With ``WorkingPrecision -> Automatic``, the precision is taken automatically from the precision of the input arguments unless a method is specified that only works with machine precision, in which case machine precision is used.

* If it is known a priori that the denominator is positive (or negative) at the solution sought, then including a constraint in ``a`` and ``b`` that enforces this sign of the denominator allows ``LinearFractionalOptimization`` to compute the solution more quickly.
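A minimal sketch of that advice (the margin 1 on the right-hand side is an arbitrary positive constant chosen for illustration):

```wl
(* The denominator 3 x + 2 y + 15 is known to be positive at the solution sought,
   so an equivalent linear positivity constraint is included among the inequalities. *)
LinearFractionalOptimization[(x + y - 5)/(3 x + 2 y + 15),
  {3 x + 4 y >= 12, x >= 2, 3 x + 2 y + 15 >= 1}, {x, y}]
(* same minimizer as without the extra constraint *)
```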

---

## Examples (71)

### Basic Examples (3)

Minimize $(x+ y)/(1+3x+2 y)$ subject to the constraints $x+y\geq 2, x\geq 0, y\geq 0$ :

```wl
In[1]:= res = LinearFractionalOptimization[(x + y) / (1 + 3x + 2y), {x + y ≥ 2, x ≥ 0, y ≥ 0}, {x, y}]

Out[1]= {x -> 2, y -> 0}
```

Show the minimizer on a plot of the function $(x+ y)/(1+3x+2 y)$ over the feasible region:

```wl
In[2]:=
obj = (x + y) / (1 + 3x + 2y);
Show[Plot3D[obj, {x, 0, 4}, {y, 0, 4}, ...], Graphics3D[{Blue, PointSize[0.05], Point[{x, y, obj} /. res]}]]

Out[2]= [image]
```

---

Minimize $(5x-3y+2)/(4x+y+5)$ subject to $x+2y\leq 4,x+3y\geq 6,x\geq 0,y\geq 0$:

```wl
In[1]:= LinearFractionalOptimization[(5x - 3y + 2) / (4x + y + 5), {x + 2y ≤ 4, x + 3y ≥ 6, x ≥ 0, y ≥ 0}, {x, y}]

Out[1]= {x -> 0, y -> 2}
```

Specify the input in matrix-vector form:

```wl
In[2]:=
{α, β, γ, δ} = {{5, -3}, 2, {4, 1}, 5};
{a, b} = {{{-1, -2}, {1, 3}, {1, 0}, {0, 1}}, {4, -6, 0, 0}};
```

Solve using matrix-vector form:

```wl
In[3]:= LinearFractionalOptimization[{α, β, γ, δ}, {a, b}]

Out[3]= {0, 2}
```

---

Minimize $(5x-3y+2)/(4x+y+5)$ subject to $x+2y\leq 4,x+3y\geq 6,y\leq 2.5,x\in \mathbb{Z},y\in \mathbb{R}$:

```wl
In[1]:= LinearFractionalOptimization[(5x - 3y + 2) / (4x + y + 5), {x + 2y ≤ 4, x + 3y ≥ 6, y ≤ 2.5}, {x∈Integers, y∈Reals}]

Out[1]= {x -> -1, y -> 2.33333}
```

Use the equivalent matrix-vector representation:

```wl
In[2]:=
{α, β, γ, δ} = {{5, -3}, 2, {4, 1}, 5};
{a, b} = {{{-1, -2}, {1, 3}, {0, -1}}, {4, -6, 2.5}};
LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {Integers, Reals}]

Out[2]= {-1, 2.33333}
```

### Scope (33)

#### Basic Uses (17)

Minimize $(x+y-5)/(3x+2y+15)$ subject to the constraints $3x+4y\geq 12$ and $x\geq 2$ :

```wl
In[1]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y ≥ 12, x ≥ 2}, {x, y}]

Out[1]= {x -> 2, y -> (3/2)}
```

---

Minimize $(x+y-5)/(3x+2y+15)$ subject to the constraints $3x+4y=7$ and $x\geq 2$ :

```wl
In[1]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y == 7, x ≥ 2}, {x, y}]

Out[1]= {x -> 2, y -> (1/4)}
```

---

Minimize $-(x+y+5)/(3x+2y+15)$ subject to $3x+y\leq 6,3x+4y=-2$ and bounds $-1\leq x\leq 1,-2\leq y\leq 4$ :

```wl
In[1]:= LinearFractionalOptimization[-(x + y + 5) / (3x + 2y + 15), {3x + y ≤ 6, 3x + 4y == -2, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4}, {x, y}]

Out[1]= {x -> -1, y -> (1/4)}
```

---

Use ``VectorLessEqual`` to express several ``LessEqual`` inequality constraints at once:

```wl
In[1]:= LinearFractionalOptimization[(2 + x + 2y) / (1 + 50x + 50y), VectorLessEqual[{{3x + 4y, 5x - 6y}, {-2, -20}}], {x, y}]

Out[1]= {x -> -(46/19), y -> (25/19)}
```

Use Esc ``v<=`` Esc to enter the vector inequality in a compact form:

```wl
In[2]:= LinearFractionalOptimization[(2 + x + 2y) / (1 + 50x + 50y), {3x + 4y, 5x - 6y}\[VectorLessEqual]{-2, -20}, {x, y}]

Out[2]= {x -> -(46/19), y -> (25/19)}
```

An equivalent form using scalar inequalities:

```wl
In[3]:= LinearFractionalOptimization[(2 + x + 2y) / (1 + 50x + 50y), {3x + 4y ≤ -2, 5x - 6y ≤ -20}, {x, y}]

Out[3]= {x -> -(46/19), y -> (25/19)}
```

---

Use ``VectorGreaterEqual`` to express several ``GreaterEqual`` inequality constraints at once:

```wl
In[1]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), VectorGreaterEqual[{{3x + 4y, x}, {12, 2}}], {x, y}]

Out[1]= {x -> 2, y -> (3/2)}
```

Use Esc ``v>=`` Esc to enter the vector inequality in a compact form:

```wl
In[2]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y, x}\[VectorGreaterEqual]{12, 2}, {x, y}]

Out[2]= {x -> 2, y -> (3/2)}
```

An equivalent form using scalar inequalities:

```wl
In[3]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y ≥ 12, x ≥ 2}, {x, y}]

Out[3]= {x -> 2, y -> (3/2)}
```

---

Use ``Equal`` to express several equality constraints at once:

```wl
In[1]:= LinearFractionalOptimization[(x - y) / (1 + x + y), {x + y, x - y} == {1 / 2, 1}, {x, y}]

Out[1]= {x -> (3/4), y -> -(1/4)}
```

An equivalent form using several scalar equalities:

```wl
In[2]:= LinearFractionalOptimization[(x - y) / (1 + x + y), {x + y == 1 / 2, x - y == 1}, {x, y}]

Out[2]= {x -> (3/4), y -> -(1/4)}
```

---

Specify constraints using a combination of scalar and vector inequalities:

```wl
In[1]:= LinearFractionalOptimization[-(x + y + 5) / (3x + 2y + 15), {3x + y ≤ 6, 3x + 4y ≤ 12, {-1, -2}\[VectorLessEqual]{x, y}\[VectorLessEqual]{1, 4}}, {x, y}]

Out[1]= {x -> -1, y -> (15/4)}
```

---

Minimize $(x+y-5)/(3x+2y+15)$ subject to $3x+4y\geq 12,x\geq 2$. Use vector variable $v=\{x,y\}$ :

```wl
In[1]:= LinearFractionalOptimization[({1, 1}.v - 5) / ({3, 2}.v + 15), {{3, 4}.v ≥ 12, {1, 0}.v ≥ 2}, v]

Out[1]= {v -> {2, (3/2)}}
```

An equivalent form using scalar variables:

```wl
In[2]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y ≥ 12, x ≥ 2}, {x, y}]

Out[2]= {x -> 2, y -> (3/2)}
```

---

Use vector variables and vector inequalities to specify the problem:

```wl
In[1]:= LinearFractionalOptimization[({1, 1}.v - 5) / ({3, 2}.v + 15), {{3, 4}, {1, 0}}.v\[VectorGreaterEqual]{12, 2}, v]

Out[1]= {v -> {2, (3/2)}}
```

---

Use ``Indexed`` to access components of a vector variable, e.g. ``Indexed[v, 1]`` :

```wl
In[1]:= LinearFractionalOptimization[({1, 1}.v - 5) / ({3, 2}.v + 15), {{3, 4}.v ≥ 12, Indexed[v, 1] ≥ 2}, v]

Out[1]= {v -> {2, (3/2)}}
```

---

Use constant parameter equations to specify the coefficients of the objective and constraints:

```wl
In[1]:=
parameterEqns = {α == {1, 1}, γ == {3, 2}, a1 == {3, 4}, a2 == {1, 0}};
constraints = {a1.{x, y} ≥ 12, a2.{x, y} ≥ 2};

In[2]:= LinearFractionalOptimization[(α.{x, y} - 5) / (γ.{x, y} + 15), {parameterEqns, constraints}, {x, y}]

Out[2]= {x -> 2, y -> (3/2)}
```

---

Use constant parameter equations to specify coefficients of several constraints:

```wl
In[1]:= parameterEqns = {α == {1, 1}, γ == {3, 2}, a == {{3, 4}, {1, 0}}, b == {12, 2}};

In[2]:= LinearFractionalOptimization[(α.{x, y} - 5) / (γ.{x, y} + 15), {parameterEqns, a.{x, y}\[VectorGreaterEqual]b}, {x, y}]

Out[2]= {x -> 2, y -> (3/2)}
```

---

Minimize $\left(\sum _{i=1}^{2}x_i-5\right)/\left(3x_1+2x_2+\sum _{i=1}^{3}y_i\right)$ subject to $a.x\succeq b$, $y\succeq 1$, $y\in \mathbb{R}^3$ :

```wl
In[1]:= parameterEqns = {γ == {3, 2}, a == {{3, 4}, {1, 0}}, b == {12, 2}};

In[2]:= LinearFractionalOptimization[(Total[x] - 5) / (γ.x + Total[y]), {parameterEqns, a.x\[VectorGreaterEqual]b, y\[VectorGreaterEqual]1}, {x, y}]
```

LinearFractionalOptimization::vardim: The dimensionality of variables {y} could not be determined automatically.  The dimensionality may be given by specifying variables using Element.

```wl
Out[2]= LinearFractionalOptimization[(-5 + Total[x])/(γ.x + Total[y]), {{γ == {3, 2}, a == {{3, 4}, {1, 0}}, b == {12, 2}}, a.x\[VectorGreaterEqual]b, y\[VectorGreaterEqual]1}, {x, y}]
```

Use ``Vectors[n]`` to specify the dimension of vector variables when needed:

```wl
In[3]:= LinearFractionalOptimization[(Total[x] - 5) / (γ.x + Total[y]), {parameterEqns, a.x\[VectorGreaterEqual]b, y\[VectorGreaterEqual]1}, {x, y∈Vectors[3]}]

Out[3]= {x -> {2, (3/2)}, y -> {1, 1, 1}}
```

---

Specify non-negative constraints using ``NonNegativeReals``:

```wl
In[1]:= LinearFractionalOptimization[({5, -3}.v + 2) / ({4, 1}.v + 5), {{-1, -2}, {1, 3}}.v\[VectorGreaterEqual]{-4, 6}, v∈Vectors[2, NonNegativeReals]]

Out[1]= {v -> {0, 2}}
```

An equivalent form using vector inequalities:

```wl
In[2]:= LinearFractionalOptimization[({5, -3}.v + 2) / ({4, 1}.v + 5), {{{-1, -2}, {1, 3}}.v\[VectorGreaterEqual]{-4, 6}, v\[VectorGreaterEqual]0}, v]

Out[2]= {v -> {0, 2}}
```

---

Specify non-positive constraints using ``NonPositiveReals``:

```wl
In[1]:= LinearFractionalOptimization[({-5, 3}.v + 2) / ({-4, -1}.v + 5), {{1, 2}, {-1, -3}}.v\[VectorGreaterEqual]{-4, 6}, v∈Vectors[2, NonPositiveReals]]

Out[1]= {v -> {0, -2}}
```

An equivalent form using vector inequalities:

```wl
In[2]:= LinearFractionalOptimization[({-5, 3}.v + 2) / ({-4, -1}.v + 5), {{{1, 2}, {-1, -3}}.v\[VectorGreaterEqual]{-4, 6}, v\[VectorLessEqual]0}, v]

Out[2]= {v -> {0, -2}}
```

---

Minimize $(\alpha .x+\beta )/(\gamma .x+\delta )$ subject to $a.x+b\succeq 0$ and $a_{eq}.x+b_{eq}=0$ by specifying the scalars, vectors and matrices:

```wl
In[1]:=
{α, β, γ, δ} = {{1, 1}, -5, {3, 2}, 15};
{a, b} = {{{3, 4}}, {-12}};
{aeq, beq} = {{{1, 0}}, {0}};
```

Solve using the matrix-vector form:

```wl
In[2]:= LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {aeq, beq}]

Out[2]= {0, 3}
```

The equivalent form using variables and inequalities:

```wl
In[3]:= constraints = {Thread[a.{x, y} + b ≥ {0}], Thread[aeq.{x, y} + beq == {0}]}

Out[3]= {{-12 + 3 x + 4 y ≥ 0}, {x == 0}}

In[4]:= LinearFractionalOptimization[(α.{x, y} + β) / (γ.{x, y} + δ), constraints, {x, y}]

Out[4]= {x -> 0, y -> 3}
```

---

Try to find a vector $(x,y)$ that minimizes the function $(3+x+2 y)/(1-5x-5 y)$ subject to the constraints $3x+4y\leq  7,5x-6y\leq  20, -7x+8y\geq 27, x\geq -3$ :

```wl
In[1]:= res = LinearFractionalOptimization[(3 + x + 2y) / (1 - 5x - 5y), {3x + 4y ≤ 7, 5x - 6y ≤ 20, -7x + 8y ≥ 27, x ≥ -3}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]
```

LinearFractionalOptimization::ubndf: The feasible region includes points where the denominator γ.x + δ is zero. LinearFractionalOptimization will return a solution on the limiting line γ.x + δ == 0, where the minimizing function is unbounded.

```wl
Out[1]= {-∞, {x -> -3, y -> (16/5)}}
```

The minimum value is unbounded because the denominator vanishes along a line that crosses the feasible region:

```wl
In[2]:= Plot3D[(3 + x + 2y) / (1 - 5x - 5y), {x, y}∈ImplicitRegion[...]]

Out[2]= [image]
```

#### Integer Variables (4)

Specify integer domain constraints using ``Integers``:

```wl
In[1]:= LinearFractionalOptimization[(x + y - 5) / (3x + 2y + 15), {3x + 4y ≥ 12, x ≥ 2}, {x∈Integers, y∈Integers}]

Out[1]= {x -> 2, y -> 2}
```

---

Specify integer domain constraints on vector variables using ``Vectors[n, Integers]`` :

```wl
In[1]:= LinearFractionalOptimization[(Total[x] - 5) / ({3, 2}.x + y), {{{3, 4}, {1, 0}}.x\[VectorGreaterEqual]{12, 2}, y ≥ 1}, {x∈Vectors[2, Integers], y}]

Out[1]= {x -> {2, 2}, y -> 1.}
```

---

Specify non-negative integer domain constraints using ``NonNegativeIntegers``:

```wl
In[1]:= LinearFractionalOptimization[({5, -3}.v + 2) / ({4, 1}.v + 5), {{-1, -2}, {1, 3}}.v\[VectorGreaterEqual]{-4, 6}, v∈Vectors[2, NonNegativeIntegers]]

Out[1]= {v -> {0, 2}}
```

An equivalent form using vector inequalities:

```wl
In[2]:= LinearFractionalOptimization[({5, -3}.v + 2) / ({4, 1}.v + 5), {{{-1, -2}, {1, 3}}.v\[VectorGreaterEqual]{-4, 6}, v\[VectorGreaterEqual]0}, v∈Vectors[2, Integers]]

Out[2]= {v -> {0, 2}}
```

---

Specify non-positive integer domain constraints using ``NonPositiveIntegers``:

```wl
In[1]:= LinearFractionalOptimization[({-5, 3}.v + 2) / ({-4, -1}.v + 5), {{1, 2}, {-1, -3}}.v\[VectorGreaterEqual]{-4, 6}, v∈Vectors[2, NonPositiveIntegers]]

Out[1]= {v -> {0, -2}}
```

An equivalent form using vector inequalities:

```wl
In[2]:= LinearFractionalOptimization[({-5, 3}.v + 2) / ({-4, -1}.v + 5), {{{1, 2}, {-1, -3}}.v\[VectorGreaterEqual]{-4, 6}, v\[VectorLessEqual]0}, v∈Vectors[2, Integers]]

Out[2]= {v -> {0, -2}}
```

#### Complex Variables (2)

Specify complex variables using ``Complexes`` :

```wl
In[1]:= LinearFractionalOptimization[Re[x] / Im[1 + 3x], {x\[VectorGreaterEqual]1 + I, x\[VectorLessEqual]2 + 2I}, {x∈Complexes}]

Out[1]= {x -> 1 + 2 I}
```

---

Specify the complex domain on vector variables using ``Vectors[2, Complexes]`` :

```wl
In[1]:= LinearFractionalOptimization[Re[{-5, 3}.v + 2] / Re[{-4, -1}.v + 5], {{{1, 2}, {-1, -3}}.v\[VectorGreaterEqual]{-4, 6}, v\[VectorLessEqual]I}, v∈Vectors[2, Complexes]]

Out[1]= {v -> {I, -2 - (I/3)}}
```

#### Primal Model Properties (3)

Minimize $-(x+y+5)/(3x+2y+15)$ subject to $3x+y\leq 6,3x+4y\leq 12,-1\leq x\leq 1,-2\leq y\leq 4$ :

```wl
In[1]:= LinearFractionalOptimization[-(x + y + 5) / (3x + 2y + 15), {3x + y ≤ 6, 3x + 4y ≤ 12, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4}, {x, y}]

Out[1]= {x -> -1, y -> (15/4)}
```

Get the primal minimizer as a vector:

```wl
In[2]:= LinearFractionalOptimization[-(x + y + 5) / (3x + 2y + 15), {3x + y ≤ 6, 3x + 4y ≤ 12, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4}, {x, y}, "PrimalMinimizer"]

Out[2]= {-1, (15/4)}
```

Get the minimal value:

```wl
In[3]:= LinearFractionalOptimization[-(x + y + 5) / (3x + 2y + 15), {3x + y ≤ 6, 3x + 4y ≤ 12, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4}, {x, y}, "PrimalMinimumValue"]

Out[3]= -(31/78)
```

---

Solve an optimization problem using symbolic inputs:

```wl
In[1]:=
obj = (x + y - 5) / (3x + 2y + 15);
cons = {3x + 4y ≥ 12, x == 0};

In[2]:= LinearFractionalOptimization[obj, cons, {x, y}]

Out[2]= {x -> 0, y -> 3}
```

Extract the matrix-vector inputs of the optimization problem:

```wl
In[3]:=
{{α, β, γ, δ}, {a, b}, {aeq, beq}} = LinearFractionalOptimization[obj, cons, 
	{x, y}, {"LinearFractionalObjectiveCoefficients", "LinearInequalityConstraints", "LinearEqualityConstraints"}];
```

Recover the symbolic input result using the matrix-vector inputs:

```wl
In[4]:= LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {aeq, beq}]

Out[4]= {0, 3}
```

---

Find the slack associated with a minimization problem:

```wl
In[1]:= obj = -(x + y + 5) / (3x + 2y + 15); cons = {3x + y ≤ 6, 3x + 4y == -2, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4};

In[2]:= {sineq, seq} = LinearFractionalOptimization[obj, cons, {x, y}, "Slack"]

Out[2]= {{(9/4), (35/4), 0, 2, (15/4)}, {0}}
```

Get the primal minimizer and the constraint matrices:

```wl
In[3]:= {xstar, {a, b}, {aeq, beq}} = LinearFractionalOptimization[obj, cons, {x, y}, {...}];
```

The slack for inequality constraints $a.x^*+b\succeq 0$ is a vector $s_{\text{ineq}}$ such that $a.x^*+b-s_{\text{ineq}}=0$ :

```wl
In[4]:= Norm[a.xstar + b - sineq] == 0

Out[4]= True
```

The slack for the equality constraints $a_{eq}.x^*+b_{eq}=0$ is a vector $s_{\text{eq}}$ such that $a_{eq}.x^*+b_{eq}-s_{\text{eq}}=0$ :

```wl
In[5]:= Norm[aeq.xstar + beq - seq] == 0

Out[5]= True
```

The equality slack $s_{\text{eq}}$ is typically a zero vector:

```wl
In[6]:= seq

Out[6]= {0}
```

#### Dual Model Properties (3)

Minimize $(\alpha .x+\beta )/(\gamma .x+\delta )$ subject to $a.x+b\unicode{f435}0$ and $a_{\text{\textit{eq}}}.x+b_{\text{\textit{eq}}}==0$:

```wl
In[1]:=
{α, β, γ, δ} = {{-1, -1}, -5, {3, 2}, 15};
{a, b} = {{{-3, -1}, {-1, 0}, {0, -1}}, {6, 1, 1}};
{aeq, beq} = {{{3, 4}}, {2}};
xv = Table[Indexed[x, i], {i, 2}];

In[2]:= psol = LinearFractionalOptimization[(α.xv + β) / (γ.xv + δ), {a.xv + b\[VectorGreaterEqual]0, aeq.xv + beq == 0}, xv]

Out[2]= {Indexed[x, {1}] -> -2, Indexed[x, {2}] -> 1}
```

The dual problem is to maximize $\zeta$ subject to $\delta \zeta +b.\lambda +b_{eq}.\nu \leq \beta$, $\zeta \gamma +a^{\mathsf{T}}.\lambda +a_{eq}^{\mathsf{T}}.\nu =\alpha$ and $\lambda \succeq 0$:

```wl
In[3]:= {λv, νv} = {Table[Indexed[λ, i], {i, 3}], Table[Indexed[ν, i], {i, 1}]};

In[4]:=
dsol = LinearOptimization[-ζ, {δ ζ + b.λv + beq.νv ≤ β, ζ γ + a\[Transpose].λv + aeq\[Transpose].νv == α, 
	λv\[VectorGreaterEqual]0}, Join[{ζ}, λv, νv]]

Out[4]= {ζ -> -(4/11), Indexed[λ, {1}] -> 0, Indexed[λ, {2}] -> 0, Indexed[λ, {3}] -> (13/33), Indexed[ν, {1}] -> (1/33)}
```

The primal minimum value and the dual maximum value coincide because of strong duality:

```wl
In[5]:= {(α.xv + β) / (γ.xv + δ) /. psol, ζ /. dsol}

Out[5]= {-(4/11), -(4/11)}
```

The duality gap is zero. In general, $\left(\alpha .x^*+\beta \right)/\left(\gamma .x^*+\delta \right)\leq -b.\lambda ^*-b_{eq}.\nu ^*$ at optimal points:

```wl
In[6]:= LinearFractionalOptimization[(α.xv + β) / (γ.xv + δ), {a.xv + b\[VectorGreaterEqual]0, aeq.xv + beq == 0}, xv, "DualityGap"]

Out[6]= 0
```

---

Construct the dual problem using constraint matrices extracted from the primal problem:

```wl
In[1]:=
obj = -(x + y + 5) / (3x + 2y + 15);
constraints = {3x + y ≤ 6, 3x + 4y == -2, x ≤ 1, y ≤ 1};

In[2]:= LinearFractionalOptimization[obj, constraints, {x, y}]

Out[2]= {x -> -2, y -> 1}
```

Extract the objective vectors and constraint matrices:

```wl
In[3]:= {{α, β, γ, δ}, {a, b}, {aeq, beq}} = LinearFractionalOptimization[obj, constraints, {x, y}, {"LinearFractionalObjectiveCoefficients", "LinearInequalityConstraints", "LinearEqualityConstraints"}];
```

The dual problem is to maximize $\zeta$ subject to $\delta \zeta +b.\lambda +b_{eq}.\nu \leq \beta$, $\zeta \gamma +a^{\mathsf{T}}.\lambda +a_{eq}^{\mathsf{T}}.\nu =\alpha$ and $\lambda \succeq 0$. Use ``Inactive[Times]`` to avoid threading in the equality constraint:

```wl
In[4]:= LinearOptimization[-ζ, {δ ζ + b.λ + beq.ν ≤ β, ζ * γ + a\[Transpose].λ + aeq\[Transpose].ν == α, λ\[VectorGreaterEqual]0}, {ζ, λ, ν}]

Out[4]= {ζ -> -(4/11), λ -> {0, 0, (13/33)}, ν -> {(1/33)}}
```

---

Get the dual maximum value directly using solution properties:

```wl
In[1]:= obj = -(x + y + 5) / (3x + 2y + 15);cons = {3x + y ≤ 6, 3x + 4y == -2, x ≤ 1, y ≤ 1};

In[2]:= LinearFractionalOptimization[obj, cons, {x, y}, "DualMaximumValue"]

Out[2]= -(4/11)
```

Get the dual maximizer directly using solution properties:

```wl
In[3]:= LinearFractionalOptimization[obj, cons, {x, y}, "DualMaximizer"]

Out[3]= {-(4/11), {0, (13/33), 0}, {(1/33)}}
```

#### Sensitivity Properties (4)

Use ``"ConstraintSensitivity"`` to find the change in optimal value due to constraint relaxations:

```wl
In[1]:=
{α, β, γ, δ} = {{1, 1}, -5, {3, 2}, 15};
{a, b} = {{{-3, 4}, {1, 0}, {0, -1}}, {12., 0., 0.}};
pstar = LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, "PrimalMinimumValue"]

Out[1]= -0.888889
```

The first vector is inequality sensitivity and the second is equality sensitivity:

```wl
In[2]:= s = LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, "ConstraintSensitivity"]

Out[2]= {{-0.0771605, -0.638889, 0.}, {}}
```

Consider new constraints $a.x+b+\delta b\succeq 0$, where $\delta b$ is the relaxation. The approximate new optimal value is:

```wl
In[3]:=
δb = {0.01, 0.01, 0.01};
pstar + δb.s[[1]]

Out[3]= -0.896049
```

Compare directly by solving the relaxed problem:

```wl
In[4]:= LinearFractionalOptimization[{α, β, γ, δ}, {a, b + δb}, "PrimalMinimumValue"]

Out[4]= -0.896089
```

---

Each sensitivity is associated with inequality or equality constraints:

```wl
In[1]:=
obj = (-5 + x + y) / (15 + 3x + 2y + z);
constraints = {3x - 4y ≤ 1 / 2, x ≥ 0, x + y + z == 10, x - z == 40};

In[2]:= {sλ, sν} = LinearFractionalOptimization[obj, constraints, {x, y, z}, "ConstraintSensitivity"]

Out[2]= {{0, -(1/825)}, {(64/61875), -(461/123750)}}
```

Extract the constraints:

```wl
In[3]:= {{a, b}, {aeq, beq}} = LinearFractionalOptimization[obj, constraints, {x, y, z}, {"LinearInequalityConstraints", "LinearEqualityConstraints"}];
```

The inequality constraints and their associated sensitivity:

```wl
In[4]:= Table[{a[[i]].{x, y, z} + b[[i]] ≥ 0, sλ[[i]]}, {i, 2}]

Out[4]= {{x ≥ 0, 0}, {(1/2) - 3 x + 4 y ≥ 0, -(1/825)}}
```

The equality constraints and their associated sensitivity:

```wl
In[5]:= Table[{aeq[[i]].{x, y, z} + beq[[i]] == 0, sν[[i]]}, {i, 2}]

Out[5]= {{-10 + x + y + z == 0, (64/61875)}, {-40 + x - z == 0, -(461/123750)}}
```

---

The change in optimal value due to constraint relaxation is proportional to the sensitivity value:

```wl
In[1]:=
{α, β, γ, δ} = {{1, 1, 0}, -5, {3, 2, 1}, 15};
{a, b} = {{{-3, 4, 0}, {1, 0, 0}}, {1 / 2, 0}};
{aeq, beq} = {{{1, 1, 1}, {1, 0, -1}}, {-10, -40}};
```

Compute the minimal value and constraint sensitivity:

```wl
In[2]:= {pstar, {sλ, sν}} = LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {aeq, beq}, {"PrimalMinimumValue", "ConstraintSensitivity"}]

Out[2]= {(589/1650), {{-(1/825), 0}, {(64/61875), -(461/123750)}}}
```

A zero sensitivity will not change the optimal value if the constraint is relaxed:

```wl
In[3]:= sλ[[2]]

Out[3]= 0

In[4]:= pstar == LinearFractionalOptimization[{α, β, γ, δ}, {a, b + {0, 1}}, {aeq, beq}, "PrimalMinimumValue"]

Out[4]= True
```

A negative sensitivity will decrease the optimal value:

```wl
In[5]:= sλ[[1]]

Out[5]= -(1/825)

In[6]:= pstar > LinearFractionalOptimization[{α, β, γ, δ}, {a, b + {1, 0}}, {aeq, beq}, "PrimalMinimumValue"]

Out[6]= True
```

A positive sensitivity will increase the optimal value:

```wl
In[7]:= sν[[1]]

Out[7]= (64/61875)

In[8]:= pstar < LinearFractionalOptimization[{α, β, γ, δ}, {a, b}, {aeq, beq + {1, 0}}, "PrimalMinimumValue"]

Out[8]= True
```
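The proportionality can also be cross-checked outside the Wolfram Language. The following Python sketch (an illustration only, assuming NumPy and SciPy are available) solves the same problem through the Charnes-Cooper substitution $u=t\,x$, $t=1/(\gamma .x+\delta )$, which turns the fractional program into an ordinary linear program, and then recovers the $-1/825$ sensitivity by a finite difference:

```python
import numpy as np
from scipy.optimize import linprog

# Same data as above: objective (alpha.x + beta)/(gamma.x + delta).
alpha, beta = np.array([1.0, 1.0, 0.0]), -5.0
gamma, delta = np.array([3.0, 2.0, 1.0]), 15.0
a = np.array([[-3.0, 4.0, 0.0], [1.0, 0.0, 0.0]])
aeq = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, -1.0]])
beq = np.array([-10.0, -40.0])

def lfp_min(b):
    """Minimum of the fractional objective via the Charnes-Cooper LP."""
    # Variables z = (u, t), with u = t*x and t = 1/(gamma.x + delta) > 0.
    c = np.append(alpha, beta)                       # objective alpha.u + beta*t
    A_ub = -np.column_stack([a, b])                  # a.u + b*t >= 0, in <= 0 form
    A_eq = np.vstack([np.column_stack([aeq, beq]),   # aeq.u + beq*t == 0
                      np.append(gamma, delta)])      # gamma.u + delta*t == 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2), A_eq=A_eq,
                  b_eq=[0.0, 0.0, 1.0],
                  bounds=[(None, None)] * 3 + [(0, None)])
    return res.fun

p0 = lfp_min(np.array([0.5, 0.0]))   # 589/1650, the reported minimum
p1 = lfp_min(np.array([1.5, 0.0]))   # first inequality relaxed by 1
print(p1 - p0)                       # ~ -1/825, the first inequality sensitivity
```

Relaxing the first inequality by one unit lowers the minimum by exactly $1/825$, matching the reported sensitivity.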

---

The ``"ConstraintSensitivity"`` is related to the dual maximizer of the problem:

```wl
In[1]:=
obj = (-5 + x + y) / (15 + 3x + 2y + z);
constraints = {3x - 4y ≤ 1 / 2, x ≥ 0, x + y + z == 10, x - z == 40};

In[2]:=
{{Subscript[s, λ], Subscript[s, ν]}, {ζ, λstar, νstar}} = LinearFractionalOptimization[
	obj, constraints, {x, y, z}, {"ConstraintSensitivity", "DualMaximizer"}]

Out[2]= {{{0, -(1/825)}, {-(461/123750), (64/61875)}}, {(589/1650), {0, (1/11)}, {(461/1650), -(64/825)}}}
```

The inequality sensitivity $s_{\lambda }=\frac{\partial p^*}{\partial b}$ satisfies $s_{\lambda }=-\lambda^*/\left(\gamma .x^*+\delta \right)$, where $\lambda^*$ is the dual inequality maximizer:

```wl
In[3]:= {xstar, ystar, zstar} = LinearFractionalOptimization[obj, constraints, {x, y, z}, "PrimalMinimizer"];

In[4]:= Subscript[s, λ] == -λstar / (15 + 3 xstar + 2 ystar + zstar)

Out[4]= True
```

The equality sensitivity $s_{\nu }=\frac{\partial p^*}{\partial b_{eq}}$ satisfies $s_{\nu }=-\nu^*/\left(\gamma .x^*+\delta \right)$, where $\nu^*$ is the dual equality maximizer:

```wl
In[5]:= Subscript[s, ν] == -νstar / (15 + 3 xstar + 2 ystar + zstar)

Out[5]= True
```

### Options (8)

#### Method (4)

The default method for ``MachinePrecision`` is ``"CLP"``:

```wl
In[1]:=
obj = -(x + y + 5) / (3x + 2y + 15);
cons = {3x + y ≤ 6, 3x + 4y ≤ 12, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4};

In[2]:= LinearFractionalOptimization[obj, N[cons], {x, y}, Method -> Automatic]

Out[2]= {x -> -1., y -> 3.75}

In[3]:= LinearFractionalOptimization[obj, N[cons], {x, y}, Method -> "CLP"]

Out[3]= {x -> -1., y -> 3.75}
```

The default method for arbitrary or infinite ``WorkingPrecision`` is ``"Simplex"``:

```wl
In[4]:= LinearFractionalOptimization[obj, cons, {x, y}, Method -> Automatic]

Out[4]= {x -> -1, y -> (15/4)}

In[5]:= LinearFractionalOptimization[obj, cons, {x, y}, Method -> "Simplex"]

Out[5]= {x -> -1, y -> (15/4)}
```

---

All methods work for ``MachinePrecision`` input:

```wl
In[1]:=
obj = -(x + y + 5) / (3x + 2y + 15);
cons = {3x + y ≤ 6, 3x + 4y ≤ 12, -1 ≤ x ≤ 1, -2 ≤ y ≤ 4};

In[2]:= Table[LinearFractionalOptimization[obj, N[cons], {x, y}, Method -> met], {met, {"Simplex", "RevisedSimplex", "InteriorPoint", "CLP"}}]

Out[2]= {{x -> -1., y -> 3.75}, {x -> -1., y -> 3.75}, {x -> -1., y -> 3.75}, {x -> -1., y -> 3.75}}
```

``"Simplex"`` and ``"RevisedSimplex"`` can be used for arbitrary- and infinite-precision inputs:

```wl
In[3]:= Table[LinearFractionalOptimization[obj, N[cons, 20], {x, y}, Method -> met], {met, {"Simplex", "RevisedSimplex"}}]

Out[3]= {{x -> -1.0000000000000000, y -> 3.750000000000000}, {x -> -1.00000000000000000, y -> 3.7500000000000000}}
```

---

Different methods have different strengths, which are typically problem- and implementation-dependent:

```wl
In[1]:= a = SparseArray[{Band[{1, 1}] -> 1., Band[{1, 2}] -> 2.}, {500, 300}];
```

``"Simplex"`` and ``"RevisedSimplex"`` are good for small dense problems:

```wl
In[2]:=
Timing[LinearFractionalOptimization[Total[x] / (1 + Total[x]), 
	{a.x\[VectorGreaterEqual]-1, x\[VectorGreaterEqual]0}, x, Method -> "Simplex"];]

Out[2]= {1.43235, Null}
```

``"InteriorPoint"`` and ``"CLP"`` are good for large sparse problems:

```wl
In[3]:=
Timing[LinearFractionalOptimization[Total[x] / (1 + Total[x]), 
	{a.x\[VectorGreaterEqual]-1, x\[VectorGreaterEqual]0}, x, Method -> "InteriorPoint"];]

Out[3]= {0.041819, Null}

In[4]:=
Timing[LinearFractionalOptimization[Total[x] / (1 + Total[x]), 
	{a.x\[VectorGreaterEqual]-1, x\[VectorGreaterEqual]0}, x, Method -> "CLP"];]

Out[4]= {0.007081, Null}
```

---

Different methods may give different results for problems where the optimal solution set is not unique:

```wl
In[1]:= ContourPlot[-(x + y) / (1 + x + y), {x, y}∈ImplicitRegion[x + y ≤ 1 && x ≥ 0 && y ≥ 0, {x, y}], ContourLabels -> True, ImageSize -> Small]

Out[1]= [image]
```

``"InteriorPoint"`` may return a solution in the middle of the optimal solution set:

```wl
In[2]:= LinearFractionalOptimization[-(x + y) / (1 + x + y), {x + y ≤ 1, x ≥ 0, y ≥ 0}, {x, y}, Method -> "InteriorPoint"]

Out[2]= {x -> 0.5, y -> 0.5}
```

``"Simplex"`` always returns a solution at a corner of the optimal solution set:

```wl
In[3]:= LinearFractionalOptimization[-(x + y) / (1 + x + y), {x + y ≤ 1, x ≥ 0, y ≥ 0}, {x, y}, Method -> "Simplex"]

Out[3]= {x -> 1, y -> 0}
```

#### Tolerance (2)

A smaller ``Tolerance`` setting gives a more precise result:

```wl
In[1]:=
n = 2000; m = 1000;
a = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> 2}, {m, n}];
b = Range[m];
```

Compute the minimum value with a very small tolerance to serve as a reference:

```wl
In[2]:= Subscript[p, ref] = LinearFractionalOptimization[Total[x] / (1 + Total[x]), {a.x\[VectorGreaterEqual]b, x\[VectorGreaterEqual]0}, x, "PrimalMinimumValue", Method -> "InteriorPoint", Tolerance -> 10 ^ -12]

Out[2]= 0.999994
```

Compute the solution using different ``Tolerance`` settings:

```wl
In[3]:=
err = Table[{tol, Abs[Subscript[p, ref] - LinearFractionalOptimization[
	..., Tolerance -> tol]]}, {tol, 10. ^ -Range[7]}]

Out[3]= {{0.1, 0.062148}, {0.01, 0.00033178}, {0.001, 0.00033178}, {0.0001, 1.845448343917866`*^-6}, {0.00001, 1.845448343917866`*^-6}, {1.`*^-6, 1.5098537109459187`*^-7}, {1.`*^-7, 8.46609760074557`*^-9}}
```

Visualize the change in error with respect to tolerance:

```wl
In[4]:= ListLogLogPlot[err, PlotRange -> All, Joined -> True, PlotMarkers -> Automatic]

Out[4]= [image]
```

---

A smaller ``Tolerance`` setting gives a more precise answer but typically takes longer to compute:

```wl
In[1]:=
n = 2000; m = 1000;
a = SparseArray[{Band[{1, 1}] -> 1., Band[{1, 2}] -> 2.}, {m, n}];
b = Range[m];
```

A smaller tolerance takes longer:

```wl
In[2]:= Timing[ptight = LinearFractionalOptimization[Total[x] / (n + Total[x]), {a.x\[VectorGreaterEqual]b, x\[VectorGreaterEqual]0}, x, "PrimalMinimumValue", Tolerance -> 10^-12, Method -> "InteriorPoint"];]

Out[2]= {0.233816, Null}

In[3]:= Timing[ploose = LinearFractionalOptimization[Total[x] / (n + Total[x]), {a.x\[VectorGreaterEqual]b, x\[VectorGreaterEqual]0}, x, "PrimalMinimumValue", Tolerance -> 10^-1, Method -> "InteriorPoint"];]

Out[3]= {0.096922, Null}
```

The tighter tolerance gives a more precise answer:

```wl
In[4]:= {ptight, ploose}

Out[4]= {0.988162, -0.481724}
```

#### WorkingPrecision (2)

Exact input gives exact output:

```wl
In[1]:= LinearFractionalOptimization[(Subscript[x, 1] + Subscript[x, 2] - 5) / (3Subscript[x, 1] + 2Subscript[x, 2] + 15), {3Subscript[x, 1] + 4Subscript[x, 2] ≥ 12, Subscript[x, 1] ≥ 2}, {Subscript[x, 1], Subscript[x, 2]}]

Out[1]= {Subscript[x, 1] -> 2, Subscript[x, 2] -> (3/2)}
```

Use ``MachinePrecision`` to get an inexact result:

```wl
In[2]:= LinearFractionalOptimization[(Subscript[x, 1] + Subscript[x, 2] - 5) / (3Subscript[x, 1] + 2Subscript[x, 2] + 15), {3Subscript[x, 1] + 4Subscript[x, 2] ≥ 12, Subscript[x, 1] ≥ 2}, {Subscript[x, 1], Subscript[x, 2]}, WorkingPrecision -> MachinePrecision]

Out[2]= {Subscript[x, 1] -> 2., Subscript[x, 2] -> 1.5}
```

---

``LinearFractionalOptimization`` can compute results using a higher working precision:

```wl
In[1]:= LinearFractionalOptimization[(Subscript[x, 1] + Subscript[x, 2] - 5) / (3Subscript[x, 1] + 2Subscript[x, 2] + 15), {3Subscript[x, 1] + 4Subscript[x, 2] ≥ 12, Subscript[x, 1] ≥ 2}, {Subscript[x, 1], Subscript[x, 2]}, WorkingPrecision -> 30]

Out[1]= {Subscript[x, 1] -> 2.00000000000000000000000000, Subscript[x, 2] -> 1.50000000000000000000000000}
```

If the precision is less than the precision of the input arguments, a message is issued:

```wl
In[2]:= LinearFractionalOptimization[(Subscript[x, 1] + Subscript[x, 2] - 5.0) / (3Subscript[x, 1] + 2Subscript[x, 2] + 15), {3Subscript[x, 1] + 4Subscript[x, 2] ≥ 12, Subscript[x, 1] ≥ 2}, {Subscript[x, 1], Subscript[x, 2]}, WorkingPrecision -> 30]
```

LinearFractionalOptimization::precw: The precision of the argument function ({{1.,1.},-5.,{3.,2.},15.}) is less than WorkingPrecision (30.\`).

LinearFractionalOptimization::precw: The precision of the argument function ({SparseArray[Automatic,{2,2},0,{1,{{0,1,3},{{1},{1},{2}}},{1.,3.,4.}}],{-2.,-12.}}) is less than WorkingPrecision (30.\`).

```wl
Out[2]= {Subscript[x, 1] -> 2.00000000000000000000000000, Subscript[x, 2] -> 1.50000000000000000000000000}
```

### Applications (16)

#### Basic Modeling Transformations (6)

Maximize $\left(5-x_1-x_2\right)/\left(5+3x_1+2x_2\right)$ subject to $3x_1+2x_2\geq 1,x_1\leq 1$. Solve a maximization problem by negating the objective function:

```wl
In[1]:= LinearFractionalOptimization[-(5 - Subscript[x, 1] - Subscript[x, 2]) / (5 + 3Subscript[x, 1] + 2Subscript[x, 2]), {3Subscript[x, 1] + 2Subscript[x, 2] ≥ 1, Subscript[x, 1] ≤ 1}, {Subscript[x, 1], Subscript[x, 2]}]

Out[1]= {Subscript[x, 1] -> 1, Subscript[x, 2] -> -1}
```

Negate the primal minimal value to get the corresponding maximum value:

```wl
In[2]:= -LinearFractionalOptimization[-(5 - Subscript[x, 1] - Subscript[x, 2]) / (5 + 3Subscript[x, 1] + 2Subscript[x, 2]), {3Subscript[x, 1] + 2Subscript[x, 2] ≥ 1, Subscript[x, 1] ≤ 1}, {Subscript[x, 1], Subscript[x, 2]}, "PrimalMinimumValue"]

Out[2]= (5/6)
```

---

Minimize $|x+y|/(1+x+3y)$ subject to $2x-y\geq 1,x\geq 0,y\geq 0$. Let $x+y=t$, and convert $|t|$ into a linear function using $|t|=t^++t^-, t=t^+-t^-, t^+\geq 0, t^-\geq 0$:

```wl
In[1]:= LinearFractionalOptimization[(tp + tm) / (1 + x + 3y), {2x - y ≥ 1, x ≥ 0, y ≥ 0, x + y == tp - tm, tp ≥ 0, tm ≥ 0}, {x, y, tp, tm}]

Out[1]= {x -> (1/2), y -> 0, tp -> (1/2), tm -> 0}
```

The problem can also be converted using $t=|x+y|, -t\leq x+y\leq t$:

```wl
In[2]:= LinearFractionalOptimization[t / (1 + x + 3y), {2x - y ≥ 1, x ≥ 0, y ≥ 0, -t ≤ x + y ≤ t}, {x, y, t}]

Out[2]= {x -> (1/2), y -> 0, t -> (1/2)}
```
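As a cross-check outside the Wolfram Language (illustrative only, assuming SciPy is available), the same $t^+/t^-$ split combined with the Charnes-Cooper substitution $(X,Y,T^+,T^-)=t\,(x,y,t^+,t^-)$, $t=1/(1+x+3y)>0$, reduces the problem to an ordinary linear program:

```python
from scipy.optimize import linprog

# Variables z = (X, Y, TP, TM, t) = t*(x, y, t+, t-, 1), t = 1/(1 + x + 3y).
c = [0, 0, 1, 1, 0]                      # minimize TP + TM, the scaled |x + y|
A_ub = [[-2, 1, 0, 0, 1]]                # 2x - y >= 1  ->  2X - Y - t >= 0
A_eq = [[1, 1, -1, 1, 0],                # x + y == t+ - t-
        [1, 3, 0, 0, 1]]                 # normalization: X + 3Y + t == 1
res = linprog(c, A_ub=A_ub, b_ub=[0], A_eq=A_eq, b_eq=[0, 1],
              bounds=[(0, None)] * 5)    # x, y, t+, t-, t all nonnegative
X, Y, TP, TM, t = res.x
print(res.fun, X / t, Y / t)             # minimum 1/3 at x = 1/2, y = 0
```

Dividing the scaled variables by $t$ recovers the same minimizer $x=1/2$, $y=0$.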

---

Minimize $\|\{x,2y\}\|_1/(1+x+3y)$ subject to $2x-y\geq 1,x\geq 0,y\geq 0$. Since $\|\{x,2y\}\|_1=|x|+|2y|$, letting $s_1=|x|, s_2=|2y|, -s_1\leq x\leq s_1, -s_2\leq 2y\leq s_2$, the objective is transformed to $(s_1+s_2)/(1+x+3y)$:

```wl
In[1]:= LinearFractionalOptimization[(s1 + s2) / (1 + x + 3y), {2x - y ≥ 1, x ≥ 0, y ≥ 0, -s1 ≤ x ≤ s1, -s2 ≤ 2y ≤ s2}, {x, y, s1, s2}]

Out[1]= {x -> (1/2), y -> 0, s1 -> (1/2), s2 -> 0}
```

---

Minimize $\|\{x,2y\}\|_{\infty }/(1+x+3y)$ subject to $2x-y\geq 1,x\geq 0,y\geq 0$. Since $\|\{x,2y\}\|_{\infty }=\max(|x|,|2y|)$, using $s=\|\{x,2y\}\|_{\infty }$, $-s\leq x\leq s, -s\leq 2y\leq s$, the objective is transformed to $s/(1+x+3y)$:

```wl
In[1]:= LinearFractionalOptimization[s / (1 + x + 3y), {2x - y ≥ 1, x ≥ 0, y ≥ 0, -s ≤ x ≤ s, -s ≤ 2y ≤ s}, {x, y, s}]

Out[1]= {x -> (2/3), y -> (1/3), s -> (2/3)}
```
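The same transformation can be sketched in Python (illustrative only, assuming SciPy): the bounds $-s\leq x\leq s$, $-s\leq 2y\leq s$ together with the Charnes-Cooper substitution $(X,Y,S)=t\,(x,y,s)$, $t=1/(1+x+3y)>0$, give a plain linear program:

```python
from scipy.optimize import linprog

# Variables z = (X, Y, S, t) = t*(x, y, s, 1), t = 1/(1 + x + 3y).
c = [0, 0, 1, 0]                         # minimize S, the scaled max-norm
A_ub = [[-2, 1, 0, 1],                   # 2x - y >= 1  ->  2X - Y - t >= 0
        [1, 0, -1, 0],                   # S >= X
        [-1, 0, -1, 0],                  # S >= -X
        [0, 2, -1, 0],                   # S >= 2Y
        [0, -2, -1, 0]]                  # S >= -2Y
A_eq = [[1, 3, 0, 1]]                    # normalization: X + 3Y + t == 1
res = linprog(c, A_ub=A_ub, b_ub=[0] * 5, A_eq=A_eq, b_eq=[1],
              bounds=[(0, None)] * 4)
X, Y, S, t = res.x
print(res.fun, X / t, Y / t)             # minimum 1/4 at x = 2/3, y = 1/3
```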

---

Minimize $f((\beta +\alpha .x)/(\delta +\gamma .x))$ subject to $a.x\preceq b$, where $f$ is a nondecreasing function, by instead minimizing $(\beta +\alpha .x)/(\delta +\gamma .x)$. The primal minimizer $x^*$ remains the same for both problems. Consider minimizing $f(x,y)=((x+y)/(1+x+3y))^2$ subject to $2x-y\geq 1,x\geq 0,y\geq 0, x+y\geq 0$:

```wl
In[1]:=
g = (x + y) / (1 + x + 3y);
f = Function[{s}, s ^ 2];

In[2]:= {minval, Subscript[X, min]} = LinearFractionalOptimization[g, {2x - y ≥ 1, x ≥ 0, y ≥ 0, x + y ≥ 0}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[2]= {(1/3), {x -> (1/2), y -> 0}}
```

The true minimum value can be obtained by applying the function $f$ to the primal minimum value:

```wl
In[3]:= f[minval]

Out[3]= (1/9)
```
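The composition argument can be sketched in Python as well (illustrative only, assuming SciPy): minimize the inner fraction as a linear program via the Charnes-Cooper substitution, then apply $f$ to the result:

```python
from scipy.optimize import linprog

# Variables z = (X, Y, t) = t*(x, y, 1), t = 1/(1 + x + 3y) > 0.
c = [1, 1, 0]                            # minimize X + Y, the scaled x + y
A_ub = [[-2, 1, 1],                      # 2x - y >= 1  ->  2X - Y - t >= 0
        [-1, -1, 0]]                     # x + y >= 0   ->  X + Y >= 0
A_eq = [[1, 3, 1]]                       # normalization: X + 3Y + t == 1
res = linprog(c, A_ub=A_ub, b_ub=[0, 0], A_eq=A_eq, b_eq=[1],
              bounds=[(0, None)] * 3)
f = lambda s: s ** 2                     # the nondecreasing outer function
print(res.fun, f(res.fun))               # inner minimum 1/3, true minimum 1/9
```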

---

Minimize $f(z)=\sqrt{z}$, $z=(x+y)/(1+x+3y)$, $z\geq 0$, subject to $2x+y\geq 1, -2\leq y\leq 2$:

```wl
In[1]:=
g = (x + y) / (1 + x + 3y);
f = Function[{s}, Sqrt[s]];
```

Since $z\geq 0$, solve the problem once with $x+y\geq 0, 1+x+3y\geq 0$ and once with $x+y\leq 0, 1+x+3y\leq 0$:

```wl
In[2]:= res1 = LinearFractionalOptimization[(x + y) / (1 + x + 3y), {2x + y ≥ 1, x + y ≥ 0, 1 + x + 3y ≥ 0, -2 ≤ y ≤ 2}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[2]= {(3/13), {x -> -(1/2), y -> 2}}

In[3]:= res2 = LinearFractionalOptimization[(x + y) / (1 + x + 3y), {2x + y ≥ 1, x + y ≤ 0, 1 + x + 3y ≤ 0, -2 ≤ y ≤ 2}, {x, y}, {"PrimalMinimumValue", "PrimalMinimizerRules"}]

Out[3]= {0, {x -> 2, y -> -2}}
```

The optimal solution is the one of the two with the smaller transformed minimum value:

```wl
In[4]:= First@MinimalBy[{res1, res2}, f[First[#]]&]

Out[4]= {0, {x -> 2, y -> -2}}
```

#### Transportation Problem (1)

Choose a number of units to transport for five goods by minimizing cost and maximizing profit. The cost and profit of transporting one unit of each good are:

```wl
In[1]:=
costPerUnit = {100, 100, 100, 100, 100};
profitPerUnit = {115, 108, 130, 120, 110};
```

The weight of each unit of goods and the maximum units available for transport are:

```wl
In[2]:=
weightPerUnit = {100, 100, 110, 200, 115};
maxUnits = {30, 20, 30, 30, 30};
```

The minimum operating cost of running the truck is \$2000 for the trip. The total transportation cost is:

```wl
In[3]:= cost = 2000 + costPerUnit.x;
```

The truck makes a minimum profit of \$1000 if it rides empty. The total profit is:

```wl
In[4]:= profit = 1000 + profitPerUnit.x;
```

The truck can carry a maximum of 8000 lbs. The constraints on the truck are:

```wl
In[5]:=
wtConstraint = weightPerUnit.x\[VectorLessEqual]8000;
goodsConstraints = 0\[VectorLessEqual]x\[VectorLessEqual]maxUnits;
```

The optimal mix of goods to transport is found by minimizing the ratio of cost to profit:

```wl
In[6]:= res = LinearFractionalOptimization[cost / profit, {wtConstraint, goodsConstraints}, x∈Vectors[5, Integers]]

Out[6]= {x -> {29, 0, 30, 9, 0}}

In[7]:= TableForm[{x /. res}, TableHeadings -> {{"Units"}, Array[Subscript[x, #]&, 5]}]

Out[7]//TableForm=
|         | x1 | x2 | x3 | x4 | x5 |
| :------ | :- | :- | :- | :- | :- |
| "Units" | 29 | 0  | 30 | 9  | 0  |
```

#### Production Planning (1)

Find the mix of four products to manufacture that minimizes cost and maximizes profit. The cost and profit of manufacturing one unit of each product are:

```wl
In[1]:=
costPerUnit = {8, 5, 5, 8};
profitPerUnit = {10, 10, 10, 10};
```

To manufacture each product, the company needs a combination of five resources, given by:

```wl
In[2]:=
resourcesNeeded = {{5, 1, 0, 4}, {3, 3, 10, 3}, {7, 2, 5, 5}, {0, 5, 8, 0}, {10, 20, 0, 0}};
```

The maximum resources available are:

```wl
In[3]:= maxResources = {20, 20, 20, 50, 50};
```

The constraints on the resources and products are:

```wl
In[4]:=
resourceConstraint = resourcesNeeded.x\[VectorLessEqual]maxResources;
productConstraints = x\[VectorGreaterEqual]0;
```

The company has a minimum operating cost of \$100. The total manufacturing cost is:

```wl
In[5]:= totalCost = 100 + costPerUnit.x;
```

The total profit made by selling the products is:

```wl
In[6]:= totalProfit = profitPerUnit.x;
```

To find out the maximum units to manufacture, minimize the ratio of cost to profit:

```wl
In[7]:= res = LinearFractionalOptimization[totalCost / totalProfit, {resourceConstraint, productConstraints}, x∈Vectors[4, Integers]]

Out[7]= {x -> {0, 2, 0, 3}}
```

#### Resource Allocation (1)

Find the amount of electricity a company must send from its three power plants to four cities so as to maximize profit and minimize cost while meeting the cities' peak demands. The cost of transporting 1 million kWh of electricity from each plant to the various cities is:

```wl
In[1]:=
cost = {{8, 6, 10, 9}, {9, 12, 13, 7}, {14, 9, 16, 5}};
```

The profit that each power plant generates by selling 1 million kWh to each city is:

```wl
In[2]:=
profit = {{5, 4, 4, 3}, {6, 2, 3, 4}, {10, 5, 6, 2}};
```

Let $x_{i,j}$ represent the amount of electricity sent by plant $p_i$ to city $c_j$. The total cost of transporting electricity is $\sum_i\sum_j \text{cost}_{ij}\,x_{ij}$:

```wl
In[3]:= totalCost = Total[cost * x, 2];
```

The total profit made by the power company is $\sum_i\sum_j \text{profit}_{ij}\,x_{ij}$:

```wl
In[4]:= totalProfit = Total[profit * x, 2];
```

The cities have a peak demand of 45, 20, 30, 30 million kWh, respectively:

```wl
In[5]:= peakDemand = Total[x]\[VectorLessEqual]{45, 20, 30, 30};
```

The power plants can supply a minimum of 35, 50 and 40 million kWh of electricity, respectively:

```wl
In[6]:= minSupply = Total[x, {2}]\[VectorGreaterEqual]{35, 50, 40};
```

The plants can only give and not receive power from the cities:

```wl
In[7]:= negativePowerConstraint = x\[VectorGreaterEqual]0;
```

The optimal amount of electricity to send from each plant to each city can be found by minimizing the ratio of cost to profit:

```wl
In[8]:= res = LinearFractionalOptimization[totalCost / totalProfit, {peakDemand, minSupply, negativePowerConstraint}, x∈Matrices[{3, 4}]]

Out[8]= {x -> {{0, 5, 30, 0}, {20, 0, 0, 30}, {25, 15, 0, 0}}}
```

The breakdown of electricity supplied is:

```wl
In[9]:= TableForm[(x /. res), TableHeadings -> {{"Plant 1", "Plant 2", "Plant 3"}, {"City 1", "City 2", "City 3", "City 4"}}]

Out[9]//TableForm=
|           | "City 1" | "City 2" | "City 3" | "City 4" |
| :-------- | :------- | :------- | :------- | :------- |
| "Plant 1" | 0        | 5        | 30       | 0        |
| "Plant 2" | 20       | 0        | 0        | 30       |
| "Plant 3" | 25       | 15       | 0        | 0        |
```

#### Blending Problems (1)

Find the mix of old alloys needed to make a new alloy containing at least 60% lead and at least 35% tin while minimizing cost and maximizing profit. The foundry has four existing old alloys containing tin and lead:

```wl
In[1]:=
tinAvailable = {0.6, 0.4, 0.2, 0.3};
leadAvailable = {0.4, 0.6, 0.8, 0.7};
```

Let $w_i$ be the weight of the old alloy $i$. The new alloy is produced by a combination of the old alloys:

```wl
In[2]:= newAlloy = Total[w];
```

The cost to acquire one pound of each of the four alloys is:

```wl
In[3]:= costPerPound = {240, 180, 160, 210};
```

The cost to produce the new alloy is:

```wl
In[4]:= totalCost = costPerPound.w;
```

The foundry sells the new alloy for \$200 per pound. The total profit that the foundry makes is:

```wl
In[5]:= totalProfit = 200 newAlloy;
```

The foundry must produce at least 15 lbs of the new alloy containing at least 60% lead and at least 35% tin:

```wl
In[6]:=
wtConstraint = newAlloy ≥ 15;
leadConstraint = 0.6newAlloy ≤ leadAvailable.w;
tinConstraint = 0.35newAlloy ≤ tinAvailable.w;
```

The foundry has access to only 12, 15, 16 and 10 lbs of the four old alloys respectively:

```wl
In[7]:=
resourceConstraint = {VectorLessEqual[{w, {12, 15, 16, 10}}], 
	VectorGreaterEqual[{w, 0}]};
```

The optimal mix of the old alloys to use can be found by minimizing the ratio of cost to profit:

```wl
In[8]:= res = LinearFractionalOptimization[totalCost / totalProfit, {wtConstraint, tinConstraint, leadConstraint, resourceConstraint}, w]

Out[8]= {w -> {0., 11.25, 3.75, 0.}}
```

```wl
In[9]:= TableForm[{w /. res}, TableHeadings -> {{"lbs (x)"}, Array[Subscript[a, #]&, 4]}]

Out[9]//TableForm=
|           | a1 | a2    | a3   | a4 |
| :-------- | :- | :---- | :--- | :- |
| "lbs (x)" | 0. | 11.25 | 3.75 | 0. |
```

#### Investment Problems (2)

Find the allocation of a maximum of \$250,000 of capital to purchase two stocks and a bond such that the return on investment is maximized while reducing the cost of purchasing the stocks/bond. Let $x_1,x_2$ be the amount to invest in the two stocks and let $x_3$ be the amount to invest in the bond:

```wl
In[1]:= capitalConstraint = Subscript[x, 1] + Subscript[x, 2] + Subscript[x, 3] ≤ 250000;
```

The amount invested in the utilities stock cannot be more than \$40,000; the stock pays a 9% dividend:

```wl
In[2]:= utilityStockConstraint = 0 ≤ Subscript[x, 1] ≤ 40000;
```

The amount invested in the bond must be at least \$70,000; the bond pays 5% interest:

```wl
In[3]:= bondConstraint = Subscript[x, 3] ≥ 70000;
```

The total amount invested in the two stocks must be at least half the total amount invested:

```wl
In[4]:= investmentConstraint = Subscript[x, 1] + Subscript[x, 2] ≥ (Subscript[x, 1] + Subscript[x, 2] + Subscript[x, 3]) / 2;
```

The electronics stock pays a 4% dividend annually. The total return on investment is:

```wl
In[5]:= totalProfit = (0.09Subscript[x, 1] + 0.04Subscript[x, 2] + 0.05Subscript[x, 3]);
```

The cost associated with the investment is:

```wl
In[6]:= totalCost = (Subscript[x, 1] + Subscript[x, 2] + Subscript[x, 3]);
```

The optimal amount to be invested can be found by minimizing the ratio of cost to profit:

```wl
In[7]:= res = LinearFractionalOptimization[totalCost / totalProfit, {capitalConstraint, utilityStockConstraint, bondConstraint, investmentConstraint}, {Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}]

Out[7]= {Subscript[x, 1] -> 40000., Subscript[x, 2] -> 30000., Subscript[x, 3] -> 70000.}
```

The total amount invested and the annual dividends received from the investments are:

```wl
In[8]:= {totalCost, totalProfit} /. res

Out[8]= {140000., 8300.}
```
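For illustration only (assuming SciPy is available; this is not part of the Wolfram documentation), the same allocation can be recovered by normalizing the profit, the denominator of the ratio, to 1 via the Charnes-Cooper substitution and solving the resulting linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Variables z = (u1, u2, u3, t) = t*(x1, x2, x3, 1), t = 1/totalProfit > 0.
c = [1, 1, 1, 0]                          # minimize the scaled total cost
A_ub = [[1, 1, 1, -250000],               # x1 + x2 + x3 <= 250000
        [1, 0, 0, -40000],                # x1 <= 40000
        [-1, 0, 0, 0],                    # x1 >= 0
        [0, 0, -1, 70000],                # x3 >= 70000
        [-1, -1, 1, 0]]                   # x1 + x2 >= x3, half in stocks
A_eq = [[0.09, 0.04, 0.05, 0]]            # normalization: totalProfit == 1
res = linprog(c, A_ub=A_ub, b_ub=[0] * 5, A_eq=A_eq, b_eq=[1],
              bounds=[(None, None)] * 3 + [(0, None)])
x = res.x[:3] / res.x[3]                  # recover the investments
print(x, 1 / res.x[3])                    # {40000., 30000., 70000.}, profit 8300
```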

---

Find the optimal combination of investments that yields maximum profit while minimizing cost. The net present value and the costs associated with each investment are:

```wl
In[1]:=
value = {16000, 22000, 12000, 8000};
costs = {7000, 5000, 4000, 3000};
```

Let $x$ be a decision variable such that $x_i=1$ if investment $i$ is selected. The objective is to minimize costs while maximizing profits:

```wl
In[2]:=
objective = (costs.x) / (value.x);
binaryConstraint = 0\[VectorLessEqual]x\[VectorLessEqual]1;
```

A maximum of \$14,000 is available for investing:

```wl
In[3]:= investmentConstraint = costs.x ≤ 14000;
```

Solve the minimization problem to get the optimal combination of investments:

```wl
In[4]:=
LinearFractionalOptimization[objective, {investmentConstraint, 
	binaryConstraint}, x∈Vectors[4, Integers]]

Out[4]= {x -> {0, 1, 0, 0}}
```

#### Set-Covering Problems (3)

Find the optimal combination of doctors a hospital ER must keep on call so that the ER can perform a list of procedures. Each doctor can perform only certain procedures:

```wl
In[1]:=
TableForm[procedureChart = {{1, 0, 0, 1, 0, 0}, {1, 0, 0, 0, 1, 0}, {0, 1, 1, 0, 0, 0}, {1, 0, 0, 0, 0, 1}, {0, 1, 1, 0, 0, 1}, {0, 1, 0, 1, 0, 0}}, Rule[...]]

Out[1]//TableForm=
|               | "Doc 1" | "Doc 2" | "Doc 3" | "Doc 4" | "Doc 5" | "Doc 6" |
| :------------ | :------ | :------ | :------ | :------ | :------ | :------ |
| "Procedure 1" | 1       | 0       | 0       | 1       | 0       | 0       |
| "Procedure 2" | 1       | 0       | 0       | 0       | 1       | 0       |
| "Procedure 3" | 0       | 1       | 1       | 0       | 0       | 0       |
| "Procedure 4" | 1       | 0       | 0       | 0       | 0       | 1       |
| "Procedure 5" | 0       | 1       | 1       | 0       | 0       | 1       |
| "Procedure 6" | 0       | 1       | 0       | 1       | 0       | 0       |
```

The cost (per hour) for keeping each doctor on call is:

```wl
In[2]:= cost = {3000, 4000, 2000, 2000, 1000, 2000};
```

Let $x$ be a decision vector such that $x_i=1$ if doctor $i$ is on call. The objective is to minimize the cost while maximizing the number of doctors on call:

```wl
In[3]:= objective = (cost.x) / Total[x];
```

At least one doctor must be able to perform each procedure $i$:

```wl
In[4]:= procedureConstraint = procedureChart.x\[VectorGreaterEqual]1;
```

Find the combination of doctors to keep on call:

```wl
In[5]:= LinearFractionalOptimization[objective, {procedureConstraint, 0\[VectorLessEqual]x\[VectorLessEqual]1.}, x∈Vectors[6, Integers]]

Out[5]= {x -> {0, 0, 1, 1, 1, 1}}
```

---

Find the optimal number of fire stations that a city containing six districts must build such that there is at least one fire station within 15 minutes of each district. The time required to drive between districts is:

```wl
In[1]:=
TableForm[dist = {{0, 10, 20, 30, 30, 20}, {10, 0, 25, 35, 20, 10}, {20, 25, 0, 15, 30, 20}, {30, 35, 15, 0, 15, 25}, {20, 20, 30, 15, 0, 14}, {20, 10, 20, 25, 14, 0}}, Rule[...]]

Out[1]//TableForm=
|       | "D 1" | "D 2" | "D 3" | "D 4" | "D 5" | "D 6" |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| "D 1" | 0     | 10    | 20    | 30    | 30    | 20    |
| "D 2" | 10    | 0     | 25    | 35    | 20    | 10    |
| "D 3" | 20    | 25    | 0     | 15    | 30    | 20    |
| "D 4" | 30    | 35    | 15    | 0     | 15    | 25    |
| "D 5" | 20    | 20    | 30    | 15    | 0     | 14    |
| "D 6" | 20    | 10    | 20    | 25    | 14    | 0     |
```

The cost (in millions of dollars) to build a fire station in each district is:

```wl
In[2]:= cost = {12, 14, 9, 16, 5, 8};
```

Let $x$ be a decision vector such that $x_i=1$ if a fire station is built in district $i$. The objective is to minimize the average cost per fire station built:

```wl
In[3]:= objective = (cost.x) / Total[x];
```

At least one fire station must be within 15 minutes of each district:

```wl
In[4]:=
decisionMatrix = N@Table[If[dist[[i, j]] ≤ 15, 1, 0], {i, 6}, {j, 6}];
stationConstraint = decisionMatrix.x \[VectorGreaterEqual]1;
```

Find the districts in which a fire station will be built:

```wl
In[5]:=
res = LinearFractionalOptimization[objective, {stationConstraint, 
	0\[VectorLessEqual]x\[VectorLessEqual]1}, x∈Vectors[6, Integers]]

Out[5]= {x -> {1, 0, 1, 0, 1, 1}}
```

The total cost is:

```wl
In[6]:= (cost.x) /. res

Out[6]= 34
```

---

Find the optimal number of storage depots a company needs to build to distribute to six of its retail stores. The company has selected five possible sites. The cost to supply each store from each depot is:

```wl
In[1]:=
TableForm[transportCost = {{10, 15, 8, 11, 10, 12}, {15, 12, 11, 11, 10, 10}, {12, 10, 11, 12, 8, 15}, {10, 8, 15, 11, 12, 12}, {15, 12, 12, 10, 15, 8}}, Rule[...]]

Out[1]//TableForm=
|           | "Store 1" | "Store 2" | "Store 3" | "Store 4" | "Store 5" | "Store 6" |
| :-------- | :-------- | :-------- | :-------- | :-------- | :-------- | :-------- |
| "Depot 1" | 10        | 15        | 8         | 11        | 10        | 12        |
| "Depot 2" | 15        | 12        | 11        | 11        | 10        | 10        |
| "Depot 3" | 12        | 10        | 11        | 12        | 8         | 15        |
| "Depot 4" | 10        | 8         | 15        | 11        | 12        | 12        |
| "Depot 5" | 15        | 12        | 12        | 10        | 15        | 8         |
```

The construction cost (in millions) for each depot is:

```wl
In[2]:= overheadCost = {12, 12, 12, 12, 12};
```

Let $y$ be a decision variable such that $y_i=1$ if depot $i$ is built. Let $x_{i,j}$ represent the fraction of goods that depot $i$ supplies to store $j$. The total cost is:

```wl
In[3]:= totalCost = Total[Inactive[Times][transportCost, x], 2] + overheadCost.y;
```

The company expects to make a profit of \$1 million from each depot by renting out unused space:

```wl
In[4]:= totalProfit = Total[y];
```

Each store must receive all the goods:

```wl
In[5]:= storeConstraint = {Total[x] == 1, x\[VectorGreaterEqual]0};
```

Only the depots that are built can supply the goods to the stores:

```wl
In[6]:= depotConstraint = Table[Indexed[x, {i, j}] ≤ Indexed[y, i], {i, 5}, {j, 6}];
```

Find which of the five depots should be constructed:

```wl
In[7]:=
res = LinearFractionalOptimization[N[totalCost / totalProfit], {storeConstraint, depotConstraint, 0\[VectorLessEqual]y\[VectorLessEqual]1}, {x∈Matrices[{5, 6}], 
	y∈Vectors[5, Integers]}]

Out[7]= {x -> {{1., 0., 1., 0., 0., 0.}, {0., 0., 0., 0., 0., 0.}, {0., 0., 0., 0., 1., 0.}, {0., 1., 0., 0., 0., 0.}, {0., 0., 0., 1., 0., 1.}}, y -> {1, 1, 1, 1, 1}}
```

Find the depots that will be built:

```wl
In[8]:= Flatten@Position[y /. res, 1]

Out[8]= {1, 2, 3, 4, 5}
```

The depots that are supplying the retail stores are:

```wl
In[9]:= TableForm[Round[x /. res], Rule[...]]

Out[9]//TableForm=
|           | "Store 1" | "Store 2" | "Store 3" | "Store 4" | "Store 5" | "Store 6" |
| :-------- | :-------- | :-------- | :-------- | :-------- | :-------- | :-------- |
| "Depot 1" | 1         | 0         | 1         | 0         | 0         | 0         |
| "Depot 2" | 0         | 0         | 0         | 0         | 0         | 0         |
| "Depot 3" | 0         | 0         | 0         | 0         | 1         | 0         |
| "Depot 4" | 0         | 1         | 0         | 0         | 0         | 0         |
| "Depot 5" | 0         | 0         | 0         | 1         | 0         | 1         |
```

#### Traveling Salesman Problem (1)

Find the path that a salesman should take through $n$ cities such that each city is visited exactly once, minimizing the distance traveled while maximizing travel cost savings. Generate the locations:

```wl
In[1]:=
n = 15;
pts = BlockRandom[RandomReal[100, {n, 2}]];
```

Let $c_{i,j}$ be the distance between city $i$ and city $j$. Let $x$ be a decision variable such that if $x_{i,j}=1$, the path goes from city $i$ to city $j$:

```wl
In[2]:=
c = DistanceMatrix[pts, pts];
distanceCovered = Total[Inactive[Times][c, x], 2];
```

The travel budget to go from one city to another is \$15. If two cities are within 50 miles of each other, the salesman pays \$5; otherwise he pays a flat rate of \$10. The total savings are:

```wl
In[3]:=
travelSavings = Table[If[c[[i, j]] ≤ 50, 15 - 5, If[i == j, 0, 15 - 10]], {i, n}, {j, n}];
savings = Total[Inactive[Times][travelSavings, x], 2];
```

The objective is to minimize distance while maximizing the savings:

```wl
In[4]:= objective = distanceCovered / savings;
```

The salesman can arrive from exactly one city and can depart to exactly one other city:

```wl
In[5]:= travelConstraint = {Total[x] == 1, Total[x, {2}] == 1};
```

The path cannot go from a city back to the same city:

```wl
In[6]:= cityConstraint = Table[Indexed[x, {i, i}] == 0, {i, n}];
```

The salesman must travel to all the locations in a single tour:

```wl
In[7]:=
singleTourConstraint = Flatten[Table[If[i == j, {}, 
	Indexed[u, i] - Indexed[u, j] + n * Indexed[x, {i, j}] ≤ n - 1], {i, 2, n}, {j, 2, n}]];
```

The decision variable is a binary variable, and the dummy variable satisfies $0\leq u_i\leq n-1, i=2,\ldots,n$:

```wl
In[8]:= variableConstraint = {0\[VectorLessEqual]x\[VectorLessEqual]1, 0\[VectorLessEqual]u\[VectorLessEqual]n - 1};
```

Find the path that minimizes the ratio of distance to savings:

```wl
In[9]:=
res = LinearFractionalOptimization[objective, {travelConstraint, 
	cityConstraint, singleTourConstraint, variableConstraint}, {u∈Vectors[n], x∈Matrices[{n, n}, Integers]}] ;
Short[res, 2]

Out[9]//Short= {u -> {0., 14., 13., 5., 0., 6., «3», 9., 1., 2., 12., 11., 3.}, x -> «1»}
```

Extract the path:

```wl
In[10]:= FindCycle[AdjacencyGraph[x /. res], {n}][[1]]

Out[10]= {1\[DirectedEdge]5, 5\[DirectedEdge]11, 11\[DirectedEdge]12, 12\[DirectedEdge]15, 15\[DirectedEdge]8, 8\[DirectedEdge]4, 4\[DirectedEdge]6, 6\[DirectedEdge]7, 7\[DirectedEdge]10, 10\[DirectedEdge]9, 9\[DirectedEdge]14, 14\[DirectedEdge]13, 13\[DirectedEdge]3, 3\[DirectedEdge]2, 2\[DirectedEdge]1}
```

The distance traveled is:

```wl
In[11]:= Activate[distanceCovered /. res]

Out[11]= 350.103
```

The total savings are:

```wl
In[12]:= Activate[savings /. res]

Out[12]= 140
```
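The value of the objective that was actually minimized, the distance covered per dollar saved, can be recovered directly (using `objective` and `res` as defined above):

```wl
In[13]:= Activate[objective /. res]
```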

### Properties & Relations (5)

``LinearFractionalOptimization`` gives the global minimum of the objective function:

```wl
In[1]:= res = LinearFractionalOptimization[(1 + x + y) / (1 - 3x - 5y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}, {x, y}]

Out[1]= {x -> (4/3), y -> 1}

In[2]:= Show[Plot3D[(1 + x + y) / (1 - 3x - 5y), {x, y}∈ImplicitRegion[3x - 3y ≥ 1 && x ≥ 1 && y ≥ 1, {x, y}]], Graphics3D[{PointSize[0.05], Point[{x, y, (1 + x + y) / (1 - 3x - 5y)} /. res]}]]

Out[2]= [image]
```

---

``Minimize`` also gives global exact results for linear fractional optimization problems:

```wl
In[1]:= Minimize[{(1 + x + y) / (1 - 3x - 5y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}}, {x, y}]

Out[1]= {-(5/12), {x -> (4/3), y -> 1}}
```

---

``NMinimize`` can be used to obtain inexact results using global methods:

```wl
In[1]:= NMinimize[{(1 + x + y) / (1 - 3x - 5y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}}, {x, y}]

Out[1]= {-0.416667, {x -> 1.33333, y -> 1.}}
```

---

``FindMinimum`` can be used to obtain inexact results using local methods:

```wl
In[1]:= FindMinimum[{(1 + x + y) / (1 - 3x - 5y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}}, {x, y}]

Out[1]= {-0.416667, {x -> 1.33333, y -> 1.}}
```

---

``LinearOptimization`` is a special case of ``LinearFractionalOptimization``:

```wl
In[1]:= LinearFractionalOptimization[(1 + x + y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}, {x, y}]

Out[1]= {x -> (4/3), y -> 1}

In[2]:= LinearOptimization[(1 + x + y), {3x - 3y ≥ 1, x ≥ 1, y ≥ 1}, {x, y}]

Out[2]= {x -> (4/3), y -> 1}
```

### Possible Issues (6)

Constraints specified using strict inequalities may not be satisfied:

```wl
In[1]:= res = LinearFractionalOptimization[(x + y) / (1 + 3x + 2y), {x + y > 2, x > 0, y > 0}, {x, y}]

Out[1]= {x -> 2, y -> 0}
```

The solution given is $\inf\{(x+y)/(1+3x+2y) \mid x+y>2,\, x>0,\, y>0\}$:

```wl
In[2]:= {x + y > 2, x > 0, y > 0} /. res

Out[2]= {False, True, False}
```
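If a strictly feasible point is required, one option is to tighten the strict inequalities by a small margin; the tolerance $10^{-6}$ used here is an arbitrary illustrative choice:

```wl
In[3]:= LinearFractionalOptimization[(x + y) / (1 + 3x + 2y), {x + y ≥ 2 + 10^-6, x ≥ 10^-6, y ≥ 10^-6}, {x, y}]
```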

---

The minimum value of an empty set or infeasible problem is defined to be $\infty$:

```wl
In[1]:= LinearFractionalOptimization[x / (x + 1), {x ≤ 0, x ≥ 1}, x, "PrimalMinimumValue"]
```

LinearFractionalOptimization::nsolc: There are no points that satisfy the constraints.

```wl
Out[1]= ∞
```

The minimizer is ``Indeterminate``:

```wl
In[2]:= LinearFractionalOptimization[x / (x + 1), {x ≤ 0, x ≥ 1}, x]
```

LinearFractionalOptimization::lpsnf: No solution can be found that satisfies the constraints.

```wl
Out[2]= {x -> Indeterminate}
```

---

The minimum value for an unbounded set or unbounded problem is $-\infty$:

```wl
In[1]:= LinearFractionalOptimization[(1 + x + 2y) / (1 + x + y), x + y ≤ 0, {x, y}, "PrimalMinimumValue"]
```

LinearFractionalOptimization::ubnd: The problem is unbounded.

```wl
Out[1]= -∞
```

The minimizer is ``Indeterminate``:

```wl
In[2]:= LinearFractionalOptimization[(1 + x + 2y) / (1 + x + y), x + y ≤ 0, {x, y}]
```

LinearFractionalOptimization::ubnd: The problem is unbounded.

```wl
Out[2]= {x -> Indeterminate, y -> Indeterminate}
```

---

Dual-related solution properties for mixed-integer problems may not be available:

```wl
In[1]:= LinearFractionalOptimization[(x + y) / (1 + x), {x + y == 2.1, 1 ≤ y ≤ 2}, {x∈Integers, y}, {"DualMaximumValue", "DualMaximizer", "DualityGap"}]

Out[1]= {Missing["NotAvailable"], Missing["NotAvailable"], Missing["NotAvailable"]}
```
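For the corresponding continuous problem, where all variables are real, dual-related properties can be requested as usual:

```wl
In[2]:= LinearFractionalOptimization[(x + y) / (1 + x), {x + y == 2.1, 1 ≤ y ≤ 2}, {x, y}, {"DualMaximumValue", "DualityGap"}]
```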

---

Mixed-integer problems can be solved only in machine precision:

```wl
In[1]:= LinearFractionalOptimization[(x + y) / (1 + x), {x + y == 21 / 10, 1 ≤ y ≤ 2}, {x∈Integers, y}]
```

LinearFractionalOptimization::lpip: Warning: integer linear fractional programming will use a machine-precision approximation of the inputs.

```wl
Out[1]= {x -> 1, y -> 1.1}
```
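For exact results on small problems with integer variables, ``Minimize`` can be used instead, typically at greater computational cost; here the integrality of `x` is stated as a constraint:

```wl
In[2]:= Minimize[{(x + y) / (1 + x), x + y == 21/10 && 1 ≤ y ≤ 2 && x ∈ Integers}, {x, y}]
```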

---

Constraints with complex values need to be specified using vector inequalities:

```wl
In[1]:= LinearFractionalOptimization[1 / Re[1 + 3z], {z\[VectorGreaterEqual]1 + I, z\[VectorLessEqual]2 + 2I}, z∈Complexes]

Out[1]= {z -> 2 + I}
```

Just using ``GreaterEqual`` or ``LessEqual`` will not work:

```wl
In[2]:= LinearFractionalOptimization[1 / Re[1 + 3z], {z ≥ 1 + I, z ≤ 2 + 2I}, z∈Complexes]
```

GreaterEqual::nord: Invalid comparison with 1+I attempted.

LessEqual::nord: Invalid comparison with 2+2 I attempted.

GreaterEqual::nord: Invalid comparison with 1+I attempted.

```wl
Out[2]= LinearFractionalOptimization[1/(1 + 3 Re[z]), {z ≥ 1 + I, z ≤ 2 + 2 I}, z∈ℂ]
```

## See Also

* [`LinearOptimization`](https://reference.wolfram.com/language/ref/LinearOptimization.en.md)
* [`QuadraticOptimization`](https://reference.wolfram.com/language/ref/QuadraticOptimization.en.md)
* [`FindMinimum`](https://reference.wolfram.com/language/ref/FindMinimum.en.md)
* [`NMinimize`](https://reference.wolfram.com/language/ref/NMinimize.en.md)
* [`Minimize`](https://reference.wolfram.com/language/ref/Minimize.en.md)

## Tech Notes

* [Optimization Method Framework](https://reference.wolfram.com/language/OptimizationMethodFramework/tutorial/OptimizationMethodFramework.en.md)

## Related Guides

* [Convex Optimization](https://reference.wolfram.com/language/guide/ConvexOptimization.en.md)
* [Optimization](https://reference.wolfram.com/language/guide/Optimization.en.md)
* [Symbolic Vectors, Matrices and Arrays](https://reference.wolfram.com/language/guide/SymbolicArrays.en.md)

## History

* [Introduced in 2019 (12.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn120.en.md) \| [Updated in 2020 (12.1)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn121.en.md) ▪ [2020 (12.2)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn122.en.md)