LeastSquares

LeastSquares[m,b]
finds an x that solves the linear least-squares problem for the matrix equation m.x==b.
LeastSquares[a,b]
finds an x that solves the linear least-squares problem for the array equation a.x==b.
Details and Options

- LeastSquares[m,b] gives a vector x that minimizes Norm[m.x-b].
- The vector x is uniquely determined by the minimization only if Length[x]==MatrixRank[m].
- The argument b can be a matrix, in which case the least-squares minimization is done independently for each column in b; the result is the x that minimizes Norm[m.x-b,"Frobenius"].
- LeastSquares works on both numerical and symbolic matrices, as well as SparseArray objects.
- For an n1×…×nk×m array a and an n1×…×nk×d1×…×dl array b, LeastSquares[a,b] gives an m×d1×…×dl array x, which minimizes Norm[Flatten[a.x-b]].
- The option Method->method may be used to specify the method for LeastSquares to use. Possible settings for method include:
Automatic              choose the method automatically
"Direct"               use a direct method for dense or sparse matrices
"IterativeRefinement"  use iterative refinement to get an improved solution for dense matrices
"LSQR"                 use the LSQR iterative method for dense or sparse machine number matrices
"Krylov"               use an iterative method for sparse machine number matrices
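For example, a minimal sketch (assuming a hypothetical random dense machine-number system) comparing two of the settings above:

m = RandomReal[1, {200, 50}];
b = RandomReal[1, 200];
xDirect = LeastSquares[m, b, Method -> "Direct"];
xLSQR = LeastSquares[m, b, Method -> "LSQR"];
Norm[xDirect - xLSQR]  (* small: the methods agree up to roundoff and iteration tolerance *)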
Examples
Basic Examples (2)
Summary of the most common use cases
Solve a simple least-squares problem:

https://wolfram.com/xid/0d6hs26jtg-vrc
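A minimal sketch of such a problem, assuming a hypothetical 3×2 system with no exact solution:

m = {{1, 1}, {1, 2}, {1, 3}};
b = {6, 0, 0};
LeastSquares[m, b]  (* => {8, -3}, the x minimizing Norm[m . x - b] *)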

This finds a tuple that minimizes Norm[m.x-b]:

https://wolfram.com/xid/0d6hs26jtg-tf519p

Use LeastSquares to minimize Norm[m.x-b]:

https://wolfram.com/xid/0d6hs26jtg-cmu0c9

Compare to general minimization:

https://wolfram.com/xid/0d6hs26jtg-g5rxbt
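A sketch of that comparison, assuming the same hypothetical system as above and minimizing the squared residual explicitly:

m = {{1, 1}, {1, 2}, {1, 3}}; b = {6, 0, 0};
Minimize[(m . {u, v} - b) . (m . {u, v} - b), {u, v}]
(* => {6, {u -> 8, v -> -3}}, matching LeastSquares[m, b] *)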

Note there is no solution to m.x==b, so x may be regarded as the best approximate solution:

https://wolfram.com/xid/0d6hs26jtg-g81vfi


Scope (12)
Survey of the scope of standard use cases
Basic Uses (7)
Find the least squares for a machine-precision matrix:

https://wolfram.com/xid/0d6hs26jtg-wa9z6a

Least squares for a complex matrix:

https://wolfram.com/xid/0d6hs26jtg-dvjkjg

Use LeastSquares for an exact non-square matrix:

https://wolfram.com/xid/0d6hs26jtg-dz63fe
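A sketch with hypothetical exact input; the result is exact as well:

m = {{1, 2}, {3, 4}, {5, 6}};
b = {1, 1, 0};
LeastSquares[m, b]  (* => {-5/3, 17/12}, an exact rational solution *)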

Least squares for an arbitrary-precision matrix:

https://wolfram.com/xid/0d6hs26jtg-lmgkcr

Use LeastSquares with a symbolic matrix:

https://wolfram.com/xid/0d6hs26jtg-qa6g3o

The least squares for a large numerical matrix is computed efficiently:

https://wolfram.com/xid/0d6hs26jtg-yfon

https://wolfram.com/xid/0d6hs26jtg-c3snma

In LeastSquares[m,b], b can be a matrix:

https://wolfram.com/xid/0d6hs26jtg-1cxdcf

https://wolfram.com/xid/0d6hs26jtg-haixzi


https://wolfram.com/xid/0d6hs26jtg-b468ce

Each column in the result equals the solution found by using the corresponding column in b as input:

https://wolfram.com/xid/0d6hs26jtg-z62dde


https://wolfram.com/xid/0d6hs26jtg-66b8sk
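A sketch of this column-wise behavior, assuming hypothetical random input:

m = RandomReal[1, {5, 3}];
b = RandomReal[1, {5, 2}];
x = LeastSquares[m, b];
Norm[x[[All, 2]] - LeastSquares[m, b[[All, 2]]]]  (* ~ 0: each column is solved independently *)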

Special Matrices (4)
Solve a least-squares problem for a sparse matrix:

https://wolfram.com/xid/0d6hs26jtg-gldyll


https://wolfram.com/xid/0d6hs26jtg-5bakr9


https://wolfram.com/xid/0d6hs26jtg-ldhm86


https://wolfram.com/xid/0d6hs26jtg-dcs2zx
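A sketch with a hypothetical banded SparseArray system:

m = SparseArray[{Band[{1, 1}] -> 2., Band[{2, 1}] -> -1.}, {1000, 999}];
b = RandomReal[1, 1000];
x = LeastSquares[m, b];
Norm[m . x - b]  (* residual norm of the sparse least-squares solution *)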

Solve the least-squares problem with structured matrices:

https://wolfram.com/xid/0d6hs26jtg-n0brac


https://wolfram.com/xid/0d6hs26jtg-e9k94d

Use a different type of matrix structure:

https://wolfram.com/xid/0d6hs26jtg-d2yyup


https://wolfram.com/xid/0d6hs26jtg-q2mqfy


https://wolfram.com/xid/0d6hs26jtg-ymu7gy

LeastSquares[IdentityMatrix[n],b] gives the vector b:

https://wolfram.com/xid/0d6hs26jtg-7oz5t
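A quick check of this identity for a hypothetical vector:

b = {2, 3, 5, 7};
LeastSquares[IdentityMatrix[4], b] == b  (* => True *)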

Least squares of HilbertMatrix:

https://wolfram.com/xid/0d6hs26jtg-drbhsj

https://wolfram.com/xid/0d6hs26jtg-bslcfy

Options (1)
Common values & functionality for each option
Tolerance (1)
m is a 20×20 Hilbert matrix, and b is a vector such that the solution of m.x==b is known:

https://wolfram.com/xid/0d6hs26jtg-b49opb
With the default tolerance, numerical roundoff is limited, so errors are distributed:

https://wolfram.com/xid/0d6hs26jtg-cu33bh

With Tolerance->0, numerical roundoff can introduce excessive error:

https://wolfram.com/xid/0d6hs26jtg-c1y4nq

Specifying a higher tolerance will limit roundoff errors at the expense of a larger residual:

https://wolfram.com/xid/0d6hs26jtg-c592f9


https://wolfram.com/xid/0d6hs26jtg-df9xlq
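A sketch of the trade-off, assuming a right-hand side built so that the true solution is a vector of ones:

m = N[HilbertMatrix[20]];
xTrue = ConstantArray[1., 20];
b = m . xTrue;
Norm[LeastSquares[m, b] - xTrue]                  (* moderate error with the default tolerance *)
Norm[LeastSquares[m, b, Tolerance -> 0] - xTrue]  (* typically far larger: roundoff is amplified *)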

Applications (9)
Sample problems that can be solved with this function
Geometry of Least Squares (4)
LeastSquares[m,b] can be understood as finding the solution x to m.x==b⟂, where b⟂ is the orthogonal projection of b onto the column space of m. Consider the following m and b:

https://wolfram.com/xid/0d6hs26jtg-u38dy5
Find an orthonormal basis for the space spanned by the columns of m:

https://wolfram.com/xid/0d6hs26jtg-08jzaj

Compute the orthogonal projection b⟂ of b onto the space spanned by the basis vectors:

https://wolfram.com/xid/0d6hs26jtg-ba4m4k

Visualize b, its projections onto the basis vectors, and b⟂:

https://wolfram.com/xid/0d6hs26jtg-jtf0rc


https://wolfram.com/xid/0d6hs26jtg-oc1jie

This is the same result as given by LeastSquares:

https://wolfram.com/xid/0d6hs26jtg-qxnruu
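A compact sketch of the whole construction, assuming a hypothetical exact m and b:

m = {{1, 1}, {1, 2}, {1, 3}}; b = {6, 0, 0};
basis = Orthogonalize[Transpose[m]];         (* orthonormal basis of the column space *)
bPerp = Total[(# . b) # & /@ basis];         (* orthogonal projection of b onto Col(m) *)
LinearSolve[m, bPerp] == LeastSquares[m, b]  (* => True *)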

Compare and explain the answers returned by LeastSquares[m,b] and LinearSolve[m,b⟂] for the following m and b:

https://wolfram.com/xid/0d6hs26jtg-u4jun1
Find an orthonormal basis for the space spanned by the columns of m:

https://wolfram.com/xid/0d6hs26jtg-9hxpti

A zero vector is returned because the rank of the matrix is less than the number of columns:

https://wolfram.com/xid/0d6hs26jtg-2zwvvj

Compute the orthogonal projection b⟂ of b onto the space spanned by the basis vectors:

https://wolfram.com/xid/0d6hs26jtg-lhseui


https://wolfram.com/xid/0d6hs26jtg-k07pbj

Find the solution returned by LeastSquares:

https://wolfram.com/xid/0d6hs26jtg-yn3nnw

While x and xPerp are different, both solve the least-squares problem because m.x==m.xPerp:

https://wolfram.com/xid/0d6hs26jtg-0erd5i

The two solutions differ by an element of NullSpace[m]:

https://wolfram.com/xid/0d6hs26jtg-httg1e


https://wolfram.com/xid/0d6hs26jtg-gj63l1

Use the matrix projection operators for a matrix with linearly independent columns to find LeastSquares[m,b] for the following m and b:

https://wolfram.com/xid/0d6hs26jtg-e0tq9j
The projection operator onto the column space of m is p=m.Inverse[ConjugateTranspose[m].m].ConjugateTranspose[m]:

https://wolfram.com/xid/0d6hs26jtg-shsagp

The solution to the least-squares problem is then the unique solution to m.x==p.b:

https://wolfram.com/xid/0d6hs26jtg-hxfodv

Confirm using LeastSquares:

https://wolfram.com/xid/0d6hs26jtg-xa3bow
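A sketch of the projection-operator computation, assuming a hypothetical m with independent columns:

m = {{1, 0}, {1, 1}, {0, 1}}; b = {1, 2, 4};
p = m . Inverse[ConjugateTranspose[m] . m] . ConjugateTranspose[m];  (* projection onto Col(m) *)
LinearSolve[m, p . b] == LeastSquares[m, b]  (* => True *)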

Compare the solutions found using LeastSquares[m,b] and LinearSolve together with the normal equations ConjugateTranspose[m].m.x==ConjugateTranspose[m].b for the following m and b:

https://wolfram.com/xid/0d6hs26jtg-1gwbpy
Solve using LeastSquares:

https://wolfram.com/xid/0d6hs26jtg-0r9m9i

Solve using LinearSolve and the normal equations ConjugateTranspose[m].m.x==ConjugateTranspose[m].b:

https://wolfram.com/xid/0d6hs26jtg-zmen3b

While x and xNormal are different, both solve the least-squares problem because m.x==m.xNormal:

https://wolfram.com/xid/0d6hs26jtg-e06k5z

The two solutions differ by an element of NullSpace[m]:

https://wolfram.com/xid/0d6hs26jtg-zgiidc


https://wolfram.com/xid/0d6hs26jtg-wdga0u
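A sketch of the normal-equations route, assuming a hypothetical m with full column rank (so the comparison is exact):

m = {{1, 0}, {1, 1}, {0, 1}}; b = {1, 2, 4};
LinearSolve[ConjugateTranspose[m] . m, ConjugateTranspose[m] . b] == LeastSquares[m, b]
(* => True *)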

Curve and Parameter Fitting (5)
LeastSquares can be used to find a best-fit curve to data. Consider the following data:

https://wolfram.com/xid/0d6hs26jtg-v42zji

Extract the x and y coordinates from the data:

https://wolfram.com/xid/0d6hs26jtg-i4v6jo
Let m have the columns 1 and x, so that minimizing Norm[m.{a,c}-y] will be fitting to a line y==a+c x:

https://wolfram.com/xid/0d6hs26jtg-pkve26
Get the coefficients a and c for a linear least-squares fit:

https://wolfram.com/xid/0d6hs26jtg-7p26fs

Verify the coefficients using Fit:

https://wolfram.com/xid/0d6hs26jtg-480xbe

Plot the best-fit curve along with the data:

https://wolfram.com/xid/0d6hs26jtg-wzqna8
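The whole line-fitting workflow in one sketch, assuming hypothetical data:

data = {{0, 1.1}, {1, 1.9}, {2, 3.2}, {3, 3.8}};
{xs, ys} = Transpose[data];
m = Transpose[{ConstantArray[1, Length[xs]], xs}];  (* columns 1 and x *)
{a, c} = LeastSquares[m, ys];                       (* intercept and slope *)
Fit[data, {1, x}, x]                                (* the same line, a + c x *)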

Find the best-fit parabola to the following data:

https://wolfram.com/xid/0d6hs26jtg-x7pawr

Extract the x and y coordinates from the data:

https://wolfram.com/xid/0d6hs26jtg-w5dwto
Let m have the columns 1, x and x^2, so that minimizing Norm[m.{a,c,d}-y] will be fitting to a parabola y==a+c x+d x^2:

https://wolfram.com/xid/0d6hs26jtg-bmqc8e
Get the coefficients a, c and d for a least-squares fit:

https://wolfram.com/xid/0d6hs26jtg-z5bw8q

Verify the coefficients using Fit:

https://wolfram.com/xid/0d6hs26jtg-pp5xfy

Plot the best-fit curve along with the data:

https://wolfram.com/xid/0d6hs26jtg-04d0qo

A healthy child’s systolic blood pressure p (in millimeters of mercury) and weight w (in pounds) are approximately related by the equation p==a+c Log[w]. Use the following experimental data points to estimate the systolic blood pressure of a healthy child weighing 100 pounds:

https://wolfram.com/xid/0d6hs26jtg-rhp8hr
Use DesignMatrix to construct the matrix with columns 1 and Log[w]:

https://wolfram.com/xid/0d6hs26jtg-sej9as

Extract the pressure values from the data:

https://wolfram.com/xid/0d6hs26jtg-7eslu4

The least-squares solution gives the fitted parameters:

https://wolfram.com/xid/0d6hs26jtg-lagalm

Substitute the parameters into the model:

https://wolfram.com/xid/0d6hs26jtg-2pyaqn

Then the expected blood pressure of a child weighing 100 pounds is roughly:

https://wolfram.com/xid/0d6hs26jtg-ucdp2h

Visualize the best-fit curve and the data:

https://wolfram.com/xid/0d6hs26jtg-w67bb
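A condensed sketch of this workflow with hypothetical measurements:

data = {{44., 91.}, {61., 98.}, {81., 103.}, {113., 110.}, {131., 112.}};  (* {w, p} pairs *)
dm = DesignMatrix[data, Log[w], w];          (* columns 1 and Log[w] *)
{a, c} = LeastSquares[dm, data[[All, 2]]];
a + c Log[100.]                              (* estimated pressure at w == 100 *)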

According to Kepler’s first law, a comet's orbit satisfies r==β+e (r Cos[θ]), where β is a constant and e is the eccentricity. The eccentricity determines the type of orbit, with 0<=e<1 for an ellipse, e==1 for a parabola, and e>1 for a hyperbola. Use the following observational data to determine the type of orbit of the comet and predict its distance from the Sun at θ==4.6:

https://wolfram.com/xid/0d6hs26jtg-8gp4rg
To find β and e, first use DesignMatrix to create the matrix whose columns are 1 and r Cos[θ]:

https://wolfram.com/xid/0d6hs26jtg-iqety0

Use LeastSquares to find the β and e that minimize the error in r==β+e (r Cos[θ]) for the design matrix:

https://wolfram.com/xid/0d6hs26jtg-qk0a9q

Since e<1, the orbit is elliptical and there is a unique value of r for each value of θ:

https://wolfram.com/xid/0d6hs26jtg-evfsdb


https://wolfram.com/xid/0d6hs26jtg-2c9qoz

Evaluating the function at θ==4.6 gives the expected distance:

https://wolfram.com/xid/0d6hs26jtg-y9lbsk


https://wolfram.com/xid/0d6hs26jtg-blwk4r

Fit data with piecewise cubic basis functions using a sparse design matrix. Extract the x and y coordinates from the data:

https://wolfram.com/xid/0d6hs26jtg-edzzhy
Define cubic basis functions centered at t with support on the interval [t-2,t+2]:

https://wolfram.com/xid/0d6hs26jtg-b2te9

https://wolfram.com/xid/0d6hs26jtg-m49rz4

Set up a sparse design matrix for basis functions centered at 0, 1, ..., 10:

https://wolfram.com/xid/0d6hs26jtg-d7c2i8

Solve the least-squares problem:

https://wolfram.com/xid/0d6hs26jtg-ks2a15

Visualize the data with the best-fit piecewise cubic, which is the linear combination of the basis functions with the fitted coefficients:

https://wolfram.com/xid/0d6hs26jtg-cpmm0w

Properties & Relations (12)
Properties of the function, and connections to other functions
If m.x==b can be solved, LeastSquares is equivalent to LinearSolve:

https://wolfram.com/xid/0d6hs26jtg-5pxvb

https://wolfram.com/xid/0d6hs26jtg-inhso6

If x=LeastSquares[m,b] and n lies in NullSpace[m], x+n is also a least-squares solution:

https://wolfram.com/xid/0d6hs26jtg-f1ggfa

LeastSquares[m,b] solves m.x==b⟂, with b⟂ the orthogonal projection of b onto the column space of m:

https://wolfram.com/xid/0d6hs26jtg-dutipv

Equality was guaranteed because this particular matrix has a trivial null space:

https://wolfram.com/xid/0d6hs26jtg-sy6180

If m is real valued, x=LeastSquares[m,b] obeys the normal equations Transpose[m].m.x==Transpose[m].b:

https://wolfram.com/xid/0d6hs26jtg-b4pjjt

For a complex-valued matrix, the equations are ConjugateTranspose[m].m.x==ConjugateTranspose[m].b:

https://wolfram.com/xid/0d6hs26jtg-fs43sf

Given x==LeastSquares[m,b], m.x-b lies in NullSpace[ConjugateTranspose[m]]:

https://wolfram.com/xid/0d6hs26jtg-y614cu

The null space is two-dimensional:

https://wolfram.com/xid/0d6hs26jtg-erkgo9

m.x-b lies in the span of the two vectors, as expected:

https://wolfram.com/xid/0d6hs26jtg-twypky

LeastSquares and PseudoInverse can both be used to solve the least-squares problem:

https://wolfram.com/xid/0d6hs26jtg-h3z3e9

https://wolfram.com/xid/0d6hs26jtg-e5dgz8

LeastSquares and QRDecomposition can both be used to solve the least-squares problem:

https://wolfram.com/xid/0d6hs26jtg-rnilv3

https://wolfram.com/xid/0d6hs26jtg-tq1k3w
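A sketch of the QR route, assuming a hypothetical machine-precision system; since m==ConjugateTranspose[q].r, the least-squares solution solves r.x==q.b:

m = N[{{1, 1}, {1, 2}, {1, 3}}]; b = {6., 0., 0.};
{q, r} = QRDecomposition[m];
LinearSolve[r, q . b]  (* => {8., -3.}, matching LeastSquares[m, b] *)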

Let m be a matrix with an empty null space:

https://wolfram.com/xid/0d6hs26jtg-c6zrbw

https://wolfram.com/xid/0d6hs26jtg-50m78w

For a vector b, LeastSquares[m,b] is equivalent to ArgMin[Norm[m.x-b],x]:

https://wolfram.com/xid/0d6hs26jtg-f096gq

https://wolfram.com/xid/0d6hs26jtg-fvs5bz

It is also equivalent to ArgMin[Norm[m.x-b,"Frobenius"],x]:

https://wolfram.com/xid/0d6hs26jtg-fxc5e

Let m be a matrix with an empty null space:

https://wolfram.com/xid/0d6hs26jtg-3k8hw8

https://wolfram.com/xid/0d6hs26jtg-59ec5g

For a matrix b, LeastSquares is equivalent to ArgMin[Norm[m.x-b,"Frobenius"],x]:

https://wolfram.com/xid/0d6hs26jtg-beu7hq

https://wolfram.com/xid/0d6hs26jtg-f3t3jn


https://wolfram.com/xid/0d6hs26jtg-ks2ymj

If b is a matrix, each column in LeastSquares[m,b] is the result for the corresponding column in b:

https://wolfram.com/xid/0d6hs26jtg-5hlsa1

https://wolfram.com/xid/0d6hs26jtg-s27w2o

m is a 5×2 matrix, and b is a length-5 vector:

https://wolfram.com/xid/0d6hs26jtg-elv4v
Solve the least-squares problem:

https://wolfram.com/xid/0d6hs26jtg-fl09xr


https://wolfram.com/xid/0d6hs26jtg-mfo79

It also gives the coefficients for the line with least-squares distance to the points:

https://wolfram.com/xid/0d6hs26jtg-cqisli

LeastSquares gives the parameter estimates for a linear model with normal errors:

https://wolfram.com/xid/0d6hs26jtg-fd940x


https://wolfram.com/xid/0d6hs26jtg-hceqiu

LinearModelFit fits the model and gives additional information about the fitting:

https://wolfram.com/xid/0d6hs26jtg-ely944


https://wolfram.com/xid/0d6hs26jtg-cotvx4


https://wolfram.com/xid/0d6hs26jtg-n8ejk4
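A side-by-side sketch with hypothetical data:

data = {{1, 3.1}, {2, 4.9}, {3, 7.2}, {4, 8.8}};
dm = DesignMatrix[data, x, x];                   (* columns 1 and x *)
LeastSquares[dm, data[[All, 2]]]                 (* parameter estimates only *)
LinearModelFit[data, x, x]["BestFitParameters"]  (* same estimates, with diagnostics available *)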

Text
Wolfram Research (2007), LeastSquares, Wolfram Language function, https://reference.wolfram.com/language/ref/LeastSquares.html (updated 2024).
CMS
Wolfram Language. 2007. "LeastSquares." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2024. https://reference.wolfram.com/language/ref/LeastSquares.html.
APA
Wolfram Language. (2007). LeastSquares. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LeastSquares.html
BibTeX
@misc{reference.wolfram_2025_leastsquares, author="Wolfram Research", title="{LeastSquares}", year="2024", howpublished="\url{https://reference.wolfram.com/language/ref/LeastSquares.html}", note={Accessed: 27-March-2025}}
BibLaTeX
@online{reference.wolfram_2025_leastsquares, organization={Wolfram Research}, title={LeastSquares}, year={2024}, url={https://reference.wolfram.com/language/ref/LeastSquares.html}, note={Accessed: 27-March-2025}}