As of Version 7.0, linear regression functionality is built into the Wolfram Language. »

Linear Regression Package

The built-in function Fit finds a least-squares fit to a list of data as a linear combination of the specified basis functions. The functions Regress and DesignedRegress provided in this package augment Fit by giving a list of commonly required diagnostics such as the coefficient of determination RSquared, the analysis of variance table ANOVATable, and the mean squared error EstimatedVariance. The output of regression functions can be controlled so that only needed information is produced. The Nonlinear Regression Package provides analogous functionality for nonlinear models.

The basis functions f_j specify the predictors as functions of the independent variables. The resulting model for the response variable is y_i = β_1 f_{1i} + β_2 f_{2i} + … + β_p f_{pi} + e_i, where y_i is the i th response, f_{ji} is the j th basis function evaluated at the i th observation, and e_i is the i th residual error.

Estimates of the coefficients β_1, …, β_p are calculated to minimize the error or residual sum of squares ∑_i e_i^2. For example, simple linear regression is accomplished by defining the basis functions as f_1 = 1 and f_2 = x, in which case β_1 and β_2 are found to minimize ∑_i [y_i - (β_1 + β_2 x_i)]^2.
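The built-in Fit already computes this least-squares solution; the following minimal sketch fits a straight line to hypothetical data (the data values are illustrative only).

pts = {{1, 1.2}, {2, 1.9}, {3, 3.2}, {4, 3.9}, {5, 5.1}};  (* hypothetical (x, y) pairs *)
Fit[pts, {1, x}, x]                                        (* least-squares line β_1 + β_2 x *)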

Regress[data, {1, x, x^2}, x]
fit a list of data points data to a quadratic model

Regress[data, {1, x1, x2, x1 x2}, {x1, x2}]
fit data to a model that includes an interaction between the independent variables x1 and x2

Regress[data, {f1, f2, …}, vars]
fit data to a model given as a linear combination of the functions fi of the variables vars

Using Regress.

The arguments of Regress are of the same form as those of Fit. The data can be a list of vectors, each vector consisting of the observed values of the independent variables followed by the associated response. The basis functions fj must be functions of the symbols given as variables, and these symbols correspond to the independent variables represented in the data. By default, a constant basis function 1 is added to the model if it is not explicitly included in the list of basis functions.

The data can also be a vector of data points. In this case, Regress assumes that this vector represents the values of a response variable with the independent variable having values 1, 2, ….

{y1, y2, …}
data points specified by a list of response values, where a single independent variable is assumed to take the values 1, 2, …

{{x11, x12, …, y1}, {x21, x22, …, y2}, …}
data points specified by a matrix, where xik is the value of the k th independent variable in the i th case and yi is the i th response

Ways of specifying data in Regress.

This loads the package.
This data contains ordered pairs of a single predictor and a response.
This is a plot of the data.
This is the output for fitting the model y_i = β_0 + β_1 x_i + e_i.
You can use Fit if you want only the fitted function.
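The inputs behind the preceding captions are not reproduced here; the following is a sketch of the same sequence, assuming the package loads with the context LinearRegression` and using hypothetical data (the names data and out are illustrative).

Needs["LinearRegression`"]                        (* load the package *)
data = {{1, 2.1}, {2, 3.9}, {3, 6.2}, {4, 7.8},
        {5, 10.1}, {6, 12.2}};                    (* hypothetical predictor-response pairs *)
ListPlot[data]                                    (* plot of the data *)
out = Regress[data, {1, x}, x]                    (* summary report for the model β_0 + β_1 x *)
Fit[data, {1, x}, x]                              (* only the fitted function *)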
IncludeConstant (default: True)
constant automatically included in the model

RegressionReport (default: SummaryReport)
fit diagnostics to include

Weights (default: Automatic)
list of weights for each point, or a pure function

BasisNames (default: Automatic)
names of basis elements for table headings

Options for Regress.

Two of the options of Regress influence the method of calculation. IncludeConstant has a default setting True, which causes a constant term to be added to the model even if it is not specified in the basis functions. To fit a model without this constant term, specify IncludeConstant->False and do not include a constant in the basis functions.
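For example, a fit through the origin might look like the following sketch (same hypothetical data as above).

Regress[data, {x}, x, IncludeConstant -> False]   (* model y = β_1 x, with no constant term *)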

The Weights option allows you to implement weighted least squares by specifying a list of weights, one for each data point; the default Weights -> Automatic implies a weight of unity for each data point. When Weights -> {w1, …, wn}, the parameter estimates are chosen to minimize the weighted sum of squared residuals ∑_i w_i e_i^2.

Weights can also specify a pure function, which is applied to each observed response to obtain that point's weight. For example, to choose parameter estimates that minimize ∑_i √(y_i) e_i^2, set Weights -> (Sqrt[#] &).
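Two hedged sketches of the Weights option, again with the six hypothetical data points defined above:

Regress[data, {1, x}, x, Weights -> {1, 1, 1, 1, 1, 0.5}]   (* one explicit weight per data point *)
Regress[data, {1, x}, x, Weights -> (Sqrt[#] &)]            (* weight each point by the square root of its response *)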

The options RegressionReport and BasisNames affect the form and content of the output. If RegressionReport is not specified, Regress automatically gives a list including values for ParameterTable, RSquared, AdjustedRSquared, EstimatedVariance and ANOVATable. This set of objects comprises the default SummaryReport. The option RegressionReport can be used to specify a single object or a list of objects so that more (or less) than the default set of results is included in the output. RegressionReportValues[Regress] gives the objects that may be included in the RegressionReport list for the Regress function.

With the option BasisNames, you can label the headings of predictors in tables such as ParameterTable and ParameterCITable.

The regression functions will also accept any option that can be specified for SingularValueList or StudentTCI. In particular, the numerical tolerance for the internal singular value decomposition is specified using Tolerance, and the confidence level for hypothesis testing and confidence intervals is specified using ConfidenceLevel.
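A sketch combining these options; the report items, labels, and confidence level shown are illustrative.

Regress[data, {1, x}, x,
  RegressionReport -> {BestFit, ParameterCITable, EstimatedVariance},
  BasisNames -> {"Intercept", "Slope"},
  ConfidenceLevel -> 0.99]
RegressionReportValues[Regress]    (* lists every object that RegressionReport accepts for Regress *)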

BestFit
best fit function

BestFitParameters
best fit parameter estimates

ANOVATable
analysis of variance table

EstimatedVariance
estimated error variance

ParameterTable
table of parameter information including standard errors and test statistics

ParameterCITable
table of confidence intervals for the parameters

ParameterConfidenceRegion
ellipsoidal joint confidence region for the parameters

ParameterConfidenceRegion[{fi1, fi2, …}]
ellipsoidal conditional joint confidence region for the parameters associated with the basis functions fi1, fi2, …

FitResiduals
differences between the observed responses and the predicted responses

PredictedResponse
fitted values obtained by evaluating the best fit function at the observed values of the independent variables

SinglePredictionCITable
table of confidence intervals for predicting a single observation of the response variable

MeanPredictionCITable
table of confidence intervals for predicting the expected value of the response variable

RSquared
coefficient of determination

AdjustedRSquared
adjusted coefficient of determination

CoefficientOfVariation
coefficient of variation

CovarianceMatrix
covariance matrix of the parameters

CorrelationMatrix
correlation matrix of the parameters

Some RegressionReport values.

ANOVATable, a table for analysis of variance, provides a comparison of the given model to a smaller one including only a constant term. If IncludeConstant -> False is specified, the given model is instead compared to the model containing no terms at all. The table includes the degrees of freedom, the sum of squares, and the mean squares due to the model (in the row labeled Model) and due to the residuals (in the row labeled Error). The residual mean square is also available as EstimatedVariance, and is calculated by dividing the residual sum of squares by its degrees of freedom. The F-test compares the two models using the ratio of their mean squares. If the value of F is large, the null hypothesis supporting the smaller model is rejected.
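For example, the analysis of variance table and the residual mean square can be requested together; in this sketch EstimatedVariance coincides with the mean square in the Error row of the table.

Regress[data, {1, x}, x, RegressionReport -> {ANOVATable, EstimatedVariance}]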

To evaluate the importance of each basis function, you can get information about the parameter estimates from the parameter table obtained by including ParameterTable in the list specified by RegressionReport. This table includes the estimates, their standard errors, and t-statistics for testing whether each parameter is zero. The p-values are calculated by comparing the obtained statistic to the t distribution with n-p degrees of freedom, where n is the sample size and p is the number of predictors. Confidence intervals for the parameter estimates, also based on the t distribution, can be found by specifying ParameterCITable. ParameterConfidenceRegion specifies the ellipsoidal joint confidence region of all fit parameters. ParameterConfidenceRegion[{fi1, fi2, …}] specifies the joint conditional confidence region of the fit parameters associated with basis functions {fi1, fi2, …}, a subset of the complete set of basis functions.

The square of the multiple correlation coefficient is called the coefficient of determination R^2, and is given by the ratio of the model sum of squares to the total sum of squares. It is a summary statistic that describes the relationship between the predictors and the response variable. AdjustedRSquared is defined as 1 - ((n-1)/(n-p))(1 - R^2), and gives an adjusted value that you can use to compare successive subsets of models. The coefficient of variation is given by the ratio of the residual root mean square to the mean of the response variable. If the response is strictly positive, this is sometimes used to measure the relative magnitude of error variation.

Each row in MeanPredictionCITable gives the confidence interval for the mean response at each of the values of the independent variables. Each row in SinglePredictionCITable gives the confidence interval for a single observed response at each of the values of the independent variables. MeanPredictionCITable gives a region likely to contain the regression curve, while SinglePredictionCITable gives a region likely to contain all possible observations.

The following gives the residuals, the confidence interval table for the predicted response of single observations, and the parameter joint confidence region.
This is a list of the residuals extracted from the output.
The observed response, the predicted response, the standard errors of the predicted response, and the confidence intervals may also be extracted.
This plots the predicted responses against the residuals.
Here the predicted responses and lower and upper confidence limits are paired with the corresponding x values.
This displays the raw data, fitted curve, and the 95% confidence intervals for the predicted responses of single observations.
Graphics may be used to display an Ellipsoid object. This is the joint 95% confidence region for the regression parameters.
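A sketch of these steps with the hypothetical straight-line fit from above; the extraction of columns from the confidence-interval table assumes that the table object stores its rows in its first part.

out = Regress[data, {1, x}, x,
   RegressionReport -> {FitResiduals, PredictedResponse, BestFit,
     SinglePredictionCITable, ParameterConfidenceRegion}];
resids = FitResiduals /. out;                                (* list of residuals *)
pred   = PredictedResponse /. out;                           (* fitted values *)
ListPlot[Transpose[{pred, resids}]]                          (* predicted responses against residuals *)
citable = SinglePredictionCITable /. out;
{observed, predicted, se, ci} = Transpose[citable[[1]]];     (* assumes the rows sit in the table's first part *)
bands = Transpose[{data[[All, 1]], ci[[All, 1]], ci[[All, 2]]}];   (* x paired with lower and upper limits *)
bf = BestFit /. out;
Show[ListPlot[data], Plot[bf, {x, 1, 6}],
  ListPlot[bands[[All, {1, 2}]]], ListPlot[bands[[All, {1, 3}]]]]  (* data, fitted line, and single-observation confidence limits *)
region = ParameterConfidenceRegion /. out;
Graphics[region]                                             (* joint 95% confidence region for the parameters *)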

This package provides numerous diagnostics for evaluating the data and the fit. The HatDiagonal gives the leverage of each point, measuring whether each observation of the independent variables is unusual. CookD and PredictedResponseDelta are influence diagnostics, simultaneously measuring whether the independent variables and the response variable are unusual. Unfortunately, these diagnostics are primarily useful in detecting single outliers. In particular, the diagnostics may indicate a single outlier, but deleting that observation and recomputing the diagnostics may indicate others. All these diagnostics are subject to this masking effect.

HatDiagonal
diagonal of the hat matrix X(X^T X)^(-1) X^T, where X is the n×p (weighted) design matrix

JackknifedVariance
{v1, …, vn}, where vi is the estimated error variance computed using the data with the i th case deleted

StandardizedResiduals
fit residuals scaled by their standard errors, computed using the estimated error variance

StudentizedResiduals
fit residuals scaled by their standard errors, computed using the jackknifed estimated error variances

CookD
{d1, …, dn}, where di is Cook's squared distance diagnostic for evaluating whether the i th case is an outlier

PredictedResponseDelta
{d1, …, dn}, where di is Kuh and Welsch's DFFITS diagnostic, giving the standardized signed difference in the i th predicted response between using all the data and using the data with the i th case deleted

BestFitParametersDelta
{{d11, …, d1p}, …, {dn1, …, dnp}}, where dij is Kuh and Welsch's DFBETAS diagnostic, giving the standardized signed difference in the j th parameter estimate between using all the data and using the data with the i th case deleted

CovarianceMatrixDetRatio
{r1, …, rn}, where ri is Kuh and Welsch's COVRATIO diagnostic, giving the ratio of the determinant of the parameter covariance matrix computed with the i th case deleted to the determinant of the parameter covariance matrix computed using all the data

Diagnostics for detecting outliers.
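For example, several of these outlier diagnostics can be requested in a single report (a sketch; the particular selection is illustrative).

Regress[data, {1, x}, x,
  RegressionReport -> {HatDiagonal, CookD, StudentizedResiduals, PredictedResponseDelta}]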

Some diagnostics indicate the degree to which individual basis functions contribute to the fit, or whether the basis functions are involved in a collinear relationship. The sum of the elements in the SequentialSumOfSquares vector gives the model sum of squares listed in the ANOVATable. Each element corresponds to the increment in the model sum of squares obtained by sequentially adding each nonconstant basis function to the model. Each element in the PartialSumOfSquares vector gives the increase in the model sum of squares due to adding the corresponding nonconstant basis function to a model consisting of all other basis functions. SequentialSumOfSquares is useful in determining the degree of a univariate polynomial model, while PartialSumOfSquares is useful in trimming a large set of predictors. VarianceInflation or EigenstructureTable may also be used for predictor set trimming.

PartialSumOfSquares
a list giving the increase in the model sum of squares due to adding each nonconstant basis function to the model consisting of all the remaining basis functions

SequentialSumOfSquares
a list giving a partitioning of the model sum of squares, one element for each nonconstant basis function added sequentially to the model

VarianceInflation
{v1, …, vp}, where vj is the variance inflation factor associated with the j th parameter

EigenstructureTable
table giving the eigenstructure of the correlation matrix of the nonconstant basis functions

Diagnostics for evaluating basis functions and detecting collinearity.
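A sketch requesting these diagnostics for a quadratic model in the same hypothetical data; the basis and the selection of report items are illustrative.

Regress[data, {1, x, x^2}, x,
  RegressionReport -> {SequentialSumOfSquares, PartialSumOfSquares,
    VarianceInflation, EigenstructureTable}]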

The Durbin-Watson d statistic is used for testing the existence of a first-order autoregressive process. The statistic takes on values between 0 and 4, with values near the middle of that range indicating uncorrelated errors, an underlying assumption of the regression model. Critical values for the statistic vary with sample size, the number of parameters in the model, and the desired significance level. These values can be found in published tables.

DurbinWatsonD
Durbin-Watson d statistic

Correlated errors diagnostic.
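A minimal sketch requesting the statistic, using the same hypothetical data:

Regress[data, {1, x}, x, RegressionReport -> {DurbinWatsonD}]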

Other statistics not mentioned here can be computed with the help of the catcher matrix. This matrix catches all the information the predictors have about the parameter vector. This matrix can be exported from Regress by specifying CatcherMatrix with the RegressionReport option.

CatcherMatrix
the p×n matrix C, where C·y is the estimated parameter vector and y is the response vector

Matrix describing the parameter information provided by the predictors.
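A sketch of exporting the catcher matrix; multiplying it by the response vector reproduces the parameter estimates, as described above.

out = Regress[data, {1, x}, x, RegressionReport -> {CatcherMatrix, BestFitParameters}];
c = CatcherMatrix /. out;
c . data[[All, -1]]          (* should agree with BestFitParameters /. out *)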

Frequently, linear regression is applied to an existing design matrix rather than to the original data. A design matrix is a list containing the basis functions evaluated at the observed values of the independent variables. If your data is already in the form of a design matrix with a corresponding vector of response data, you can use DesignedRegress for the same analyses as provided by Regress. DesignMatrix puts your data in the form of a design matrix.

DesignedRegress[designmatrix, response]
fit the model represented by designmatrix, given the vector response of response data

DesignMatrix[data, {f1, f2, …}, vars]
give the design matrix for modeling data as a linear combination of the functions fi of the variables vars

Functions for linear regression using a design matrix.

DesignMatrix takes the same arguments as Regress. It can be used to get the necessary arguments for DesignedRegress, or to check whether you correctly specified your basis functions. When you use DesignMatrix, the constant term is always included in the model unless IncludeConstant->False is specified. Every option of Regress except IncludeConstant is accepted by DesignedRegress. RegressionReportValues[DesignedRegress] gives the values that may be included in the RegressionReport list for the DesignedRegress function.

This is the design matrix used in the previous regression analysis.
Here is the vector of observed responses.
The result of DesignedRegress is equivalent to that of Regress.
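A sketch of these steps for the straight-line model and the hypothetical data defined earlier:

dm = DesignMatrix[data, {1, x}, x];        (* basis functions evaluated at each observation *)
response = data[[All, -1]];                (* vector of observed responses *)
DesignedRegress[dm, response]              (* same analysis as Regress[data, {1, x}, x] *)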
DesignedRegress[svd, response]
fit the model represented by svd, the singular value decomposition of a design matrix, given the vector response of response data

Linear regression using the singular value decomposition of a design matrix.

DesignedRegress will also accept the singular value decomposition of the design matrix. If the regression is not weighted, this approach will save recomputing the design matrix decomposition.

This is the singular value decomposition of the design matrix.
When several responses are of interest, this will save recomputing the design matrix decomposition.
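A sketch continuing from the design matrix dm and response vector above; response2 is a hypothetical second response vector measured at the same design points.

svd = SingularValueDecomposition[N[dm]];   (* {u, w, v} for the design matrix *)
DesignedRegress[svd, response]             (* reuses the decomposition *)
DesignedRegress[svd, response2]            (* second analysis without refactoring the design matrix *)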