# Linear regression

Linear regression is a form of regression analysis in which the relationship between one or more independent variables and the dependent variable is modeled by a function that is a linear combination of the model parameters, fitted to observational data by least squares. In simple linear regression the model function represents a straight line. The results of the fit are then subject to statistical analysis.

*Example of linear regression with one dependent and one independent variable.*

## Definitions

The data consist of $m$ values $y_1, \ldots, y_m$ taken from observations of the dependent variable (response variable) $y$. The dependent variable is subject to error. This error is assumed to be a random variable with a mean of zero. Systematic error (i.e. a mean different from zero) may be present, but its treatment is outside the scope of regression analysis. The independent variable (explanatory variable) $x$ is assumed to be error-free; if this is not so, modeling should be done with errors-in-variables techniques. The independent variables are also called regressors, exogenous variables, input variables and predictor variables. In simple linear regression the data model is written as

$y_i = \beta_1 + x_i \beta_2 + \varepsilon_i$

where $\varepsilon_i$ is an observational error, and $\beta_1$ (intercept) and $\beta_2$ (slope) are the parameters of the model. In general there are $n$ parameters, $\beta_1, \ldots, \beta_n$, and the model can be written as

$y_i = \sum_{j=1}^{n} X_{ij} \beta_j + \varepsilon_i$

where the coefficients $X_{ij}$ are constants or functions of the independent variable, $x$. Models that do not conform to this specification must be treated by nonlinear regression.

Unless otherwise stated, it is assumed that the observational errors are uncorrelated and normally distributed. This assumption, or an alternative to it, is used when performing statistical tests on the regression results. An equivalent formulation of simple linear regression, which explicitly exhibits the linear regression as a model of conditional expectation, is

$\mbox{E}(y|x) = \alpha + \beta x \,$

The conditional distribution of y given x is a linear transformation of the distribution of the error term.
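This conditional-expectation view can be illustrated with a short simulation: drawing many observations of $y$ at a fixed $x$ and checking that their sample mean approaches $\alpha + \beta x$. The values of $\alpha$, $\beta$ and the noise scale below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 2.0, 0.5      # arbitrary true intercept and slope
x = 3.0                     # a fixed value of the independent variable

# Draw many observations of y at this x: y = alpha + beta*x + noise
eps = rng.normal(loc=0.0, scale=1.0, size=100_000)
y = alpha + beta * x + eps

print(y.mean())             # close to E(y|x) = alpha + beta*x = 3.5
```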

### Notation and naming conventions

• Scalars and vectors are denoted by lower case letters.
• Matrices are denoted by upper case letters.
• Parameters are denoted by Greek letters.
• Vectors and matrices are denoted by bold letters.
• A parameter with a hat, such as $\hat\beta_j$, denotes a parameter estimator.

## Least-squares analysis

The first objective of regression analysis is to best fit the data by adjusting the parameters of the model. Of the different criteria that can be used to define what constitutes a best fit, the least squares criterion is a very powerful one. The objective function, $S$, is defined as the sum of squared residuals, $r_i$:

$S=\sum_{i=1}^{m}r_i^2,$

where each residual is the difference between the observed value and the value calculated by the model:

$r_i = y_i-\sum_{j=1}^{n} X_{ij} \beta_j$

The best fit is obtained when $S$, the sum of squared residuals, is minimized. Subject to certain conditions, the least-squares parameter estimators then have minimum variance (Gauss–Markov theorem); when the errors are normally distributed, they are also the maximum likelihood estimators.

From the theory of linear least squares, the parameter estimators are found by solving the normal equations

$\sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij}X_{ik}\hat \beta_k=\sum_{i=1}^{m} X_{ij}y_i,~~~ j=1,\ldots,n$

In matrix notation, these equations are written as

$\mathbf{\left(X^TX\right) \boldsymbol {\hat \beta}=X^Ty}$,

and thus, when the matrix $\mathbf{X^TX}$ is nonsingular,

$\boldsymbol {\hat \beta} = \mathbf{\left(X^TX\right)^{-1} X^Ty}$,

The special case of fitting a straight line is worked out explicitly in the linear case below.
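A minimal numerical sketch of this solution using NumPy (the design matrix and data below are invented for illustration; the observations lie exactly on a line, so the fit recovers the coefficients exactly):

```python
import numpy as np

# Design matrix X: each row holds the coefficients of the parameters
# for one observation; here a straight-line model y = b1 + b2*x
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])

# Observations generated exactly from y = 1 + 2x
y = 1.0 + 2.0 * x

# Solve the normal equations (X^T X) beta = X^T y
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)   # approximately [1., 2.]
```

In practice `np.linalg.lstsq` or a QR decomposition is preferred over forming $\mathbf{X^TX}$ explicitly, for numerical stability.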

### Regression statistics

The second objective of regression is the statistical analysis of the results of data fitting.

Denote by $\sigma^2$ the variance of the error term $\varepsilon$ (so that $\varepsilon_i \sim N(0,\sigma^2)$ for every $i=1,\ldots,m$). An unbiased estimate of $\sigma^2$ is given by

$\hat \sigma^2 = \frac {S} {m-n}$.

The relation between the estimate and the true value is:

$\hat\sigma^2 \sim \frac { \chi_{m-n}^2 } {m-n}\ \sigma^2$

where $\chi_{m-n}^2$ has a chi-square distribution with $m-n$ degrees of freedom.

Applying this relation as a $\chi^2$ test requires that $\sigma^2$, the variance of an observation of unit weight, be estimated. If the $\chi^2$ test is passed, the data may be said to be fitted within observational error.

The solution to the normal equations can be written as

$\boldsymbol{\hat\beta}=\mathbf{(X^TX)^{-1}X^Ty}$

This shows that the parameter estimators are linear combinations of the dependent variable. It follows that, if the observational errors are normally distributed, the parameter estimators are themselves normally distributed, and the standardized estimators $(\hat\beta_j-\beta_j)/\hat\sigma_j$ follow a Student's t distribution with $m-n$ degrees of freedom. The standard deviation of a parameter estimator is given by

$\hat\sigma_j=\sqrt{ \frac{S}{m-n}\left[\mathbf{(X^TX)}^{-1}\right]_{jj}}$

The $100(1-\alpha)\%$ confidence interval for the parameter $\beta_j$ is computed as follows:

$\hat \beta_j \pm t_{\frac{\alpha }{2},\,m-n} \hat \sigma_j$
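The variance estimate, parameter standard deviations and confidence intervals above can be sketched as follows. The three-point data set is invented for illustration, and the t quantile is supplied by hand rather than taken from a table:

```python
import numpy as np

# Invented data: m = 3 observations, n = 2 parameters (straight line)
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
m, n = X.shape

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y          # parameter estimators

r = y - X @ beta                  # residuals
S = r @ r                         # sum of squared residuals
sigma2_hat = S / (m - n)          # unbiased estimate of sigma^2

# Standard deviation of each parameter estimator
sigma_j = np.sqrt(sigma2_hat * np.diag(XtX_inv))

t_val = 12.71                     # t_{alpha/2, m-n} for alpha=5%, 1 dof
ci = [(b - t_val * s, b + t_val * s) for b, s in zip(beta, sigma_j)]
```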

The residuals can be expressed as

$\mathbf{\hat r} = \mathbf{y}-\mathbf{X}\boldsymbol{\hat\beta}= \mathbf{y}-\mathbf{X(X^TX)^{-1}X^Ty}$

The matrix $\mathbf{X(X^TX)^{-1}X^T}$ is known as the hat matrix and has the useful property that it is idempotent. Using this property it can be shown that, if the errors are normally distributed, suitably standardized residuals follow a Student's t distribution with $m-n$ degrees of freedom. Studentized residuals are useful in testing for outliers.

Given a value of the independent variable, xd, the predicted response is calculated as

$y_d=\sum_{j=1}^{n}X_{dj}\hat\beta_j$

Writing the elements $X_{dj},\ j=1,\ldots,n$ as the vector $\mathbf z$, the $100(1-\alpha)\%$ mean response confidence interval for the prediction is given, using error propagation theory, by:

$\mathbf{z^T\hat\boldsymbol\beta} \pm t_{ \frac{\alpha }{2} ,m-n} \hat \sigma \sqrt {\mathbf{ z^T(X^T X)^{- 1}z }}$

The $100(1-\alpha)\%$ predicted response confidence intervals (prediction intervals) for the data are given by:

$\mathbf z^T \hat\boldsymbol\beta \pm t_{\frac{\alpha }{2},m-n} \hat \sigma \sqrt {1 + \mathbf{z^T(X^TX)^{-1}z}}$.
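The two interval half-widths above differ only by the extra 1 under the square root. A sketch with NumPy (data invented, t quantile supplied by hand):

```python
import numpy as np

def interval_half_widths(X, y, z, t_val):
    """Half-widths of the mean-response and predicted-response
    confidence intervals at the regressor vector z."""
    m, n = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    r = y - X @ beta
    sigma_hat = np.sqrt(r @ r / (m - n))
    q = z @ XtX_inv @ z               # z^T (X^T X)^{-1} z
    mean_hw = t_val * sigma_hat * np.sqrt(q)
    pred_hw = t_val * sigma_hat * np.sqrt(1.0 + q)
    return mean_hw, pred_hw

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 3.2, 3.9])
X = np.column_stack([np.ones_like(x), x])
z = np.array([1.0, 2.5])              # predict at x_d = 2.5
mean_hw, pred_hw = interval_half_widths(X, y, z, t_val=3.18)
# The prediction interval is always the wider of the two
```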

#### Linear case

In the case that the formula to be fitted is a straight line, $y=\alpha+\beta x\!$, the normal equations are

$\begin{array}{lcl} m\ \alpha + \sum x_i\ \beta =\sum y_i \\ \sum x_i\ \alpha + \sum x_i^2\ \beta =\sum x_iy_i \end{array}$

where all summations are from i=1 to i=m. Thence, by Cramer's rule,

$\hat\beta = \frac {m \sum x_iy_i - \sum x_i \sum y_i} {\Delta} =\frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{\sum(x_i-\bar{x})^2} \,$
$\hat\alpha = \frac {\sum x_i^2 \sum y_i - \sum x_i \sum x_iy_i} {\Delta}= \bar y-\bar x \hat\beta$

where

$\Delta = m\sum x_i^2 - \left(\sum x_i\right)^2$
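These closed-form expressions translate directly into code (the data below are invented for illustration):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares straight line y = a + b*x via Cramer's rule."""
    m = len(x)
    delta = m * np.sum(x**2) - np.sum(x)**2
    b = (m * np.sum(x * y) - np.sum(x) * np.sum(y)) / delta
    a = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / delta
    return a, b

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 3.0])
a, b = fit_line(x, y)
# The identity a = ybar - xbar*b also holds
```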

The covariance matrix is

$\frac{1}{\Delta}\begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & m \end{pmatrix}$

The mean response confidence interval is given by

$y_d = (\hat\alpha+\hat\beta x_d) \pm t_{ \frac{\alpha }{2} ,m-2} \hat \sigma \sqrt {\frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$

The predicted response confidence interval is given by

$y_d = (\hat\alpha+\hat\beta x_d) \pm t_{ \frac{\alpha }{2} ,m-2} \hat \sigma \sqrt {1+\frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$

### Variance analysis

Variance analysis proceeds as in ANOVA: the total sum of squares is split into two components. The regression (explained) sum of squares SSR is given by:

$\mathit{SSR} = \sum {\left( {\hat y_i - \bar y} \right)^2 } = \boldsymbol{\hat\beta}^T \mathbf{X}^T \mathbf y - \frac{1}{m}\left( \mathbf {y^T u u^T y} \right)$

where $\bar y = \frac{1}{m} \sum y_i$ and $\mathbf u$ is an $m$ by 1 vector of ones. Note that the terms $\mathbf{y^T u}$ and $\mathbf{u^T y}$ are both equal to $\sum y_i$, so the term $\frac{1}{m} \mathbf{y^T u u^T y}$ equals $\frac{1}{m}\left(\sum y_i\right)^2$.

The error (or unexplained) sum of squares ESS is given by:

$\mathit{ESS} = \sum \left( y_i - \hat y_i \right)^2 = \mathbf{ y^T y} - \boldsymbol{\hat\beta}^T \mathbf{X^T y}$

The total sum of squares TSS is given by

${\mathit{TSS} = \sum\left( y_i-\bar y \right)^2 = \mathbf{ y^T y}-\frac{1}{m}\left( \mathbf{y^Tuu^Ty}\right)=\mathit{SSR}+ \mathit{ESS}}$

The coefficient of determination, $R^2$, is then given by

${R^2 = \frac{\mathit{SSR}}{{\mathit{TSS}}} = 1 - \frac{\mathit{ESS}}{\mathit{TSS}}}$
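A small sketch of this decomposition (data invented; note that $TSS = SSR + ESS$ holds when the model includes an intercept term):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 3.0])
X = np.column_stack([np.ones_like(x), x])

beta = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta

SSR = np.sum((y_hat - y.mean())**2)   # regression (explained) sum of squares
ESS = np.sum((y - y_hat)**2)          # error (unexplained) sum of squares
TSS = np.sum((y - y.mean())**2)       # total sum of squares

R2 = SSR / TSS                        # equals 1 - ESS/TSS
```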

### Example

To illustrate the various goals of regression, we give an example. The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975).

| Height/m | 1.47 | 1.50 | 1.52 | 1.55 | 1.57 | 1.60 | 1.63 | 1.65 | 1.68 | 1.70 | 1.73 | 1.75 | 1.78 | 1.80 | 1.83 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Weight/kg | 52.21 | 53.12 | 54.48 | 55.84 | 57.20 | 58.57 | 59.93 | 61.29 | 63.11 | 64.47 | 66.28 | 68.10 | 69.92 | 72.19 | 74.46 |

A plot of weight against height shows that the data cannot be modeled by a straight line, so a regression is performed by modeling the data with a parabola:

$y = \beta_0 + \beta_1 x+ \beta_2 x^2 +\varepsilon$

where the dependent variable, $y$, is weight and the independent variable, $x$, is height.

The coefficients $1$, $x_i$ and $x_i^2$ of the parameters for the $i$th observation are placed in the $i$th row of the matrix $\mathbf X$:

$X= \begin{bmatrix} 1&1.47&2.16\\ 1&1.50&2.25\\ 1&1.52&2.31\\ 1&1.55&2.40\\ 1&1.57&2.46\\ 1&1.60&2.56\\ 1&1.63&2.66\\ 1&1.65&2.72\\ 1&1.68&2.82\\ 1&1.70&2.89\\ 1&1.73&2.99\\ 1&1.75&3.06\\ 1&1.78&3.17\\ 1&1.80&3.24\\ 1&1.83&3.35\\ \end{bmatrix}$

The values of the parameters are found by solving the normal equations

$\mathbf{(X^TX)}\boldsymbol{\hat\beta}=\mathbf{X^Ty}$

Element $ij$ of the normal equation matrix $\mathbf{X^TX}$ is formed by summing the products of column $i$ and column $j$ of $\mathbf X$:

$\left(\mathbf{X^TX}\right)_{ij}=\sum_{k=1}^{15} X_{ki}X_{kj}$

Element $i$ of the right-hand side vector $\mathbf{X^Ty}$ is formed by summing the products of column $i$ of $\mathbf X$ with the column of dependent variable values:

$\left(\mathbf{X^Ty}\right)_i=\sum_{k=1}^{15} X_{ki} y_k$

Thus, the normal equations are

$\begin{bmatrix} 15&24.76&41.05\\ 24.76&41.05&68.37\\ 41.05&68.37&114.35\\ \end{bmatrix} \begin{bmatrix} \hat\beta_0\\ \hat\beta_1\\ \hat\beta_2\\ \end{bmatrix} = \begin{bmatrix} 931\\ 1548\\ 2586\\ \end{bmatrix}$
$\hat\beta_0=129 \pm 16$ (value $\pm$ standard deviation)
$\hat\beta_1=-143 \pm 20$
$\hat\beta_2=62 \pm 6$

The calculated values are given by

$y^{calc}_i = \hat\beta_0 + \hat\beta_1 x_i+ \hat\beta_2 x^2_i$

The observed and calculated data are plotted together and the residuals, $y_i-y^{calc}_i$, are calculated and plotted. Standard deviations are calculated using the sum of squares, $S=0.76$.
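This fit can be reproduced numerically; the sketch below builds $\mathbf X$ from the height data and solves the normal equations, recovering the quoted estimates and sum of squares to within rounding:

```python
import numpy as np

height = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                   1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
weight = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93,
                   61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

# Design matrix: columns 1, x, x^2
X = np.column_stack([np.ones_like(height), height, height**2])

beta = np.linalg.solve(X.T @ X, X.T @ weight)
r = weight - X @ beta
S = r @ r                  # sum of squared residuals, about 0.76
print(beta)                # approximately [129, -143, 62]
```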

The confidence intervals are computed using:

$\left[\hat\beta_j-\hat\sigma_j t_{m-n;1-\frac{\alpha}{2}},\ \hat\beta_j+\hat\sigma_j t_{m-n;1-\frac{\alpha}{2}}\right]$

with $\alpha = 5\%$ and $t_{m-n;1-\frac{\alpha}{2}} = 2.2$ (here $m-n = 12$). Therefore, the 95% confidence intervals are:

$\beta_0\in[92.9,164.7]$
$\beta_1\in[-186.8,-99.5]$
$\beta_2\in[48.7,75.2]$

### Checking model assumptions

The model assumptions are checked by calculating the residuals and plotting them. The following plots can be constructed to test the validity of the assumptions:

1. Residuals against the explanatory variable.
2. A time series plot of the residuals, that is, the residuals as a function of time.
3. Residuals against the fitted values, $\hat{\mathbf y}$.
4. Residuals against the preceding residual.
5. A normal probability plot of the residuals, to test for normality; the points should lie along a straight line.

In all but the last plot, there should be no noticeable pattern in the residuals.

### Checking model validity

The validity of the model can be checked using any of the following methods:

1. Using the confidence interval for each of the parameters, $\hat\beta_j$. If a confidence interval includes 0, that parameter can be removed from the model; ideally, the regression is then re-run without it, and the process repeated until no removable parameters remain.
2. When fitting a straight line, calculating the coefficient of determination. The closer its value is to 1, the better the regression: it gives the fraction of the observed variation that is explained by the given variables.
3. Examining the observational and prediction confidence intervals. The smaller they are, the better.
4. Computing the F-statistic.

## Other procedures

### Weighted least squares

Weighted least squares is a generalisation of the least squares method, used when the observational errors have unequal variance.
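Weighted least squares solves the modified normal equations $(\mathbf{X^TWX})\boldsymbol{\hat\beta} = \mathbf{X^TWy}$, where $\mathbf W$ is a diagonal matrix of weights (typically the reciprocal error variances). A sketch with invented data and weights:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve (X^T W X) beta = X^T W y with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.1, 1.9, 3.2])
X = np.column_stack([np.ones_like(x), x])

# Unequal weights: later observations are considered more reliable
w = np.array([1.0, 1.0, 4.0, 4.0])
beta_w = weighted_least_squares(X, y, w)

# With equal weights this reduces to ordinary least squares
beta_ols = weighted_least_squares(X, y, np.ones_like(y))
```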

### Errors-in-variables model

An errors-in-variables model (or total least squares model) is used when the independent variable is subject to error.

### Generalized linear model

A generalized linear model is used when the distribution function of the errors is not a normal distribution. Examples include the exponential, gamma, inverse Gaussian, Poisson, binomial and multinomial distributions.

### Robust regression

A host of alternative approaches to the computation of regression parameters are included in the category known as robust regression. One technique minimizes the mean absolute error, or some other function of the residuals, instead of the mean squared error as in linear regression. Robust regression is much more computationally intensive than linear regression and somewhat more difficult to implement. While least squares estimates are not very sensitive to violations of the normality assumption on the errors, this is not true when the variance or mean of the error distribution is unbounded, or when no analyst is available to identify and screen outliers.

Among Stata users, "robust regression" is frequently taken to mean linear regression with Huber–White standard error estimates, due to the naming conventions for regression commands. This procedure relaxes the assumption of homoscedasticity for variance estimates only; the predictors are still ordinary least squares (OLS) estimates. This occasionally leads to confusion: Stata users sometimes believe that linear regression with this option is a robust method, although it is not robust in the sense of outlier resistance.

## Applications of linear regression

Linear regression is widely used in biological, behavioural and social sciences to describe relationships between variables. It ranks as one of the most important tools used in these disciplines.

### The trend line

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line can simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques such as linear regression. Trend lines are typically straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

### Medicine

As one example, early evidence relating tobacco smoking to mortality and morbidity came from studies employing regression. Researchers usually include several variables in their regression analysis in an effort to remove factors that might produce spurious correlations. For the cigarette smoking example, researchers might include socio-economic status in addition to smoking to ensure that any observed effect of smoking on mortality is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in a study employing regression. For the smoking example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are considered to be more trustworthy than a regression analysis.

### Finance

The capital asset pricing model uses linear regression as well as the concept of Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.

Regression may not be the appropriate way to estimate beta in finance, given that beta is supposed to measure the volatility of an investment relative to the volatility of the market as a whole. This would require that both variables be treated in the same way when estimating the slope, whereas regression treats all variability as belonging to the investment returns variable; that is, it only considers residuals in the dependent variable.