Least-squares estimation of linear regression coefficients

From Wikipedia, the free encyclopedia

In parametric statistics, the least-squares estimator is often used to estimate the coefficients of a linear regression. The least-squares estimator optimizes a certain criterion: it minimizes the sum of the squared residuals. In this article, after setting out the mathematical context of linear regression, we motivate the use of the least-squares estimator \widehat{\theta}_{LS} and derive its expression (as seen for example in the article regression analysis):

\widehat{\theta}_{LS}=(\mathbf{X}^t\mathbf{X})^{-1}\mathbf{X}^t\vec{Y}

We conclude by discussing some properties of this estimator and giving a geometrical interpretation.
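As a preview, here is a minimal numerical sketch (not part of the article) that applies this formula to simulated data and checks it against NumPy's built-in solver; the sample size, factors and parameter values are made up purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 3
    X = rng.normal(size=(n, p))                    # n observations of p factors
    theta_bar = np.array([2.0, -1.0, 0.5])         # hypothetical "true" parameter
    Y = X @ theta_bar + rng.normal(scale=0.1, size=n)

    theta_ls = np.linalg.inv(X.T @ X) @ X.T @ Y    # (X^t X)^{-1} X^t Y
    theta_ref = np.linalg.lstsq(X, Y, rcond=None)[0]
    print(np.allclose(theta_ls, theta_ref))        # True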


Assumptions

For p\in\mathbb{N}^+, let Y be a random variable taking values in \mathbb{R}, called the observation.

We next define the function η, linear in θ:

\eta(X;\theta)=\sum_{j=1}^p \theta_j X_j,

where

  • for j\in \{1,...,p\}, X_j is a random variable taking values in \mathbb{R}, called a factor, and
  • θ_j is a scalar for j\in \{1,...,p\}, with \theta^t=(\theta_1,\cdots,\theta_p), where \theta^t denotes the transpose of the vector θ.

Let X^t=(X_1,\cdots,X_p). We can write \eta(X;\theta) = X^t\theta. Define the error to be:

\varepsilon(\theta)=Y-X^t\theta

We suppose that there exists a true parameter \overline{\theta}\in\mathbb{R}^{p} such that \mathbb{E}[\varepsilon(\overline{\theta})|X]=0. This means that, given the random variables (X_1,\cdots,X_p), the best prediction of Y is \eta(X;\overline{\theta})=X^t\overline{\theta}. Henceforth, \varepsilon will denote \varepsilon(\overline{\theta}) and η will denote \eta(X;\overline{\theta}).
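For concreteness, the sketch below (an illustration, not from the article) fixes a hypothetical true parameter \overline{\theta}, draws the factors, and generates Y = X^t\overline{\theta} + \varepsilon with an error drawn independently of X, so that \mathbb{E}[\varepsilon|X]=0 holds by construction.

    import numpy as np

    rng = np.random.default_rng(1)
    p = 3
    theta_bar = np.array([1.0, 0.0, -2.0])    # hypothetical true parameter
    X = rng.normal(size=p)                    # one draw of the factors (X_1, ..., X_p)
    eps = rng.normal()                        # error independent of X, so E[eps | X] = 0
    Y = X @ theta_bar + eps                   # observation: Y = X^t theta_bar + eps
    eta = X @ theta_bar                       # best prediction of Y given X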

Least-squares estimator

The idea behind the least-squares estimator is to see linear regression as an orthogonal projection. Let F be the space of square-integrable random variables (the L2 space). Let G be the linear subspace of F generated by X_1,\cdots,X_p (supposing that Y\in F and (X_1,\cdots,X_p)\in F^p). In this section we show that the function η is an orthogonal projection of Y on G, and we construct the least-squares estimator.

Seeing linear regression as an orthogonal projection

We have \mathbb{E}(Y|X)=\eta, and Y\mapsto\mathbb{E}(Y|X) is a projection, which means that η is a projection of Y on G. Moreover, this projection is an orthogonal one.

To see this, we can define a scalar product on F: for any two random variables X,Y\in F, we set \langle X,Y\rangle_2:=\mathbb{E}[X Y]. This is indeed a scalar product, because if \|X\|_2^2=0 then X = 0 almost everywhere (where \|X\|_2:=\sqrt{\langle X,X\rangle_2} is the norm corresponding to this scalar product).
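As a side note, this scalar product can be approximated empirically by averaging products over a large i.i.d. sample; a small illustrative sketch (the variables and their distributions are made up):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    X = rng.normal(size=n)                 # samples of a square-integrable variable X
    Y = 2 * X + rng.normal(size=n)         # samples of Y, here correlated with X
    inner = np.mean(X * Y)                 # Monte Carlo estimate of <X, Y>_2 = E[XY]
    norm_sq = np.mean(X * X)               # estimate of ||X||_2^2 = E[X^2]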

For all 1\leq j\leq p, since X_j is a function of X, the tower property of conditional expectation gives \mathbb{E}[X_j\,\mathbb{E}[Y|X]]=\mathbb{E}[\mathbb{E}[X_j Y|X]]=\mathbb{E}[X_j Y]. Therefore:

\langle X_j,\varepsilon \rangle_2 =\langle X_j,Y-X^t \overline{\theta}\rangle_2
=\langle X_j,Y\rangle_2-\langle X_j,\mathbb{E}[Y|X]\rangle_2
=\mathbb{E}[X_j Y] - \mathbb{E}[X_j\,\mathbb{E}[Y|X]]
=\mathbb{E}[X_j Y] - \mathbb{E}[X_j Y]
=0

Therefore, \varepsilon is orthogonal to every X_j and hence to the whole subspace G, which means that η is the projection of Y on G that is orthogonal with respect to the scalar product we have just defined. We have therefore shown:

\eta(X;\overline{\theta})=\underset{f\in G}{\operatorname{arg\,min}}\,\|Y-f\|^2_2.
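This orthogonality can be checked numerically: simulating the model with a made-up true parameter (an illustrative sketch, not from the article), the Monte Carlo estimates of \langle X_j,\varepsilon\rangle_2=\mathbb{E}[X_j\varepsilon] are close to zero for every factor.

    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 200_000, 3
    theta_bar = np.array([1.5, -0.5, 2.0])       # hypothetical true parameter
    X = rng.normal(size=(n, p))                  # many independent draws of (X_1, ..., X_p)
    eps = rng.normal(size=n)                     # error with E[eps | X] = 0 by construction
    Y = X @ theta_bar + eps

    inner = (X * eps[:, None]).mean(axis=0)      # estimates of E[X_j * eps], j = 1..p
    print(inner)                                 # all entries close to 0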

Estimating the coefficients

If, for each j\in\{1,\cdots,p\}, we have a sample (X^1_j,\cdots,X^n_j) of size n>p of X_j, along with a vector \vec{Y} of n observations of Y, we can build an estimate of the coefficients of this orthogonal projection. To do this, we use an empirical version of the scalar product defined earlier.

For any two samples of size n, \vec{U},\vec{V}\in F^n, of random variables U and V, we define \langle \vec{U},\vec{V}\rangle:=\vec{U}^t \vec{V}, where \vec{U}^t is the transpose of the vector \vec{U}, and \|\cdot\|:=\sqrt{\langle \cdot,\cdot\rangle}. Note that this scalar product \langle \cdot,\cdot\rangle is defined on F^n and no longer on F.
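In NumPy this empirical scalar product and its norm are one-liners; a small illustrative sketch (the sample values are made up):

    import numpy as np

    U = np.array([1.0, 2.0, 3.0])      # a sample of size n = 3 of some variable U
    V = np.array([0.5, -1.0, 2.0])     # a sample of the same size of some variable V
    inner = U @ V                      # <U, V> = U^t V
    norm = np.sqrt(U @ U)              # ||U|| = sqrt(<U, U>)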

Let us define the design matrix (or random design), an n\times p random matrix:

\mathbf{X}=\left[\begin{matrix}X_1^1&\cdots&X^1_p\\\vdots&&\vdots\\X^n_1&\cdots&X^n_p\end{matrix}\right]

We can now adapt the minimization of the sum of the squared residuals to this setting: the least-squares estimator \widehat{\theta}_{LS} is the value of θ, if it exists, which minimizes \|\mathbf{X}\theta-\vec{Y}\|^2. Reasoning as before, the minimizer is characterized by orthogonality of the residual vector \vec{\varepsilon}(\widehat{\theta}_{LS})=\vec{Y}-\mathbf{X}\widehat{\theta}_{LS} to each column of \mathbf{X}, that is, \mathbf{X}^t(\vec{Y}-\mathbf{X}\widehat{\theta}_{LS})=0.

This yields the normal equations \mathbf{X}^t \mathbf{X} \widehat{\theta}_{LS} = \mathbf{X}^t \vec{Y}. If \mathbf{X} has full column rank, then \mathbf{X}^t \mathbf{X} is invertible. In that case we can compute the least-squares estimator explicitly by inverting the p\times p matrix \mathbf{X}^t\mathbf{X}:

\widehat{\theta}_{LS}=(\mathbf{X}^t\mathbf{X})^{-1} \mathbf{X}^t \vec{Y}
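In practice the estimator is usually obtained by solving the normal equations \mathbf{X}^t\mathbf{X}\theta=\mathbf{X}^t\vec{Y} numerically rather than by forming the inverse explicitly; a sketch with made-up data (the explicit-inverse formula is kept only as a cross-check):

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 50, 4
    X = rng.normal(size=(n, p))                       # design matrix, full column rank here
    Y = X @ rng.normal(size=p) + rng.normal(size=n)   # observations with made-up coefficients

    # Solve X^t X theta = X^t Y (the normal equations) instead of inverting X^t X.
    theta_ls = np.linalg.solve(X.T @ X, X.T @ Y)
    theta_inv = np.linalg.inv(X.T @ X) @ X.T @ Y      # explicit formula, for comparison
    print(np.allclose(theta_ls, theta_inv))           # True

    # The sample residual vector is orthogonal to every column of X:
    residual = Y - X @ theta_ls
    print(np.allclose(X.T @ residual, 0.0))           # True (up to rounding)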

Qualities and geometrical interpretation

Qualities of this estimator

Not only is the least-squares estimator easy to compute; under the Gauss-Markov assumptions, the Gauss-Markov theorem states that it is the best linear unbiased estimator (BLUE) of \overline{\theta}.

The vector of errors \vec{\varepsilon}=\vec{Y}-\mathbf{X}\overline{\theta} is said to fulfil the Gauss-Markov assumptions if:

  • \mathbb{E}\vec{\varepsilon}=\vec{0}
  • \mathbb{V}\vec{\varepsilon}=\sigma^2 \mathbf{I}_n (uncorrelated but not necessarily independent; homoscedastic but not necessarily identically distributed)

where \sigma^2<+\infty and \mathbf{I}_n is the n\times n identity matrix.
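These assumptions can be illustrated by simulation: for a fixed design \mathbf{X} and errors with mean zero and variance \sigma^2\mathbf{I}_n, repeating the experiment shows that \widehat{\theta}_{LS} is unbiased, with covariance matrix close to \sigma^2(\mathbf{X}^t\mathbf{X})^{-1} (a standard consequence of these assumptions). The sketch and its parameter values are illustrative only.

    import numpy as np

    rng = np.random.default_rng(5)
    n, p, sigma = 40, 2, 0.5
    X = rng.normal(size=(n, p))                      # design kept fixed across repetitions
    theta_bar = np.array([1.0, -1.0])                # hypothetical true parameter

    estimates = []
    for _ in range(5000):                            # repeat the experiment many times
        eps = rng.normal(scale=sigma, size=n)        # Gauss-Markov errors: mean 0, variance sigma^2 I
        Y = X @ theta_bar + eps
        estimates.append(np.linalg.solve(X.T @ X, X.T @ Y))
    estimates = np.array(estimates)

    print(estimates.mean(axis=0))                    # close to theta_bar (unbiasedness)
    print(np.cov(estimates.T))                       # close to sigma^2 (X^t X)^{-1}
    print(sigma**2 * np.linalg.inv(X.T @ X))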

This decisive advantage has led to a sometimes abusive use of least squares. The method depends on the Gauss-Markov assumptions being satisfied, and applying it when these conditions are not met can lead to inaccurate results. For example, in the study of time series, the residuals are often autocorrelated, so the assumption of uncorrelated errors is hard to justify.

Geometrical interpretation

The situation described by the linear regression problem can be seen geometrically as follows: the vector of observations \vec{Y} is orthogonally projected onto the column space of \mathbf{X}; the fitted vector \mathbf{X}\widehat{\theta}_{LS} and the residual vector \vec{Y}-\mathbf{X}\widehat{\theta}_{LS} are the two perpendicular sides of a right triangle whose hypotenuse is \vec{Y}, so that \|\vec{Y}\|^2=\|\mathbf{X}\widehat{\theta}_{LS}\|^2+\|\vec{Y}-\mathbf{X}\widehat{\theta}_{LS}\|^2 (Pythagoras).

[Figure: Pythagoras_projection.jpg — orthogonal projection of \vec{Y} onto the column space of \mathbf{X}.]
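A quick numerical check of this right-triangle picture (a sketch with made-up data, not from the article):

    import numpy as np

    rng = np.random.default_rng(6)
    n, p = 30, 3
    X = rng.normal(size=(n, p))
    Y = X @ rng.normal(size=p) + rng.normal(size=n)

    theta_ls = np.linalg.solve(X.T @ X, X.T @ Y)
    fitted = X @ theta_ls                      # orthogonal projection of Y onto the column space of X
    residual = Y - fitted                      # orthogonal to that column space
    # Pythagoras: ||Y||^2 = ||fitted||^2 + ||residual||^2
    print(np.allclose(Y @ Y, fitted @ fitted + residual @ residual))   # True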

The least-squares estimator is also an M-estimator of ρ-type, with \rho(r):=\frac{r^2}{2}.
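Equivalently, minimizing \sum_i \rho(Y_i - X_i^t\theta) with \rho(r)=r^2/2 by a generic numerical optimizer recovers the same estimate; a sketch using scipy.optimize.minimize on made-up data:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n, p = 60, 3
    X = rng.normal(size=(n, p))
    Y = X @ rng.normal(size=p) + rng.normal(size=n)

    def rho(r):
        return r**2 / 2                                # rho(r) = r^2 / 2

    def objective(theta):
        return np.sum(rho(Y - X @ theta))              # M-estimation criterion

    theta_m = minimize(objective, x0=np.zeros(p)).x    # generic numerical minimization
    theta_ls = np.linalg.solve(X.T @ X, X.T @ Y)       # closed-form least squares
    print(np.allclose(theta_m, theta_ls, atol=1e-4))   # True (up to optimizer tolerance)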

