Mean squared prediction error

In statistics, the mean squared prediction error (MSPE) of a smoothing procedure is the expected sum of squared deviations of the fitted values \widehat{g} from the (unobservable) true function g. If the smoothing procedure has operator matrix L, then

\operatorname{MSPE}(L)=\operatorname{E}\left[\sum_{i=1}^n\left( g(x_i)-\widehat{g}(x_i)\right)^2\right].

The MSPE can be decomposed into two terms, just as the mean squared error is decomposed into bias and variance; for the MSPE, however, one term is the sum of squared biases of the fitted values and the other is the sum of variances of the fitted values:

\operatorname{MSPE}(L)=\sum_{i=1}^n\left(\operatorname{E}\left[\widehat{g}(x_i)\right]-g(x_i)\right)^2+\sum_{i=1}^n\operatorname{var}\left[\widehat{g}(x_i)\right].
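This decomposition can be checked numerically. Below is a minimal sketch, assuming a toy 3-point running-mean smoother and illustrative choices of g and σ (these specifics are not from the article): it compares a Monte Carlo estimate of the MSPE with the sum of the squared-bias and variance terms.

import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 0.3
x = np.linspace(0.0, 1.0, n)
g = np.sin(2 * np.pi * x)          # true function (unobservable in practice)

# Operator matrix L of a 3-point running mean, so that ghat = L @ y.
L = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 1), min(n, i + 2)
    L[i, lo:hi] = 1.0 / (hi - lo)

# Monte Carlo estimate of MSPE(L) = E[ sum_i (g(x_i) - ghat(x_i))^2 ].
reps = 20_000
draws = g + sigma * rng.standard_normal((reps, n))   # reps noisy samples of y
mspe_mc = np.mean(np.sum((g - draws @ L.T) ** 2, axis=1))

# Decomposition: sum of squared biases plus sum of variances.
bias_sq = np.sum((L @ g - g) ** 2)         # E[ghat] = L g, since E[y] = g
variance = sigma**2 * np.trace(L.T @ L)    # sum_i var[ghat(x_i)]
print(mspe_mc, bias_sq + variance)         # the two values should agree closely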

Note that knowledge of g is required in order to calculate MSPE exactly.

Estimation of MSPE

For the model y_i=g(x_i)+\sigma\varepsilon_i where \varepsilon_i\sim\mathcal{N}(0,1), one may write

\operatorname{MSPE}(L)=g'(I-L)'(I-L)g+\sigma^2\operatorname{tr}\left[L'L\right].
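Continuing the sketch above, this matrix form can be evaluated directly; it agrees with the decomposition already computed, since g'(I-L)'(I-L)g is the sum of squared biases and \sigma^2\operatorname{tr}\left[L'L\right] is the sum of variances.

# Continuing the sketch above: evaluate the matrix form of MSPE(L).
I = np.eye(n)
mspe_matrix = g @ (I - L).T @ (I - L) @ g + sigma**2 * np.trace(L.T @ L)
print(mspe_matrix)   # identical to bias_sq + variance from the previous block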

The first (bias) term is equivalent to

\sum_{i=1}^n\left(\operatorname{E}\left[\widehat{g}(x_i)\right]-g(x_i)\right)^2
=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\operatorname{tr}\left[\left(I-L\right)'\left(I-L\right)\right].

Since \operatorname{tr}\left[\left(I-L\right)'\left(I-L\right)\right]=n-2\operatorname{tr}\left[L\right]+\operatorname{tr}\left[L'L\right], substituting into the decomposition gives

\operatorname{MSPE}(L)=\operatorname{E}\left[\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2\right]-\sigma^2\left(n-2\operatorname{tr}\left[L\right]\right).

If \sigma^2 is known or well-estimated by \widehat{\sigma}^2, it becomes possible to estimate MSPE by

\operatorname{\widehat{MSPE}}(L)=\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2-\widehat{\sigma}^2\left(n-2\operatorname{tr}\left[L\right]\right).
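As a sketch of this estimator, continuing the toy smoother above (and treating \sigma^2 as known rather than estimated, an assumption made for simplicity):

# Estimate MSPE from a single observed sample, without using g itself.
y = g + sigma * rng.standard_normal(n)
ghat = L @ y
rss = np.sum((y - ghat) ** 2)                       # residual sum of squares
mspe_hat = rss - sigma**2 * (n - 2 * np.trace(L))   # sigma^2 assumed known here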

Colin Mallows advocated this method in the construction of his model selection statistic Cp, which is a normalized version of the estimated MSPE:

C_p=\frac{\sum_{i=1}^n\left(y_i-\widehat{g}(x_i)\right)^2}{\widehat{\sigma}^2}-n+2\operatorname{tr}\left[L\right],

where p comes from the fact that the number of parameters p estimated for a parametric smoother is given by p=\operatorname{tr}\left[L\right], and C is in honor of Cuthbert Daniel.
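In the same sketch, C_p is just the estimated MSPE rescaled by the variance estimate (here the known \sigma^2 stands in for \widehat{\sigma}^2):

# Mallows' C_p: the estimated MSPE normalized by the variance estimate.
cp = rss / sigma**2 - n + 2 * np.trace(L)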
