Stein's unbiased risk estimate


In statistics, Stein's unbiased risk estimate (SURE) is an unbiased estimator of the mean-squared error of a given estimator, in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter, and thus cannot be determined completely.

The technique is named after its discoverer, Charles Stein.[1]

Formal statement

Let θ ∈ ℝ^n be an unknown deterministic parameter, and let x be a measurement vector distributed normally with mean θ and covariance σ²I. Suppose h(x) is an estimator of θ from x. Then Stein's unbiased risk estimate is given by

\mathrm{SURE}(h) = \|\theta\|^2 + \|h(x)\|^2 + 2 \sigma^2 \sum_{i=1}^n \frac{\partial h_i}{\partial x_i} - 2 \sum_{i=1}^n x_i h_i(x)

where h_i(x) is the ith component of the estimate h(x).
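
As a concrete illustration, the following minimal Python sketch (the function name and interface are ours, not standard) evaluates SURE up to the constant ‖θ‖² term, given the estimate h(x) and its divergence Σ ∂h_i/∂x_i. For a linear shrinkage estimator h(x) = a·x, the divergence is simply n·a, so one would call sure_minus_const(x, a * x, a * x.size, sigma).

    import numpy as np

    def sure_minus_const(x, h, div_h, sigma):
        # SURE(h) - ||theta||^2 for an estimate h = h(x), where div_h is
        # the divergence sum_i dh_i/dx_i evaluated at x.  The dropped
        # constant ||theta||^2 is the same for every estimator, so this
        # quantity can still be compared and minimized across estimators.
        return np.sum(h**2) + 2 * sigma**2 * div_h - 2 * np.sum(x * h)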

The importance of SURE is that it is an unbiased estimate of the mean-squared error (or squared error risk) of h(x), i.e.

E \{ \mathrm{SURE}(h) \} = \mathrm{MSE}(h) = E \{ \| h(x) - \theta \|^2 \}.

Thus, minimizing SURE can be expected to minimize the MSE. Except for the first term in SURE, which is identical for all estimators, there is no dependence on the unknown parameter θ in the expression for SURE above. Thus, it can be manipulated (e.g., to determine optimal estimation settings) without knowledge of θ.
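
This unbiasedness is easy to verify numerically. The sketch below (a simulation of our own construction, with arbitrarily chosen problem sizes) draws repeated measurements around a fixed θ and compares the average of SURE against the average squared error of the linear shrinkage estimator h(x) = a·x, whose divergence is n·a; the two averages should agree up to Monte Carlo noise.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma, a = 50, 1.0, 0.7      # assumed sizes, chosen for illustration
    theta = rng.normal(size=n)      # fixed parameter (known to the simulation)

    sure_vals, sq_err_vals = [], []
    for _ in range(50_000):
        x = theta + sigma * rng.normal(size=n)  # x ~ N(theta, sigma^2 I)
        h = a * x                               # linear shrinkage estimate
        div_h = n * a                           # sum_i d(a*x_i)/dx_i
        sure = theta @ theta + h @ h + 2 * sigma**2 * div_h - 2 * (x @ h)
        sure_vals.append(sure)
        sq_err_vals.append(np.sum((h - theta)**2))

    print(np.mean(sure_vals))    # ~ E{SURE(h)}
    print(np.mean(sq_err_vals))  # ~ MSE(h); should closely match the line above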

Applications

A standard application of SURE is to choose a parametric form for an estimator and then optimize the values of its parameters to minimize the risk estimate. This technique has been applied in several settings. For example, a variant of the James–Stein estimator can be derived by finding the optimal shrinkage estimator.[1] The technique has also been used by Donoho and Johnstone to select the threshold in a wavelet shrinkage (denoising) setting.[2]
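
As a simplified instance of this recipe (a sketch of our own, not the exact derivation in [1]): within the family h_a(x) = a·x, SURE reduces, up to the constant ‖θ‖², to a²‖x‖² + 2σ²na − 2a‖x‖², a quadratic in a minimized at a* = 1 − nσ²/‖x‖².

    import numpy as np

    def sure_optimal_shrinkage(x, sigma):
        # Minimize SURE over the family h_a(x) = a*x.  Dropping the
        # constant ||theta||^2, SURE(a) = a^2 ||x||^2 + 2 sigma^2 n a
        # - 2 a ||x||^2, whose minimizer is a* = 1 - n sigma^2 / ||x||^2.
        # (The James-Stein estimator uses n - 2 in place of n.)
        n = x.size
        a = 1.0 - n * sigma**2 / np.sum(x**2)
        return a * x

In the wavelet setting of [2], the same idea is applied to soft thresholding, h_i(x) = sign(x_i) max(|x_i| − t, 0), for which the divergence term simply counts the coordinates with |x_i| > t, and SURE is minimized over the threshold t.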

References

  1. Stein, Charles M. (November 1981). "Estimation of the Mean of a Multivariate Normal Distribution". The Annals of Statistics 9 (6): 1135–1151.
  2. Donoho, David L.; Johnstone, Iain M. (December 1995). "Adapting to Unknown Smoothness via Wavelet Shrinkage". Journal of the American Statistical Association 90 (432): 1200–1224.

