Particle filter

From Wikipedia, the free encyclopedia

Figure: result of particle filtering (red line) based on observed data generated from the blue line.

Particle filters, also known as sequential Monte Carlo methods (SMC), are sophisticated model estimation techniques based on simulation.

They are usually used to estimate Bayesian models and are the sequential ('on-line') analogue of Markov chain Monte Carlo (MCMC) batch methods; they are also closely related to importance sampling methods. If well designed, particle filters can be much faster than MCMC. They are often an alternative to the Extended Kalman filter (EKF) or Unscented Kalman filter (UKF), with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than either the EKF or UKF. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.

Goal

The particle filter aims to estimate the sequence of hidden parameters x_k for k = 0, 1, 2, \ldots, based only on the observed data y_k for k = 0, 1, 2, \ldots. All Bayesian estimates of x_k follow from the posterior distribution p(x_k|y_0,y_1,\ldots,y_k). In contrast, the MCMC or importance sampling approach would model the full posterior p(x_0,x_1,\ldots,x_k|y_0,y_1,\ldots,y_k).

Model

Particle methods assume that the hidden states x_k and the observations y_k can be modeled in this form:

  • x_0, x_1, \ldots is a first-order Markov process, x_k|x_{k-1} \sim p_{x|x}(x|x_{k-1}), with an initial distribution p(x_0).
  • The observations y_0, y_1, \ldots are conditionally independent provided that x_0, x_1, \ldots are known; in other words, each y_k depends only on x_k:

y_k|x_k \sim p_{y|x}(y|x_k)

A common example of such a model is

x_k = f(x_{k-1}) + v_k \,
y_k = h(x_k) + w_k \,

where both v_k and w_k are mutually independent and identically distributed sequences with known probability density functions, and f(\cdot) and h(\cdot) are known functions. These two equations can be viewed as state-space equations and look similar to the state-space equations for the Kalman filter. If the functions f(\cdot) and h(\cdot) were linear, and if both v_k and w_k were Gaussian, the Kalman filter would find the exact Bayesian filtering distribution. If not, Kalman-filter-based methods are a first-order approximation. Particle filters are also an approximation, but with enough particles they can be much more accurate.
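As a concrete illustration, the following sketch simulates one trajectory from such a model. The specific choices of f, h and the Gaussian noise scales are hypothetical stand-ins, not part of the general framework; any known functions and noise densities would do.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):   # hypothetical state-transition function
        return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

    def h(x):   # hypothetical observation function
        return x ** 2 / 20.0

    sigma_v, sigma_w = 1.0, 1.0   # assumed noise standard deviations

    def simulate(N):
        """Draw one state trajectory x_0..x_N and observations y_0..y_N."""
        x = rng.normal(0.0, 1.0)                        # draw x_0 from p(x_0)
        xs, ys = [], []
        for _ in range(N + 1):
            xs.append(x)
            ys.append(h(x) + rng.normal(0.0, sigma_w))  # y_k = h(x_k) + w_k
            x = f(x) + rng.normal(0.0, sigma_v)         # x_{k+1} = f(x_k) + v_{k+1}
        return np.array(xs), np.array(ys)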

Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution p(x_k|y_0,\dots,y_k). So, with P samples, expectations with respect to the filtering distribution are approximated by

\int f(x_k)\, p(x_k|y_0,\dots,y_k)\, dx_k \approx \frac{1}{P}\sum_{L=1}^P f(x_k^{(L)})

and, in the usual Monte Carlo way, suitable choices of f(\cdot) give all the moments, etc., of the distribution up to some degree of approximation.

Generally, the algorithm is repeated iteratively for a specific number of k values (call this N). Initializing x^{(L)}_{k=0|k=0} for all particles L provides a starting place to generate x_1, which can then be used to generate x_2, and so on up to k = N. When done, the mean of x_k over all the particles, \frac{1}{P}\sum_{L=1}^P x_k^{(L)}, is approximately the actual value of x_k (L being the particle index).
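As a minimal sketch (reusing the NumPy setup above), posterior expectations under the particle approximation are plain sample averages; the particle array here is a stand-in rather than the output of an actual filter step.

    # particles: array of shape (P,) holding x_k^{(1)}, ..., x_k^{(P)}
    particles = rng.normal(0.0, 1.0, size=1000)   # stand-in sample for illustration

    posterior_mean = particles.mean()             # f(x) = x
    second_moment = (particles ** 2).mean()       # f(x) = x^2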

Sampling Importance Resampling (SIR)

Sampling importance resampling (SIR) is a very commonly used particle filtering algorithm, which approximates the filtering distribution p(x_k|y_0,\ldots,y_k) by a weighted set of P particles

\{(w^{(L)}_k,x^{(L)}_k)~:~L=1,\ldots,P\}.

The importance weights w^{(L)}_k are approximations to the relative posterior probabilities (or densities) of the particles such that \sum_{L=1}^P w^{(L)}_k = 1.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f(\cdot) can be approximated as a weighted average


\int f(x_k)\, p(x_k|y_0,\dots,y_k)\, dx_k \approx \sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).
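In code this weighted average is a one-liner; the following sketch assumes the weights have already been normalized to sum to one.

    def weighted_expectation(f, particles, weights):
        """Approximate E[f(x_k) | y_0..y_k] by a weighted particle average."""
        return np.sum(weights * f(particles))

    # e.g. the posterior mean: weighted_expectation(lambda x: x, particles, weights)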

For a finite set of particles, the algorithm performance is dependent on the choice of the proposal distribution

\pi(x_k|x_{0:k-1},y_{0:k})\, .

The optimal proposal distribution is given as the target distribution


\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_{k}). \,

However, the transition prior is often used as the importance function, since it is easier to draw particles (or samples) from it and to perform the subsequent importance-weight calculations:


\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1}). \,

Sampling importance resampling (SIR) filters that use the transition prior as the importance function are commonly known as the bootstrap filter and the condensation algorithm.
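Substituting this importance function into the weight update of step 2 below makes the transition densities cancel, so the unnormalized weights reduce to the likelihood alone:

\hat{w}^{(L)}_k = w^{(L)}_{k-1}\, p(y_k|x^{(L)}_k).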

Resampling is used to avoid the problem of degeneracy of the algorithm, that is, the situation where all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the choice of resampling method. The stratified resampling proposed by Kitagawa (1996) is optimal in terms of variance.
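The following is a minimal NumPy sketch of stratified resampling: the unit interval is split into P equal strata, one uniform draw is taken per stratum, and particles are selected through the inverse CDF of the weights. Implementation details such as the round-off guard vary between libraries.

    def stratified_resample(weights, rng):
        """Return P indices drawn with probabilities proportional to weights."""
        P = len(weights)
        # one uniform point in each of the P equal-width strata of [0, 1)
        positions = (np.arange(P) + rng.uniform(size=P)) / P
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0          # guard against floating-point round-off
        return np.searchsorted(cumulative, positions)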

A single step of sequential importance resampling is as follows (a code sketch of the complete step appears after the list):

1) For L=1,\ldots,P draw samples from the proposal distribution

x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1},y_{0:k})
2) For L=1,\ldots,P update the importance weights up to a normalizing constant:

\hat{w}^{(L)}_k = w^{(L)}_{k-1} \frac{p(y_k|x^{(L)}_k)\, p(x^{(L)}_k|x^{(L)}_{k-1})}{\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k})}.
3) For L=1,\ldots,P compute the normalized importance weights:

w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}
4) Compute an estimate of the effective number of particles as

\hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P\left(w^{(L)}_k\right)^2}
5) If the effective number of particles is less than a given threshold \hat{N}_\mathit{eff} < N_{thr}, then perform resampling:
a) Draw P particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For L=1,\ldots,P set w^{(L)}_k = 1/P.
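The five steps above map almost line for line onto code. The following sketch implements one bootstrap-filter step (transition prior as proposal) for the scalar model from the earlier sketches; f, h, sigma_v, sigma_w and stratified_resample are the hypothetical definitions given above, and the threshold N_thr = P/2 is a common but arbitrary choice.

    def sir_step(particles, weights, y_k, rng, N_thr=None):
        """One SIR update (steps 1-5 above) with the transition prior as proposal."""
        P = len(particles)
        if N_thr is None:
            N_thr = P / 2.0                   # common, but arbitrary, threshold

        # 1) draw from the transition prior p(x_k | x_{k-1})
        proposed = f(particles) + rng.normal(0.0, sigma_v, size=P)

        # 2) with this proposal the weight update reduces to the likelihood;
        #    the Gaussian normalizing constant is dropped since it cancels in 3)
        w_hat = weights * np.exp(-0.5 * ((y_k - h(proposed)) / sigma_w) ** 2)

        # 3) normalize the importance weights
        weights = w_hat / np.sum(w_hat)

        # 4) effective number of particles
        n_eff = 1.0 / np.sum(weights ** 2)

        # 5) resample only when degeneracy threatens
        if n_eff < N_thr:
            idx = stratified_resample(weights, rng)
            proposed = proposed[idx]
            weights = np.full(P, 1.0 / P)

        return proposed, weights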


The term Sequential Importance Resampling is also sometimes used when referring to SIR filters.

Sequential Importance Sampling (SIS)

Sequential importance sampling (SIS) is the same as sampling importance resampling (SIR), but without the resampling stage.


[edit] "Direct version" algorithm

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from p_{x_k|y_{1:k}}(x|y_{1:k}):

1) Set p = 1
2) Uniformly generate L from \{1,\ldots,P\}
3) Generate a test \hat{x} from its distribution p_{x_k|x_{k-1}}(x|x_{k-1|k-1}^{(L)})
4) Compute the likelihood \hat{y} = p_{y|x}(y_k|\hat{x}), where y_k is the measured value
5) Generate another uniform u from [0, m_k], where m_k is an upper bound on p_{y|x}(y_k|x)
6) Compare u and \hat{y}
6a) If u is larger, repeat from step 2
6b) If u is smaller, save \hat{x} as x_{k|k}^{(p)} and increment p
7) If p > P then quit

The goal is to generate P "particles" at step k using only the particles from step k − 1. This requires that a Markov equation can be written (and computed) to generate an x_k based only upon x_{k-1}. This algorithm uses composition of the P particles from k − 1 to generate a particle at k and repeats (steps 2-6) until P particles are generated at k.

This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, x(k,L) would be the Lth particle at step k, which can also be written x_k^{(L)} (as done above in the algorithm). Step 3 generates a potential x_k based on a randomly chosen particle x_{k-1}^{(L)} at time k − 1 and accepts or rejects it in step 6. In other words, the x_k values are generated using the previously generated x_{k-1} values.
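A minimal sketch of this procedure for the Gaussian-noise model used in the earlier sketches is below. Here m_k can be taken as the peak of the Gaussian observation density, which upper-bounds p_{y|x}(y_k|x), and max_tries is a hypothetical guard against the rejection loop running forever.

    def direct_version_step(particles_prev, y_k, rng, max_tries=100000):
        """Generate P particles at step k from the particles at step k-1
        by composition and rejection (steps 1-7 above)."""
        P = len(particles_prev)
        m_k = 1.0 / (sigma_w * np.sqrt(2.0 * np.pi))   # bound on the likelihood
        new_particles = np.empty(P)
        p = 0                                          # step 1
        for _ in range(max_tries):
            if p >= P:                                 # step 7
                break
            L = rng.integers(P)                        # step 2: pick a particle
            x_hat = f(particles_prev[L]) + rng.normal(0.0, sigma_v)   # step 3
            y_hat = np.exp(-0.5 * ((y_k - h(x_hat)) / sigma_w) ** 2) / (
                sigma_w * np.sqrt(2.0 * np.pi))        # step 4: likelihood
            u = rng.uniform(0.0, m_k)                  # step 5
            if u < y_hat:                              # step 6b: accept
                new_particles[p] = x_hat
                p += 1
        return new_particles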

Other Particle Filters

  • Auxiliary particle filter
  • Gaussian particle filter
  • Unscented particle filter
  • Monte Carlo particle filter
  • Gauss-Hermite particle filter
  • Cost Reference particle filter

References

  • Cappé, O.; Moulines, E.; Rydén, T. (2005). Inference in Hidden Markov Models. Springer.
  • Kitagawa, G. (1996). "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models". Journal of Computational and Graphical Statistics 5 (1): 1–25.
  • Liu, J. (2001). Monte Carlo Strategies in Scientific Computing. Springer.
  • Ristic, B.; Arulampalam, S.; Gordon, N. (2004). Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House.
  • Cappé, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". Proceedings of the IEEE 95 (5).
