Variable-order Markov model

Variable-order Markov (VOM) models are an important class of models that extend the well known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.

This realization sequence is often called the context; therefore, VOM models are also called context trees.[1] The flexibility in the number of conditioning random variables turns out to be a real advantage for many applications, such as statistical analysis, classification and prediction.[2][3][4]

Example

Consider, for example, a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string aaabcaaabcaaabcaaabc...aaabc constructed from infinite concatenations of the sub-string aaabc.

The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: {Pr(a|aa) = 0.5, Pr(b|aa) = 0.5, Pr(c|b) = 1.0, Pr(a|c)= 1.0, Pr(a|ca)= 1.0}.

In this example, Pr(c|ab) = Pr(c|b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0.
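
As an illustration (not taken from any of the cited implementations; the function names below are made up for this sketch), the following Python fragment counts symbol occurrences after every context of length at most 2 in a long repetition of aaabc and recovers the five conditional probability components listed above.

```python
from collections import Counter, defaultdict

def vom_counts(text, max_order):
    """Count symbol occurrences after every context of length 0..max_order."""
    counts = defaultdict(Counter)
    for i, symbol in enumerate(text):
        for k in range(min(i, max_order) + 1):
            context = text[i - k:i]
            counts[context][symbol] += 1
    return counts

def conditional(counts, symbol, context):
    """Empirical estimate of Pr(symbol | context)."""
    total = sum(counts[context].values())
    return counts[context][symbol] / total if total else 0.0

training = "aaabc" * 1000          # stand-in for the infinite concatenation
counts = vom_counts(training, max_order=2)

# The five components of the maximal-order-2 VOM model from the example:
for sym, ctx in [("a", "aa"), ("b", "aa"), ("c", "b"), ("a", "c"), ("a", "ca")]:
    print(f"Pr({sym}|{ctx}) = {conditional(counts, sym, ctx):.2f}")
```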

To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: {Pr(a|a), Pr(a|b), Pr(a|c), Pr(b|a), Pr(b|b), Pr(b|c), Pr(c|a), Pr(c|b), Pr(c|c)}. To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: {Pr(a|aa), Pr(a|ab), ..., Pr(c|cc)}. And to construct the Markov chain of order 3 for the next character one must estimate the following 81 conditional probability components: {Pr(a|aaa), Pr(a|aab), ..., Pr(c|ccc)}.

In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
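
For a fixed-order Markov chain over an alphabet of size |A|, an order-D model requires |A|^(D+1) conditional probability components (|A|^D contexts, each paired with |A| possible next symbols); a quick arithmetic check reproduces the counts 9, 27 and 81 from the example above.

```python
alphabet_size = 3
for order in (1, 2, 3):
    print(order, alphabet_size ** (order + 1))  # -> 9, 27, 81 components
```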

The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent of the future states; accordingly, "a great reduction in the number of model parameters can be achieved."[1]

Definition

Let A be a state space (finite alphabet) of size |A|.

Consider a sequence with the Markov property x_1^{n} = x_1 x_2 \dots x_n of n realizations of random variables, where x_i \in A is the state (symbol) at position i (1 ≤ i ≤ n), and the concatenation of states x_i and x_{i+1} is denoted by x_i x_{i+1}.

Given a training set of observed states, x_1^{n}, the construction algorithm of the VOM models[2][3][4] learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states.

Specifically, the learner generates a conditional probability distribution P(x_i | s) for a symbol x_i \in A given a context s \in A^*, where the * sign represents a sequence of states of any length, including the empty context.

VOM models attempt to estimate conditional distributions of the form P(x_i | s), where the context length |s| ≤ D varies depending on the available statistics. In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length |s| = D and can therefore be considered special cases of VOM models.

Effectively, for a given training sequence, VOM models have been found to obtain a better model parameterization than fixed-order Markov models, which leads to a better variance-bias tradeoff of the learned models.[2][3][4]
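
As a rough sketch of how such an estimator can behave (a naive count-based back-off, illustrative only and not any specific published algorithm such as context-tree weighting or prediction by partial matching; the threshold and function names are assumptions of this sketch), a learner can fall back from a long context to progressively shorter suffixes until it reaches one with enough observed statistics:

```python
from collections import Counter, defaultdict

MIN_OBSERVATIONS = 5   # hypothetical threshold for "enough statistics"

def train(text, max_order):
    """Count symbol occurrences after every context of length 0..max_order."""
    counts = defaultdict(Counter)
    for i, symbol in enumerate(text):
        for k in range(min(i, max_order) + 1):
            counts[text[i - k:i]][symbol] += 1
    return counts

def predict(counts, context, max_order):
    """Return the conditional distribution for the longest usable suffix of context."""
    suffix = context[-max_order:] if max_order else ""
    while suffix:
        seen = counts.get(suffix)
        if seen and sum(seen.values()) >= MIN_OBSERVATIONS:
            total = sum(seen.values())
            return suffix, {s: c / total for s, c in seen.items()}
        suffix = suffix[1:]            # back off to a shorter context
    empty = counts[""]
    total = sum(empty.values())
    return "", {s: c / total for s, c in empty.items()}

counts = train("aaabc" * 1000, max_order=2)
# The order-2 context "bb" never occurs, so the model backs off to "b":
print(predict(counts, "bb", max_order=2))   # -> ('b', {'c': 1.0})
```

In this toy run the unobserved context bb falls back to the shorter context b, in the same spirit as the Pr(c|ab) = Pr(c|b) observation in the example above.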

Application areas

Various efficient algorithms have been devised for estimating the parameters of the VOM model.[3]

VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression,[1] document compression,[3] classification and identification of DNA and protein sequences,[2] statistical process control,[4] spam filtering[5] and others.

References

  1. Rissanen, J. (1983). "A Universal Data Compression System". IEEE Transactions on Information Theory 29 (5): 656–664. doi:10.1109/TIT.1983.1056741.
  2. Shmilovici, A.; Ben-Gal, I. (2007). "Using a VOM Model for Reconstructing Potential Coding Regions in EST Sequences". Computational Statistics 22 (1): 49–69. doi:10.1007/s00180-007-0021-8.
  3. Begleiter, R.; El-Yaniv, R.; Yona, G. (2004). "On Prediction Using Variable Order Markov Models". Journal of Artificial Intelligence Research 22: 385–421.
  4. Ben-Gal, I.; Morag, G.; Shmilovici, A. (2003). "CSPC: A Monitoring Procedure for State Dependent Processes". Technometrics 45 (4): 293–311. doi:10.1198/004017003000000122.
  5. Bratko, A.; Cormack, G. V.; Filipic, B.; Lynam, T.; Zupan, B. (2006). "Spam Filtering Using Statistical Data Compression Models". Journal of Machine Learning Research 7: 2673–2698.

