Unicode normalization

From Wikipedia, the free encyclopedia

Unicode normalization is a form of text normalization that transforms equivalent characters or sequences of characters into a consistent underlying representation so that they may be easily compared. Normalization is important when comparing text strings for searching and sorting (collation).

Composition and decomposition

Underlying Unicode's normalization methods is the concept of character composition and decomposition. Character composition is the process of combining simpler characters into fewer precomposed characters, such as the n character and the combining ~ character into the single ñ character. Decomposition is the opposite process, breaking precomposed characters back into their component pieces.
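The round trip between precomposed and decomposed forms can be seen directly with Python's standard-library `unicodedata` module, using the ñ example above:

```python
import unicodedata

composed = "\u00f1"      # "ñ" as a single precomposed character (U+00F1)
decomposed = "n\u0303"   # "n" followed by COMBINING TILDE (U+0303)

# The two strings are not equal code point for code point...
assert composed != decomposed

# ...but composing the decomposed sequence yields the precomposed character,
assert unicodedata.normalize("NFC", decomposed) == composed
# and decomposing the precomposed character yields the combining sequence.
assert unicodedata.normalize("NFD", composed) == decomposed
```

Both strings render identically; only their underlying code point sequences differ, which is exactly what normalization reconciles.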

Unicode composes combining characters and decomposes compatibility characters based on what it calls equivalence. Unicode defines two standards of differing breadth for this equivalence: canonical equivalence, which equates only characters and sequences that are functionally identical and should be visually indistinguishable, and the broader compatibility equivalence, which additionally equates compatibility characters that may be visually and even semantically distinct from their replacements. See the articles on Unicode equivalence and Unicode compatibility characters for more information.

Standards

Unicode defines four normalization standards.

NFD
Normalization Form Canonical Decomposition
Characters are decomposed by canonical equivalence.
NFC
Normalization Form Canonical Composition
Characters are decomposed and then recomposed by canonical equivalence. It is possible for the result to be a different sequence of characters than the original.
NFKD
Normalization Form Compatibility Decomposition
Characters are decomposed by compatibility equivalence.
NFKC
Normalization Form Compatibility Composition
Characters are decomposed by compatibility equivalence, then recomposed by canonical equivalence.
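The difference between the canonical forms (NFD, NFC) and the compatibility forms (NFKD, NFKC) shows up with a compatibility character such as the "ﬁ" ligature (U+FB01), again using Python's `unicodedata`:

```python
import unicodedata

s = "\ufb01"  # "ﬁ" ligature, a compatibility character

# The ligature has no canonical decomposition, so the canonical forms leave it alone...
assert unicodedata.normalize("NFD", s) == s
assert unicodedata.normalize("NFC", s) == s

# ...while the compatibility forms replace it with the plain letters "fi",
# losing the (purely presentational) ligature distinction.
assert unicodedata.normalize("NFKD", s) == "fi"
assert unicodedata.normalize("NFKC", s) == "fi"
```

This is why NFKC/NFKD are lossy in a way NFC/NFD are not: compatibility normalization is suited to loose matching (searching, identifiers), while canonical normalization preserves the original text's meaning.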

All four forms impose a canonical ordering on combining characters, even in sequences that were already decomposed before normalization. They may also replace a character or sequence with an equivalent character or sequence of the same length. Both steps are necessary to achieve the consistent encoding that normalization requires.
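The reordering of already-decomposed sequences can be demonstrated with two combining marks on one base letter: normalization sorts them by combining class even though nothing is composed or decomposed:

```python
import unicodedata

# "q" followed by COMBINING DOT ABOVE (class 230) then COMBINING DOT BELOW (class 220).
s = "q\u0307\u0323"

# All forms sort the marks into canonical order (lower combining class first),
# so the dot below (U+0323) ends up before the dot above (U+0307).
assert unicodedata.normalize("NFD", s) == "q\u0323\u0307"
assert unicodedata.normalize("NFC", s) == "q\u0323\u0307"
```

Without this canonical ordering, two renderings of the same accented letter could remain unequal even after both were decomposed.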
