Unicode normalization
From Wikipedia, the free encyclopedia
Unicode normalization is a form of text normalization that transforms equivalent characters or sequences of characters into a consistent underlying representation so that they may be easily compared. Normalization is important when comparing text strings for searching and sorting (collation).
Composition and decomposition
Underlying Unicode's normalization methods is the concept of character composition and decomposition. Character composition is the process of combining simpler characters into fewer precomposed characters, such as combining the character n with the combining tilde character to form the single character ñ. Decomposition is the opposite process: breaking precomposed characters back into their component pieces.
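In Python, for example, composition and decomposition are exposed through the standard-library unicodedata module; a minimal sketch of the ñ example above:

```python
import unicodedata

# Decompose the precomposed character ñ (U+00F1) into
# n (U+006E) followed by a combining tilde (U+0303).
decomposed = unicodedata.normalize("NFD", "\u00f1")
assert decomposed == "n\u0303"

# Recompose the two-character sequence back into the single ñ.
composed = unicodedata.normalize("NFC", "n\u0303")
assert composed == "\u00f1"
```

Note that the decomposed and composed strings compare unequal with `==`, even though they represent the same text; this is exactly the problem normalization solves before comparison.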
Unicode composes combining characters and decomposes compatibility characters based on what it calls equivalence. Unicode has two standards of varying breadth for this equivalence: canonical equivalence, in which equivalent characters are functionally identical and should be visually indistinguishable; and compatibility equivalence, in which equivalent characters may be visually and even somewhat semantically distinct. See the articles on Unicode equivalence and Unicode compatibility characters for more information.
Standards
Unicode defines four normalization forms.

- NFD (Normalization Form Canonical Decomposition): characters are decomposed by canonical equivalence.
- NFC (Normalization Form Canonical Composition): characters are decomposed and then recomposed by canonical equivalence. The result may be a different sequence of characters than the original.
- NFKD (Normalization Form Compatibility Decomposition): characters are decomposed by compatibility equivalence.
- NFKC (Normalization Form Compatibility Composition): characters are decomposed by compatibility equivalence, then recomposed by canonical equivalence.
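As an illustration (again using Python's standard-library unicodedata module), the four forms can be compared on the ligature ﬁ (U+FB01), a compatibility character:

```python
import unicodedata

# The ligature ﬁ (U+FB01) has a compatibility decomposition but no
# canonical one: the canonical forms (NFD, NFC) leave it intact,
# while the compatibility forms (NFKD, NFKC) replace it with the
# plain two-character sequence "fi".
ligature = "\ufb01"
results = {form: unicodedata.normalize(form, ligature)
           for form in ("NFD", "NFC", "NFKD", "NFKC")}
```

Because compatibility decomposition discards distinctions such as ligation, NFKD and NFKC are lossy and are typically used for fuzzy matching and searching rather than for storage.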
All of the above methods standardize the order in which decomposed combining characters appear, even in sequences that were already decomposed before normalization. They may also replace characters or sequences with equivalent characters or sequences even when the number of characters does not change. These steps achieve the consistency of encoding that normalization requires.
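Both behaviors can be seen directly in Python's unicodedata module; a small sketch:

```python
import unicodedata

# Reordering: the same two combining marks in different orders
# normalize to one canonical order (lower combining class first:
# dot below, U+0323, class 220, precedes dot above, U+0307, class 230).
a = "q\u0307\u0323"  # q + combining dot above + combining dot below
b = "q\u0323\u0307"  # q + combining dot below + combining dot above
nfd_a = unicodedata.normalize("NFD", a)
nfd_b = unicodedata.normalize("NFD", b)
assert nfd_a == nfd_b == "q\u0323\u0307"

# One-for-one replacement: the angstrom sign (U+212B) is canonically
# equivalent to Å (U+00C5), so normalization substitutes it without
# changing the number of characters.
assert unicodedata.normalize("NFC", "\u212b") == "\u00c5"
```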
See also
- Unicode
- Unicode equivalence
- Unicode compatibility characters
- Precomposed character
- Ligature (typography)