Character (computing)
In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.
An example of a character is a letter, numeral, or punctuation mark. The concept also includes control characters, which do not correspond to symbols in a particular natural language, but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.
Character encoding
Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of bits. Two examples of popular encodings are ASCII and the UTF-8 encoding for Unicode. According to statistics collected by Google, UTF-8 is the most common encoding used on web pages [1]. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
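The mapping from a character to a concrete byte sequence can be illustrated with a short Python sketch. The choice of Python and of the sample character "é" is purely illustrative; only the resulting byte values are determined by the encodings themselves.

    # Sketch: one abstract character, several encoded representations.
    text = "é"                       # a single character
    print(text.encode("utf-8"))      # b'\xc3\xa9'  -> two bytes in UTF-8
    print(text.encode("latin-1"))    # b'\xe9'      -> one byte in ISO 8859-1
    try:
        text.encode("ascii")         # "é" has no ASCII code point
    except UnicodeEncodeError as err:
        print(err)                   # encoding fails: ASCII covers only 128 characters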
Terminology
Historically, the term character has been widely used by industry professionals to refer to an encoded character (often only as exposed via a programming language's API). Likewise, character set has been widely used to refer to a specific repertoire of abstract characters that have been mapped to specific bit sequences. With the advent of Unicode and bit-agnostic encoding forms, more precise terminology is increasingly favored.
It is important, in some contexts, to make the distinction that a character is a unit of information, and thus does not imply any particular visual manifestation. For example, the Hebrew letter Aleph ("א") is often used by mathematicians to denote certain kinds of infinity, but it is also used in ordinary Hebrew text. In Unicode, these two uses are different characters and are assigned two different code points, though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But they nonetheless represent the same information, are considered the same character, and share the same Unicode code point.
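This distinction can be made concrete with Python's standard unicodedata module. The code points shown (U+05D0, U+2135, U+6C34) are those actually assigned by Unicode; the script itself is only an illustrative sketch.

    import unicodedata

    hebrew_alef = "\u05D0"   # HEBREW LETTER ALEF, used in ordinary Hebrew text
    alef_symbol = "\u2135"   # ALEF SYMBOL, used in mathematical notation
    water       = "\u6C34"   # CJK UNIFIED IDEOGRAPH-6C34 ("water")

    print(unicodedata.name(hebrew_alef))   # HEBREW LETTER ALEF
    print(unicodedata.name(alef_symbol))   # ALEF SYMBOL
    print(hebrew_alef == alef_symbol)      # False: distinct characters, similar glyphs
    print(hex(ord(water)))                 # 0x6c34: one code point shared by Chinese and Japanese text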
The term glyph is used to describe a particular physical appearance of a character. Many computer fonts consist of glyphs that are indexed by the Unicode code point of the character that each glyph represents.
The definition of character, or abstract character, is defined jointly by The Unicode Standard and ISO/IEC 10646 as "a member of a set of elements used for the organisation, control, or representation of data." Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. The standards also differentiate between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.
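The layered relationship between an abstract character, its code point, and its encoded form can be sketched in Python. The variable names below are illustrative, not terms defined by either standard.

    # Sketch: abstract character -> code point -> encoded bytes.
    abstract_character = "A"                            # the abstract character
    code_point = ord(abstract_character)                # its coded form: U+0041, decimal 65
    encoded_bytes = abstract_character.encode("utf-8")  # its encoded form in one particular encoding

    print(hex(code_point))    # 0x41
    print(encoded_bytes)      # b'A'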
See also
- String, a sequence of characters
- Fill character
- Non-spacing character
External links
- Characters: A Brief Introduction by The Linux Information Project (LINFO)
- ISO/IEC TR 15285:1998 summarizes the ISO/IEC character model, focusing on terminology definitions and the differentiation between characters and glyphs