Audio signal processing
Audio signal processing, sometimes referred to as audio processing, is the processing of a representation of auditory signals, or sound. The representation can be digital or analog.
The focus in audio signal processing is most typically a mathematical analysis of which parts of the signal are audible. For example, when a signal is modified for a particular purpose, the modification can be controlled in the auditory domain. Which parts of the signal are heard and which are not is determined both by the physiology of the human hearing system and by human psychology. These properties are analysed within the field of psychoacoustics.
History of audio processing
Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links.
Analog signals
An analog representation is usually electrical; a voltage level represents the air pressure waveform of the sound.
Digital signals
A digital representation expresses the pressure waveform as a sequence of symbols, usually binary numbers, which permits digital signal processing. Although all real-world audio signals are continuous-time, continuous-level analog signals, their frequency range is limited by physical effects, and human ears cannot perceive frequencies below approximately 20 Hz or above approximately 18 kHz (the upper limit depends strongly on the age of the listener). Therefore, there is no significant loss of information when the analog signal is sampled at a sufficiently high sampling rate (see sampling). In addition, the dynamic range of audio signals is limited by noise; a signal-to-noise ratio of more than about 130 dB is almost impossible to achieve, so quantization does not result in a significant loss of information either, if done appropriately. Both sampling and quantization must be applied to convert the continuous-time analog signal to a discrete-time digital representation. Although such a conversion is inherently somewhat lossy, most modern audio systems use this approach because the techniques of digital signal processing are much more powerful and efficient than analog-domain signal processing.
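As a minimal sketch (not part of the original article) of the two conversion steps described above, the following Python fragment samples a stand-in for a continuous waveform and quantizes it to 16-bit integers. The 44.1 kHz rate, 16-bit depth, and 440 Hz test tone are illustrative assumptions, not requirements.

    import numpy as np

    sample_rate = 44100        # samples per second; comfortably above twice the audible band (assumed value)
    duration = 0.01            # seconds of signal to generate
    t = np.arange(0, duration, 1.0 / sample_rate)      # discrete sample times (sampling step)

    analog_like = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # stand-in for the continuous waveform

    # Quantization step: map the continuous amplitude range [-1, 1) onto 16-bit integers.
    quantized = np.round(analog_like * 32767).astype(np.int16)

    # The dequantized values differ from the original only by a small quantization error,
    # well below audibility for 16-bit audio.
    error = analog_like - quantized / 32767.0
    print("max quantization error:", np.abs(error).max())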
Application areas
Processing methods and application areas include storage, level compression, data compression, transmission, and enhancement (e.g., equalization, filtering, noise cancellation, and echo or reverb removal or addition).
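As a hedged illustration of one of the methods listed above, level (dynamic range) compression reduces the gain of samples whose level exceeds a threshold. The Python sketch below is a simplified static compressor; the threshold, ratio, and function name are assumptions for illustration, not a standard implementation.

    import numpy as np

    def compress(signal, threshold_db=-20.0, ratio=4.0):
        """Static level compression: attenuate the part of each sample's level
        that exceeds the threshold by the given ratio (parameter values assumed)."""
        eps = 1e-12                                            # avoid log of zero
        level_db = 20 * np.log10(np.abs(signal) + eps)         # instantaneous level in dB
        over = np.maximum(level_db - threshold_db, 0.0)        # amount above the threshold
        gain_db = -over * (1.0 - 1.0 / ratio)                  # gain reduction above threshold
        return signal * 10 ** (gain_db / 20.0)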
Audio broadcasting
Audio broadcasting (whether for television or audio-only broadcasting) is perhaps the biggest global market segment and user area for audio processing products.
Traditionally, the most important audio processing in broadcasting takes place just before the transmitter. Studio audio processing is limited in the modern era because digital audio systems, such as mixers and routers, are pervasive in the studio.
In audio broadcasting, the audio processor must
- prevent overmodulation, and minimize it when it occurs
- maximize overall loudness
- compensate for non-linear transmitters, more common with medium wave and shortwave broadcasting
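A much-simplified Python sketch of the first two requirements follows: make-up gain raises the overall loudness, and a soft limiter keeps peaks below a ceiling so the transmitter is not overmodulated. The function name and parameter values are assumptions for illustration; real broadcast processors are far more sophisticated (e.g., multiband compression and look-ahead limiting).

    import numpy as np

    def broadcast_limit(signal, ceiling=0.9, makeup_gain=2.0):
        """Toy broadcast-style processor (illustrative only): boost loudness,
        then soft-clip so peaks never exceed the ceiling (values assumed)."""
        boosted = signal * makeup_gain                  # crude loudness maximization
        return ceiling * np.tanh(boosted / ceiling)     # smooth limiting below the ceiling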