I occasionally come across a concept that, while obvious to those who know it, isn’t as widely spread as it should be. One such question is, “What is the difference between analog and digital logic?” There are a lot of people that know the answer in an abstract way, but the real distinction is both simple and sublime.

The abstract answer is that an analog signal is continuous while a digital signal is discrete: ones and zeros. When people think of analog technology, a common example is cassette tape, hissing and popping, compared to the relative clarity of a CD or MP3. The common-sense reason given for why the digital version sounds better is that the ones and zeros are conceptually perfect, matching the original recording. Copies are perfect as well, because the ones and zeros can be exactly duplicated.

However, this explanation begins to break down when you consider it closely. Due to the quantization problem, each digital representation (sample) of the waveform at a moment in time is inexact, because the amplitude can always be divided into smaller and smaller parts than a finite number of bits can capture. So an analog signal at one point in time is more perfect than its digital sample. Also, the lower the sampling rate, the greater the error due to aliasing. This is because a discrete sampling method cannot capture changes in the waveform that occur between samples. Thus, an ideal analog signal is always more accurate than its digital representation.
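Aliasing is easy to see numerically. A quick plain-Python sketch (the 7 Hz tone and 10 Hz sampling rate are arbitrary illustrative choices): a tone above half the sampling rate produces exactly the same samples as a lower-frequency "alias," so the sampler can't tell them apart.

```python
# Aliasing demo: a 7 Hz cosine sampled at 10 Hz (Nyquist limit = 5 Hz)
# yields the same samples as a 3 Hz cosine, where 3 = fs - 7.
import math

fs = 10.0                     # sampling rate in Hz (illustrative)
f_real, f_alias = 7.0, 3.0    # f_alias = fs - f_real

for n in range(20):
    t = n / fs
    s_real = math.cos(2 * math.pi * f_real * t)
    s_alias = math.cos(2 * math.pi * f_alias * t)
    # The two tones are indistinguishable at every sample instant.
    assert abs(s_real - s_alias) < 1e-9

print("7 Hz sampled at 10 Hz looks exactly like 3 Hz")
```

Everything that happened to the 7 Hz waveform between samples is simply lost, which is the error described above.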

Going even deeper, there is no such thing as a purely digital signal. When expressed in terms of voltage, a one might be 5V and a zero, 0V. But no actual circuit can make an instantaneous transition from 0 to 5V or back. There’s always some small amount of time where the voltage is rising or falling. In really high-speed circuits or over longer distances, a signal can be both a one and a zero at different points on the wire. This is what engineers mean when they say a circuit has to be modeled as a transmission line. So even digital circuits are actually analog underneath. Digital is really just another way of imposing meaning on an analog circuit.
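The "no instantaneous transition" point can be sketched with a first-order RC model of a driver charging a line. The component values here are assumptions picked for illustration, not measurements of any real circuit:

```python
# Sketch of the analog reality behind a "digital" 0 -> 5 V transition:
# a simple RC model shows the voltage passing through every level in
# between.  tau = 1 ns is an assumed, illustrative time constant.
import math

V = 5.0        # logic-high voltage
tau = 1e-9     # RC time constant (assumed)

def v_out(t):
    """Voltage during a 0 -> 5 V step driven through an RC network."""
    return V * (1.0 - math.exp(-t / tau))

# For a measurable window the wire is neither a clean 0 nor a clean 1:
for t_ns in (0.5, 1.0, 2.0, 3.0):
    print(f"t = {t_ns} ns: {v_out(t_ns * 1e-9):.2f} V")
```

At every one of those instants the voltage is somewhere between the two logic levels, which is exactly why a fast edge on a long wire has to be treated as a transmission line rather than an ideal step.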

If analog is better, why do we even have digital? The answer is twofold: noise and dynamic range. Noise is always present in a signal. If restricted to a narrow frequency or time band, noise can often be filtered out. However, there is always a danger of throwing out useful data along with the noise, especially if both occupy a similar frequency range. Here is an example of a signal with easily-filtered noise: the signal and the noise differ greatly in both frequency and amplitude.
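A minimal sketch of that easy case, assuming a slow 5 Hz signal contaminated by a small, fast 200 Hz hiss (all frequencies, amplitudes, and the window length are illustrative choices): a simple moving average whose window spans one full period of the noise cancels it almost completely while barely touching the signal.

```python
# Easily-filtered noise: a 5 Hz signal plus a small 200 Hz "hiss,"
# cleaned with a 5-sample moving average (one full noise period at
# fs = 1000 Hz, so the noise averages to zero).
import math

fs, n_samples = 1000, 1000
signal = [math.sin(2 * math.pi * 5 * n / fs) for n in range(n_samples)]
noise = [0.2 * math.sin(2 * math.pi * 200 * n / fs) for n in range(n_samples)]
noisy = [s + m for s, m in zip(signal, noise)]

win = 5  # exactly one period of the 200 Hz noise at fs = 1000
filtered = [sum(noisy[i:i + win]) / win for i in range(n_samples - win)]

# Compare against the clean signal (offset by the filter's 2-sample delay).
residual = max(abs(filtered[i] - signal[i + 2]) for i in range(len(filtered)))
print(f"worst-case error after filtering: {residual:.4f}")
```

If the noise sat near 5 Hz instead of 200 Hz, no window length would separate the two, and the filter would smear away the signal along with the hiss.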

Dynamic range is the difference between the lowest and highest signal level. A system can’t have an infinite voltage, so there is always some limit to the highest value an analog signal can represent. In contrast, a digital signal can represent arbitrary ranges just by adding more bits (32 bits not enough range? Try 64!).
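Each added bit roughly doubles the ratio of the largest representable value to the smallest step, which works out to about 6.02 dB per bit (20·log10(2)). A quick sketch of how the range grows with word width:

```python
# Dynamic range of an n-bit code: 20 * log10(2**n) ~= 6.02 dB per bit.
# Widening the word grows the representable range without any part of
# the circuit needing a higher voltage.
import math

def dynamic_range_db(bits):
    """Ratio of full scale to one code step for an n-bit word, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (8, 16, 32, 64):
    print(f"{bits:2d} bits: {dynamic_range_db(bits):6.1f} dB")
```

An analog system would need its maximum voltage to double for every extra 6 dB; a digital one just stores a wider number.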

In the next post, we’ll examine noise in more detail to understand the main difference between digital and analog logic.

Explaining some myths of “digital” audio playback.

http://www.audioholics.com/education/audio-formats-technology/exploring-digital-audio-myths-and-reality-part-1

Great article. I am talking about signals with components above the Nyquist frequency, as it mentions.

Nit pick:

“So an analog signal at one point in time is more perfect than its digital sample. Also, the lower the sampling rate, the greater the error due to aliasing. This is because a discrete sampling method cannot capture changes in the waveform that occur between samples.”

This isn’t quite true. A smart guy named Nyquist (http://en.wikipedia.org/wiki/Harry_Nyquist) proved that if you have a band-limited signal, you can represent all of its information with a discrete-time approximation if you sample quickly enough. Aliasing only happens if your continuous signal has components above the Nyquist frequency. Typical systems aim to sample way above the Nyquist frequency so that even with their imperfect filters, there is negligible signal above the Nyquist frequency and therefore no aliasing going on. So that leaves quantization error, which you can make arbitrarily small and model as noise. If your quantization noise is much less than the inherent noise of the system, you won’t even notice it…

Agree, and that’s what the link Peter posted says also. I’m referring to the case where the signal has components above the Nyquist frequency. This happens all the time, especially where cost is an issue. Not everyone can build a multi-gigahertz ADC into their device.
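The Nyquist point made in the comments above can also be sketched numerically. Assuming a band-limited test signal (3 Hz and 11 Hz tones, both well under the 50 Hz Nyquist frequency of a 100 Hz sampler, all values illustrative), Whittaker–Shannon sinc interpolation recovers the waveform between the samples to within a tiny truncation error:

```python
# Sketch of sampling-theorem reconstruction: a band-limited signal
# sampled above the Nyquist rate is rebuilt between sample instants
# via sinc (Whittaker-Shannon) interpolation.
import math

fs = 100.0                    # sampling rate; Nyquist frequency = 50 Hz

def x(t):
    """Band-limited test signal: 3 Hz and 11 Hz components."""
    return math.sin(2 * math.pi * 3 * t) + 0.5 * math.sin(2 * math.pi * 11 * t)

N = 4000                      # many samples, so truncation error is tiny
samples = [x(n / fs) for n in range(-N, N)]

def sinc(u):
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t):
    return sum(s * sinc(fs * t - n) for n, s in zip(range(-N, N), samples))

t = 0.01234                   # a point between sample instants
print(f"reconstruction error: {abs(reconstruct(t) - x(t)):.2e}")
```

Add a 70 Hz component to `x` and the reconstruction falls apart, which is the above-Nyquist case the post and replies are talking about.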