Agree, and that’s what the link Peter posted says also. I’m referring to the case where the signal has components above the Nyquist frequency. This happens all the time, especially where cost is an issue. Not everyone can build a multi-gigahertz ADC into their device.

“So an analog signal at one point in time is more perfect than its digital sample. Also, the lower the sampling rate, the greater the error due to aliasing. This is because a discrete sampling method cannot capture changes in the waveform that occur between samples.”

This isn’t quite true. A smart guy named Nyquist (http://en.wikipedia.org/wiki/Harry_Nyquist) proved that if you have a band-limited signal, you can capture all of its information in discrete-time samples, exactly, provided you sample at more than twice the highest frequency present. Aliasing only happens if your continuous signal has components above the Nyquist frequency (half the sampling rate). Typical systems deliberately oversample so that even with their imperfect anti-aliasing filters, there is negligible signal energy above the Nyquist frequency and therefore no aliasing going on. That leaves quantization error, which you can make arbitrarily small and model as noise. If your quantization noise is well below the inherent noise of the system, you won’t even notice it…
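To see why components above the Nyquist frequency are the problem, here’s a small numeric sketch (frequencies chosen just for illustration): a 7 Hz cosine sampled at 10 Hz produces exactly the same sample values as a 3 Hz cosine, because 7 Hz folds down to |7 − 10| = 3 Hz. Once sampled, the two are indistinguishable; that’s aliasing.

```python
import math

fs = 10.0                    # sampling rate in Hz; Nyquist frequency = fs/2 = 5 Hz
f_high = 7.0                 # component above the Nyquist frequency
f_alias = abs(f_high - fs)   # 7 Hz folds down to 3 Hz after sampling

# Sample both cosines at the same instants: the sample streams are identical,
# so the sampled data cannot tell 7 Hz apart from 3 Hz.
for n in range(20):
    t = n / fs
    s_high = math.cos(2 * math.pi * f_high * t)
    s_alias = math.cos(2 * math.pi * f_alias * t)
    assert abs(s_high - s_alias) < 1e-9

print("7 Hz sampled at 10 Hz is indistinguishable from 3 Hz")
```

This is why a band-limiting (anti-aliasing) filter must remove those components *before* the ADC; no amount of processing after sampling can undo the fold.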

Great article. I am talking about signals with components above the Nyquist frequency, as it mentions.
