I suspect you’re joking, but in case you’re not, here’s a restatement.
There’s a theorem (Nyquist, Shannon, Whittaker) that says that if you have a signal (say, graphed with time on the x-axis and amplitude on the y-axis) that has no frequency components higher than some bound B (say 20kHz), then the signal can be reconstructed exactly from samples taken at a rate of 2B or higher.
But the theorem assumes that the samples are exact real numbers, not the approximations you get in practice with digital sampling (where each y-value is rounded to one of a finite set of levels, like 0 to 65535 for 16-bit audio). That rounding is quantization, and it introduces a small error in every sample. There are also effects due to the imperfections of physical A/D and D/A converters.
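A quick sketch of what quantization costs you (the 16-bit mapping and the test signal here are my own illustrative choices, not anything specific to the theorem):

```python
import numpy as np

# Quantize a sine to 65536 levels (0..65535) and measure the rounding error.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)            # amplitude in [-1, 1]

levels = 65536
# Map [-1, 1] onto integer codes 0..65535, round, then map back.
codes = np.round((x + 1.0) / 2.0 * (levels - 1)).astype(np.int64)
x_q = codes / (levels - 1) * 2.0 - 1.0

step = 2.0 / (levels - 1)                # size of one quantization step
# Rounding error is never more than half a step (about 1.5e-5 here).
print(np.max(np.abs(x - x_q)) <= step / 2.0 + 1e-12)  # True
```

So the reconstructed signal is the original plus a small error bounded by half a quantization step per sample, which is why more bits means a lower noise floor.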
If the samples are taken at a rate less than 2B, the reconstruction can have spurious frequencies that were not present in the original signal, because there isn’t a unique solution to the problem of reconstructing the original signal from the inadequate samples. This is aliasing.
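You can see the non-uniqueness directly with small numbers (7 Hz and 10 Hz are just convenient example values): sample a 7 Hz sine at 10 Hz, which is below its Nyquist rate of 14 Hz, and the samples are identical to those of a 3 Hz sine (the alias at fs − f = 10 − 7 = 3 Hz, with inverted phase).

```python
import numpy as np

fs = 10.0                  # sample rate, below the 14 Hz needed for a 7 Hz signal
n = np.arange(50)          # sample indices
s7 = np.sin(2 * np.pi * 7 * n / fs)   # samples of the 7 Hz sine
s3 = np.sin(2 * np.pi * 3 * n / fs)   # samples of the 3 Hz alias

# The two sample sequences coincide (up to a sign flip from the phase),
# so no reconstruction algorithm can tell the signals apart.
print(np.allclose(s7, -s3))  # True
```

Given only those samples, a reconstruction that assumes the signal is band-limited below 5 Hz will output the 3 Hz tone, not the 7 Hz one that was actually there.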