In "The Quest for Perfect Sound" (The New Republic, 1985), Edward Rothstein attempts the most beautiful, wondrous, and passionate explication of high-end audio ever written. Discussing the philosophical differences between analog and digital, Rothstein writes (partially quoting another article):
"Analog builds models on the optimistic assumption that our modeling technology is infinitely perfectible. . . . Digital, on the other hand, chooses a level of perfection -- the sampling rate -- not approximating perfection but perfecting our approximations."
That of course is debatable. In 1985 digital audio was not well understood by non-experts in that field, and it still is not. See Amir's post:
Hi Bob. I read a ton of articles about audio, yet you manage to unearth links I have not seen. So I read the first one and unfortunately it is completely wrong. It has this common graph:
What he shows about digital is just flat out wrong. Take a 1 kHz sine wave, convert it to digital, and play it on a CD player. Then look at the output from the analog jack on the CD player. It will look just like the original waveform on the left. It never, ever looks like what he is showing. If it did, you would think someone, some place, would have measured a CD player outputting such, rather than a graphic created in a paint program.
So the Nyquist theorem would be correct: sampling at the 44.1 kHz rate produces an analog signal at the CD player output, up to at least 20 kHz *), that is indistinguishable from the analog input -- a perfect "original sound wave", not a stair-step signal. And with an oscilloscope it can easily be demonstrated that this is, in fact, true (I have seen video demonstrations of such).
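The stair-step claim can even be checked numerically. Here is a minimal sketch (the numbers and names are my own, purely for illustration): sample a 1 kHz sine at 44.1 kHz, then rebuild the waveform *between* the samples using Whittaker-Shannon (sinc) interpolation, which is what an ideal DAC reconstruction filter approximates. The only error comes from truncating the infinite interpolation sum to a finite window -- there is no stair-step anywhere.

```python
import numpy as np

fs = 44100.0            # CD sampling rate (Hz)
f = 1000.0              # 1 kHz test tone
n = np.arange(2000)     # sample indices
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon reconstruction, evaluated BETWEEN the samples
# (midpoints would be the worst case if the output really were stair-stepped).
# We evaluate in the middle of the record so that truncating the infinite
# sum contributes as little error as possible.
t = (np.arange(500) + 1000.5) / fs
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
exact = np.sin(2 * np.pi * f * t)
err = np.max(np.abs(recon - exact))
print(err)  # small, and it shrinks as the window grows -- no stair-steps
```

The residual error here is a windowing artifact of the finite sum, not a property of sampling itself; with the infinite sum it would vanish exactly for any band-limited input.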
Thus, according to the Nyquist theorem, digital is not designed to approximate perfection but to represent perfection. So the writer you cite, Ron, is fundamentally wrong as far as his statement of principle goes (he probably had a stair-step output signal in mind).
The question, of course, is whether what the Nyquist theorem promises in theory also holds for complex music signals. Proponents of digital perfection hold that it obviously does, since any music signal is just a sum of sine waves. Detractors say it cannot be true -- that something must go wrong during the actual processing, or that the Nyquist theorem somehow does not hold for complex signals.
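The superposition argument can be illustrated the same way as the single tone. A sketch (frequencies, amplitudes, and phases are arbitrary choices of mine, just to stand in for a "complex" signal): a sum of several sines, all below the 22.05 kHz Nyquist limit, reconstructs between the samples just as cleanly as one sine does.

```python
import numpy as np

fs = 44100.0
n = np.arange(4000)

# A "complex" signal: a sum of sine waves, all below fs/2 = 22.05 kHz
# (values chosen arbitrarily for illustration)
freqs = np.array([220.0, 1313.0, 4400.0, 9050.0, 17000.0])
amps = np.array([1.0, 0.5, 0.3, 0.2, 0.1])
phases = np.array([0.3, 1.1, 2.0, 0.7, 2.5])

def signal(t):
    t = np.asarray(t, dtype=float)
    return (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                   + phases[:, None])).sum(axis=0)

samples = signal(n / fs)
t = (np.arange(400) + 1800.5) / fs   # between-sample instants, mid-record
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
err = np.max(np.abs(recon - signal(t)))
print(err)  # again tiny: superposition costs nothing
```

This is exactly the proponents' point: sampling is linear, so whatever holds for one sine below Nyquist holds for any finite sum of them.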
Personally, my ears tell me that digital has not yet achieved the perfection that should be inherent in the technology -- yet significant progress has been made. Early implementations of digital technology were blissfully unaware of jitter, for example, and over the decades proper clocking technologies have significantly propelled digital sound quality forward. Similarly, the inherent limitations of, and distortions introduced by, brickwall filters had not been recognized in early implementations, and filtering has made great strides as well, e.g., through upsampling of the signal. And all these things have little to do with the original sampling rate (upsampling merely enables a filtering trick -- the implementation of a much shallower filter; in most modern cases it is not intended as 'compensation' for an inferior sampling rate).
In any case, the question is whether the underlying theory of digital indeed has problems, or whether we just have not found the perfect implementation yet. At this point it remains an open question, and it will remain one for a while. The fact is that many early detractors of digital have been astonished at the progress made over the decades, and the end does not yet seem in sight.
Yet to repeat: it is clear that the author you cite is wrong when he says -- again, in 1985, when the true nature of digital processing was not yet widely appreciated -- that "digital, on the other hand, chooses a level of perfection -- the sampling rate -- not approximating perfection but perfecting our approximations."
At least in theory, digital is not about approximating but about representing perfection.
___________
*) And no, I do not buy into the nonsense that humans can somehow perceive frequencies above 20 kHz and that digital is therefore inherently flawed. Humans cannot hear beyond 20 kHz (mostly only newborns can hear even that high), just as they cannot perceive UV light -- while bees can, and thus see flowers in gorgeous colors that we cannot, under any circumstances, perceive (I once saw a stunning simulation of what that would look like to us humans).