Redbook 44.1 kHz standard: theoretically sufficient timbral resolution?

The topic of this thread is really about timbre: sounds that are similar in ways besides pitch and loudness. What helps us decode timbre is the steady part of the tone, the way the spectrum changes over time, and the initial attack. We can even tell the timbre of a tone that has had the fundamental filtered out, thanks to the ear's ability to pattern-recognize. So I would say that Redbook can do pretty well at it, at least for guys who can't hear past 10 or 12 kHz. For those who can hear out to 20 kHz, when the timbre contains those frequencies, then yes, we all know that Redbook is close but not fully there, and that listener might notice that the timbre has changed somewhat due to the cutoff at around 20 kHz in Redbook.

Yes, if we 'believe' in the theory of sampled digital audio (with dither), then it should be pretty straightforward: we lose everything above 20 kHz, so dogs and a small proportion of young children may spot a change - but surely they will notice much bigger changes of timbre deriving from the phase distortion of uncorrected speakers. Again, if we 'believe', then the only other artefact is a tiny, tiny amount of random noise.

Personally, I do 'believe' the theory. For the believers among us, discussions about high res versus CD format are moot - the only difference is whether the amount of random noise is very very tiny, or very very very tiny. I believe so strongly in the theory that if I were to hear a difference between CD and high res, I would 'know' that it was either my ears/mood/expectation bias, or that one of the systems was defective in some area not related to bit depth or sample rate. Life is pretty simple when you accept the science!
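The 'tiny amount of random noise' point can be demonstrated directly. Below is a minimal numpy sketch of my own (the 997 Hz tone, its level, and all names are arbitrary choices, not anything from this thread): quantize a quiet signal to 16 bits with TPDF dither and measure the total error.

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 2.0 / 2**16                                   # one 16-bit step over a +/-1.0 full-scale range
t = np.arange(100_000)
x = 0.01 * np.sin(2 * np.pi * 997 * t / 44_100)     # quiet 997 Hz tone

# TPDF dither: sum of two independent uniforms, spanning +/-1 LSB in total
d = rng.uniform(-lsb / 2, lsb / 2, x.size) + rng.uniform(-lsb / 2, lsb / 2, x.size)
q = np.round((x + d) / lsb) * lsb                   # dithered 16-bit quantization

err = q - x
# total error variance = lsb^2/12 (quantizer) + 2*lsb^2/12 (dither) = lsb^2/4
print(err.std() / lsb)                              # approx 0.5: benign noise, half an LSB RMS
```

The error comes out as signal-independent random noise at about half an LSB RMS, which is exactly the 'tiny, tiny amount of random noise' the theory predicts.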
 
Sample rates above 44.1 kHz are not intended to increase resolution.

An ideal low-pass filter would reconstruct the audible range perfectly from 44.1 kHz.

However, there are no ideal filters, neither analog nor digital. Especially analog.

A real filter (analog or digital) gives a better result, achieved more easily, at high sample rates.
 
Sample rates above 44.1 kHz are not intended to increase resolution.

An ideal low-pass filter would reconstruct the audible range perfectly from 44.1 kHz.

However, there are no ideal filters, neither analog nor digital. Especially analog.

A real filter (analog or digital) gives a better result, achieved more easily, at high sample rates.

Does it really matter if something is difficult or easy, as long as it works and, once designed, can be burned into a chip and sold for $0.50?
 
Here it is only a matter of easier implementation. Easier is almost the same as cheaper.

It makes life easier for developers :)

There are two ways:

1. A steep filter at 44 kHz

2. A less steep filter at 88 kHz and above

Way #1 demands an analog filter going from 0 to -110 dB across a 20...22 kHz transition band for CD, or from 0 to -145 dB across the same 20...22 kHz transition band for 24-bit. Both are almost impossible.

Going further: it is a problem even for the real-time hardware oversampling filter inside a DAC.
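To put rough numbers on the two ways, here is a back-of-envelope sketch using Kaiser's standard FIR tap-count estimate (my own example; the 20...28 kHz transition band at 96 kHz is an assumed figure, and this models a digital filter rather than the even harder analog case):

```python
import math

def kaiser_fir_length(atten_db, f_pass, f_stop, fs):
    """Tap-count estimate for a lowpass FIR, from Kaiser's empirical formula."""
    d_omega = 2 * math.pi * (f_stop - f_pass) / fs      # transition width in rad/sample
    return math.ceil((atten_db - 7.95) / (2.285 * d_omega))

# Way #1: brickwall at 44.1 kHz, transition 20 -> 22.05 kHz, 110 dB (16-bit floor)
n_44 = kaiser_fir_length(110, 20_000, 22_050, 44_100)

# Way #2: gentler filter at 96 kHz, transition 20 -> 28 kHz, same attenuation
n_96 = kaiser_fir_length(110, 20_000, 28_000, 96_000)

print(n_44, n_96)   # 153 vs 86: the brickwall needs nearly twice the taps
```

Even in the digital domain, the narrow 2 kHz transition band roughly doubles the filter length; for an analog circuit the equivalent steepness is far harder still.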
 
Sample rates above 44.1 kHz are not intended to increase resolution.

An ideal low-pass filter would reconstruct the audible range perfectly from 44.1 kHz.

However, there are no ideal filters, neither analog nor digital. Especially analog.

A real filter (analog or digital) gives a better result, achieved more easily, at high sample rates.

If that's the only problem, it's easy. Upsample the CD signal to high-res sample rates and use the same shallow filter from there as for hi-res. That's what my Berkeley DAC does, upsampling the CD signal to 4 x 44.1 kHz = 176.4 kHz and filtering from there with a gentle slope.
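The upsample-then-filter idea is easy to sketch. Here is a toy 4x interpolator of my own (not Berkeley's actual implementation; tap count and window are arbitrary choices): zero-stuff by 4, then low-pass at the original Nyquist with a windowed sinc.

```python
import numpy as np

def upsample4(x):
    """4x oversampling: zero-stuff, then interpolate with a windowed-sinc lowpass."""
    L = 4
    y = np.zeros(x.size * L)
    y[::L] = x                                   # 3 zeros between original samples
    n = np.arange(-256, 257)
    # cutoff at the ORIGINAL Nyquist (1/8 of the new rate), gain L to restore level
    h = np.sinc(n / L) * np.hamming(n.size)
    return np.convolve(y, h, mode="same")

fs = 44_100
t = np.arange(4410) / fs
x = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz test tone
y = upsample4(x)
print(y.size)                                    # four samples for every input sample
```

After this step the signal lives at 176.4 kHz, so any remaining analog filtering only has to work in the wide 22 kHz to 154 kHz gap, which is exactly why the gentle slope suffices.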
 
The topic of this thread is really about timbre: sounds that are similar in ways besides pitch and loudness. What helps us decode timbre is the steady part of the tone, the way the spectrum changes over time, and the initial attack. We can even tell the timbre of a tone that has had the fundamental filtered out, thanks to the ear's ability to pattern-recognize. So I would say that Redbook can do pretty well at it, at least for guys who can't hear past 10 or 12 kHz. For those who can hear out to 20 kHz, when the timbre contains those frequencies, then yes, we all know that Redbook is close but not fully there, and that listener might notice that the timbre has changed somewhat due to the cutoff at around 20 kHz in Redbook.

So what are the technical reasons for putting the cut-off of fidelity for the CD format at around 12 kHz? If I understand them correctly, responses #15 by Groucho and #22 by Amir (thread pages 2 and 3) suggest that there should be fidelity up to 20 kHz, at least theoretically. Practical implementation is always a different issue.
 
If that's the only problem, it's easy. Upsample the CD signal to high-res sample rates and use the same shallow filter from there as for hi-res. That's what my Berkeley DAC does, upsampling the CD signal to 4 x 44.1 kHz = 176.4 kHz and filtering from there with a gentle slope.

Almost all DACs do oversampling and digital filtering before feeding the analog filter.

A rare DAC allows switching off its internal filtering so that more sophisticated computer processing can be used.
 
Almost all DACs do oversampling and digital filtering before feeding the analog filter.

A rare DAC allows switching off its internal filtering so that more sophisticated computer processing can be used.

But what is this more sophisticated computer processing? Where has it been identified that there is a need for anything more sophisticated than the basic 'perfect' filter? Are we talking about 'apodizing' and 'minimum phase'? Does anyone else here think, like I do, that these may just be solutions (and further worry and confusion for audiophiles) to an imagined problem that doesn't actually exist?
 
Groucho, did you read what has been said so far?
Please explain the definition of a perfect interpolation/reconstruction filter for CD quality, given its constraints in both the time and frequency domains, when you seem to consider that minimum phase does not do something better than a brickwall filter (it has been mentioned earlier that it does) - and then, what is the perfect stopband attenuation?
There is no such thing as a real-world perfect filter, especially for CD quality, where it is at its most compromised (relative to higher res), for reasons that have been touched upon in this thread and in much greater detail in the past.
It seems you are one of those who do not put any weight behind the works of people such as Gerzon/Craven/Stuart, and of those who expand upon core theory, such as Dunn/Hawksford/Lipshitz/Vanderkooy/etc.
Cheers
Orb
 
But what is this more sophisticated computer processing? Where has it been identified that there is a need for anything more sophisticated than the basic 'perfect' filter? Are we talking about 'apodizing' and 'minimum phase'? Does anyone else here think, like I do, that these may just be solutions (and further worry and confusion for audiophiles) to an imagined problem that doesn't actually exist?

The audiophile world is fat with solutions to imagined (or at least inaudible) problems. It would hardly be economically viable without them.

Tim
 
Groucho, did you read what has been said so far?
Please explain the definition of a perfect interpolation/reconstruction filter for CD quality, given its constraints in both the time and frequency domains, when you seem to consider that minimum phase does not do something better than a brickwall filter (it has been mentioned earlier that it does) - and then, what is the perfect stopband attenuation?
There is no such thing as a real-world perfect filter, especially for CD quality, where it is at its most compromised (relative to higher res), for reasons that have been touched upon in this thread and in much greater detail in the past.
It seems you are one of those who do not put any weight behind the works of people such as Gerzon/Craven/Stuart, and of those who expand upon core theory, such as Dunn/Hawksford/Lipshitz/Vanderkooy/etc.
Cheers
Orb

I would like the filter that provides the theoretical ideal reconstruction - or the practical implementation that comes closest to it. I don't want a non-flat frequency response in the audible range, and I don't want phase shifting at higher frequencies. I don't want ultrasonic hash.

I know that there is much discussion about various alternatives as faits accomplis, but it is their origins I am more interested in. I looked up a paper by Dunn (someone mentioned in your comment above) from 1998 titled "Anti-alias and anti-image filtering: The benefits of 96kHz sampling rate formats for those who cannot hear above 20kHz", whose abstract says:

Reports that 96kHz sampled digital audio systems have greater transparency than those sampling at 44.1kHz apparently conflict with knowledge of the capability of human hearing. The bandlimiting filters required are examined for a role in producing these differences. Possible mechanisms are presented for these filters to produce audible artefacts and filter designs avoiding these artefacts are illustrated.

The paper proceeds on the basis that these "reports" are accurate, and makes some suggestions as to why the standard filters in 1998 might introduce sonic artefacts, noting that more complex digital realisations could remove some of these limitations. Fine, but the damage has already been done, by the paper's misleading title, the unquestioning credence given to the "reports", and the highlighting of the theoretical drawbacks of 1998 technology. It's written by a brilliant person, no doubt, but ultimately it is just 'noise' in the development of audio, I would say. This paper doesn't convince me (and indeed doesn't actually say) that there is anything inherently wrong with the CD format, but for someone looking for a reason to trash CD's reputation, the existence of this paper would be grist to the mill.

It's just the first example I looked at, but I have yet to see anything that convinces me that the high-res-alternatives-to-CD industry and the alternatives-to-mathematically-ideal-filter sub-industry are not both based on 'expanding' such flimsy origins.
 
I would like the filter that provides the theoretical ideal reconstruction - or the practical implementation that comes closest to it. I don't want a non-flat frequency response in the audible range, and I don't want phase shifting at higher frequencies. I don't want ultrasonic hash.

It's just the first example I looked at, but I have yet to see anything that convinces me that the high-res-alternatives-to-CD industry and the alternatives-to-mathematically-ideal-filter sub-industry are not both based on 'expanding' such flimsy origins.
So you actually want a mixture of minimum phase/linear phase filter.....
And you would call that messing around (it is not a sub-industry). I really, really advise reading up on the papers from the names I mentioned, who are recognised as some of the best engineers out there, including from an academic perspective.
Again, there is no such thing as a perfect real-world filter, especially when dealing with audio of CD-quality spec.
Anyway, this has been said at length and in detail before, which is why I knew this thread would go nowhere, as some feel classical theory answered everything perfectly back in the early 1900s and the 1940s-50s, while ignoring modern theory from the 70s onwards.
So I take it you think Craven/Stuart/Dunn do not understand theoretical ideal interpolation/reconstruction, since they have a different view than you on the perfect/ideal filter for CD?
I find it strange that objectivists accept modern dither, whose theory and implementation evolved from the early years to the present just as we have seen with oversampling and the associated filters; to see how modern dither became accepted, look to Lipshitz/Vanderkooy and also Wannamaker (they are excellent, some of the best in dither theory).
One very last point: the Gibbs phenomenon/oscillation is integral to digital signal processing/Fourier, and it comes back to filters.
Cheers
Orb
 
But what is this more sophisticated computer processing? Where has it been identified that there is a need for anything more sophisticated than the basic 'perfect' filter? Are we talking about 'apodizing' and 'minimum phase'? Does anyone else here think, like I do, that these may just be solutions (and further worry and confusion for audiophiles) to an imagined problem that doesn't actually exist?

"apodizing" in Meridian's sense is "minimum phase". There are several filters type, but for final resampling better FIR filter as linear phase as minimal phase. Thus, I think, allowed consider "apodizing"="minimum phase" in final audio processing context.

"Sophisticated" is:
- more length (more steep, less oscillations in passband, more supress in stop band, ...), and/or
- complex inside ("know how" certain realization of filter for faster work or decreasing of ringing, as example), and
- using double precision (64-bit floating point), especially for big number of calculations: minimizing quantization errors, avoiding of overload.

Using personal computer's (Win and Mac here) CPU power and huge memory (comparing FPGA and microcontrollers), parallel processing give wide possibilities for algorithm realization.

Also "sophisticated" algorithms can be faster and easier debugged under PC. I.e. more often improved.
 
Yes. 44.1 kHz was a technical compromise at the time of the CD's creation.

So you actually want a mixture of minimum phase/linear phase filter.....

A minimum phase filter has zero pre-ringing energy, but double the post-ringing energy. I think it is one of the reasons why there is no unequivocal preference for minimum phase filters.
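The zero-pre-ringing property is easy to verify numerically. Below is a sketch of my own (using the standard homomorphic/cepstral construction; the 255-tap lowpass is an arbitrary example, not any shipping DAC filter) that converts a linear-phase filter to minimum phase with the same magnitude response and compares the impulse-response energy arriving before the main peak:

```python
import numpy as np

def minimum_phase(h, n_fft=4096):
    """Minimum-phase filter with the same magnitude response (homomorphic method)."""
    H = np.abs(np.fft.fft(h, n_fft))
    cep = np.fft.ifft(np.log(np.maximum(H, 1e-12))).real   # real cepstrum
    fold = np.zeros(n_fft)
    fold[0] = cep[0]
    fold[1:n_fft // 2] = 2 * cep[1:n_fft // 2]             # fold onto the causal part
    fold[n_fft // 2] = cep[n_fft // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real[:h.size]

def pre_ring_fraction(h):
    """Share of impulse-response energy arriving before the main peak."""
    peak = int(np.argmax(np.abs(h)))
    return float(np.sum(h[:peak] ** 2) / np.sum(h ** 2))

n = np.arange(-127, 128)
lin = 0.45 * np.sinc(0.45 * n) * np.blackman(n.size)       # linear-phase lowpass example
mn = minimum_phase(lin)
# linear phase: a large share of energy rings BEFORE the peak; minimum phase: almost none
print(pre_ring_fraction(lin), pre_ring_fraction(mn))
```

The linear-phase version spreads roughly a quarter of its energy ahead of the peak, while the minimum-phase version pushes nearly all of it after the peak, which is exactly the zero-pre-ringing/double-post-ringing trade-off described above.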
 
Yes. 44.1 kHz was a technical compromise at the time of the CD's creation.



A minimum phase filter has zero pre-ringing energy, but double the post-ringing energy. I think it is one of the reasons why there is no unequivocal preference for minimum phase filters.

Agreed, there is no unequivocal preference for any of the filters: some prefer minimum phase implementations, while some are happy with the default brickwall type historically implemented by manufacturers for CD.
It probably comes back to no perfect real-world design being possible for CD, although IMO the differentiation becomes more blurred at higher sampling rates.
Another possible issue is whether it is implemented as a cascade design; some feel this is a possible issue/challenge, especially considering the hardware and algorithm resource limitations of DAC chipsets historically.

Cheers
Orb
 
Agreed, there is no unequivocal preference for any of the filters: some prefer minimum phase implementations, while some are happy with the default brickwall type historically implemented by manufacturers for CD.
It probably comes back to no perfect real-world design being possible for CD, although IMO the differentiation becomes more blurred at higher sampling rates.
Another possible issue is whether it is implemented as a cascade design; some feel this is a possible issue/challenge, especially considering the hardware and algorithm resource limitations of DAC chipsets historically.

Cheers
Orb

By brickwall I understand: a 20...22 kHz transition band, 0 dB passband, -200 dB stopband, and sample rates up to 22.5 MHz (D512) (the third generation of my resampling filters since 2009).

Such a filter is problematic to apply in real time on a PC. Properly implementing (designing/debugging) it on an FPGA or on DSP processors/microcontrollers is a very big job. I suppose a fourth generation of these filters will appear first :)

I have some ideas for simplifying it, but much research/experimentation is needed to decrease the processing time while preserving precision and parameters.

Best regards,
Yuri
 
"apodizing" in Meridian's sense is "minimum phase". There are several filters type, but for final resampling better FIR filter as linear phase as minimal phase. Thus, I think, allowed consider "apodizing"="minimum phase" in final audio processing context.

"Sophisticated" is:
- more length (more steep, less oscillations in passband, more supress in stop band, ...), and/or
- complex inside ("know how" certain realization of filter for faster work or decreasing of ringing, as example), and
- using double precision (64-bit floating point), especially for big number of calculations: minimizing quantization errors, avoiding of overload.

Using personal computer's (Win and Mac here) CPU power and huge memory (comparing FPGA and microcontrollers), parallel processing give wide possibilities for algorithm realization.

Also "sophisticated" algorithms can be faster and easier debugged under PC. I.e. more often improved.
Good post, and welcome to the forum, Yuri :). Great to see contributions from people whose expertise is in the specific area we are discussing.
 
Several people here have claimed that the CD sampling rate was a (significant) compromise. Yet I am afraid they engage in revisionist history based on current hindsight and audiophile consensus. Yes, it may be that at the origin of digital audio some engineers said a much higher sampling rate was needed, but that does not appear to have been the consensus. In fact, the consensus appears to have been more or less Nyquist (a sampling rate of double the highest frequency presented), and to a large part still is (outside of audiophile circles); see the AES guidelines below.

From:
Telarc, Frederick Fennell, and an Overture to Digital Recording

Telarc, founded in Cleveland, Ohio, in 1977 by Jack Renner and Robert Woods, both of whom were classically trained musicians and educators, made its first two recordings in the then-typical direct-to-disc format. At the same time, Renner and Woods were inspired by the new digital recording technology of Tom Stockham’s Salt Lake City-based Soundstream, Inc., the first commercial digital recording company in the United States. Stockham, whom Renner calls “the father of digital signal processing,” had developed a 16-bit digital audio recorder using a high speed instrument magnetic tape recorder and demonstrated the recordings at the fall of 1976 AES convention. Renner and Woods formed a partnership with Stockham. They requested that he increase his digital system’s high frequency response, from 17 kHz to 22.5 kHz at a sampling rate of 50 kHz, an unprecedented level.

As you can see, Nyquist (obviously with a few extra kHz sampling space to allow for filtering).

From: https://en.wikipedia.org/wiki/44,100_Hz

In the early 1980s, a 32 kHz sampling rate was used in broadcast (esp. in UK and Japan), because this was sufficient for FM stereo broadcasts, which had 15 kHz bandwidth.

Again, Nyquist.

The current AES recommended practice for professional audio is 48 kHz, which is not too far from 44.1 kHz (2008, revised 2013):

Abstract: A sampling frequency of 48 kHz is recommended for the origination, processing, and interchange of audio programs employing pulse-code modulation. Recognition is also given to the use of a 44.1-kHz sampling frequency related to certain consumer digital applications, the use of a 32-kHz sampling frequency for transmission-related applications, and the use of a 96-kHz sampling frequency for applications requiring a higher bandwidth or more relaxed anti-alias filtering. This revision further quantifies the preferred choices for higher sampling frequencies. (8 pages)

Again, basically Nyquist, with a few extra kHz sampling space to allow for filtering.

Yes, there may have been a debate about 48 kHz vs. 44.1 kHz even in the beginning (and 44.1 kHz won because of available video equipment for recording), but here we are not talking about vastly different sampling rates -- that was not a 192 kHz vs. 44.1 kHz discussion, not by a long shot.
 
Al,
please note the context and also the requirement for relaxed anti-alias filtering, which comes back quite a bit to what we are talking about; just look at what Yuri recently said.
48 kHz is still not quite enough, which is why the default is then 88.2 or 96 kHz; in the same way, PCM bit depth in reality only needed to be around 20 bits (in the context of the audio PCM process from recording through editing/mixing to mastering), but it was simpler to standardise around 24 bits.

Cheers
Orb
 