Why is 24/192 a bad idea?

Bob is using unshaped (white) TPDF dither. Though it's one possible 'proper' dither in the mathematical sense, this isn't the dither a professional would use if there were alternatives; even several low-power 'improper' Gaussian dithers will outperform it in practice.
What kind of "professional" would that be? Here is the help file for Adobe Audition, which supports shaped Gaussian and TPDF dither: http://help.adobe.com/en_US/Auditio...=WS58a04a822e3e5010548241038980c2c5-7f4c.html

"Usually, Triangular p.d.f. is a wise choice because it gives the best trade-off among SNR (Signal-to-Noise ratio), distortion, and noise modulation." Yes, you can modify the noise profile some, but you are not going to magically solve this problem with 16-bit encoding without proper noise shaping.
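For anyone who wants to see what "TPDF dither" actually means in practice, here is a minimal Python sketch (illustrative only, with a made-up test signal; not production code). TPDF dither is the sum of two independent uniform noise sources, added before the quantizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def tpdf_dither(n, lsb=1.0):
    """TPDF (triangular p.d.f.) dither: the sum of two independent
    uniform sources, spanning +/- 1 LSB with a triangular density."""
    return rng.uniform(-lsb / 2, lsb / 2, n) + rng.uniform(-lsb / 2, lsb / 2, n)

def quantize(x, bits=16, dither=True):
    """Quantize a signal in [-1, 1) to `bits` bits, optionally adding
    TPDF dither at the quantizer input (the classic 'proper' dither)."""
    lsb = 2.0 / (2 ** bits)
    d = tpdf_dither(len(x), lsb) if dither else 0.0
    return np.round((x + d) / lsb) * lsb

# A very low-level sine: undithered quantization correlates the error
# with the signal (harmonic distortion); TPDF dither decorrelates it,
# leaving a benign, signal-independent noise floor.
t = np.arange(48000) / 48000.0
x = 1e-4 * np.sin(2 * np.pi * 1000 * t)
```

Note this is unshaped dither: the noise floor is flat, which is exactly the point of contention here versus perceptually shaped dithers.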

That, and you easily gain 30 dB+ at the midrange 'dip' with shaping, which Bob hasn't done. Most good dithers follow the ATH.
Bob *has* done that. And I noted it in my response to you. This is the graph from Bob's paper:

[attachment: graph from Bob's paper]


Indeed, he spends a considerable amount of time discussing this topic. It is noted even in the summary that I posted from his paper and repeat here:

"CONCLUSIONS
This article has reviewed the issues surrounding the transmission of high-resolution digital audio. It is
suggested that a channel that attains audible transparency will be equivalent to a PCM channel that
uses:
· 58kHz sampling rate, and
· 14-bit representation with appropriate noise shaping, or
· 20-bit representation in a flat noise floor, i.e. a ‘rectangular’ channel"
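For reference, the flat-noise-floor figures in these conclusions can be sanity-checked against the textbook formula for an ideal N-bit PCM channel with a full-scale sine, SNR ≈ 6.02·N + 1.76 dB. This formula is standard signal-processing material, not taken from Bob's paper; a quick sketch:

```python
def pcm_snr_db(bits):
    """Textbook SNR of an ideal N-bit PCM channel: full-scale sine
    over a flat (unshaped) quantization noise floor."""
    return 6.02 * bits + 1.76

for bits in (14, 16, 20, 24):
    print(f"{bits}-bit: {pcm_snr_db(bits):.1f} dB")
# 14-bit: 86.0 dB, 16-bit: 98.1 dB, 20-bit: 122.2 dB, 24-bit: 146.2 dB
```

The gap between a flat 14-bit floor (~86 dB) and a flat 20-bit floor (~122 dB) is roughly the perceptual benefit that appropriate noise shaping has to deliver for the two channels to be equivalent, which is why the shaping is not optional in the 14-bit case.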


I am compelled to think you have not read his paper after all.

There's also a good case for using no dither at all. You get two bits back over Bob's figure.
But if the capture is done at 24 bits, which is usually the case when music is recorded/mixed these days, you would be creating distortion products by decimating to 16 bits. If you had captured and stayed in 16-bit mode, then yes. But such is not the case.
 
Adding on, the problem is that people making music are experts at making music. Expecting them to become even remotely familiar with the mathematics of dither and noise shaping is not in the cards. This is why I feel it is important that we don't leave this to chance. We should get the high-res bits as they are, and then we can convert them if we need to. And then if we do it wrong, we can redo it. But if we get misconverted bits from the source, we are cooked, with no way of getting back what should be there.
 
Correct. 16 bits builds in some extra room, "just in case", just as audiophiles suggest we should.

Just curious, have you looked at the measurements Paul Miller/Keith Howard do regarding hirez music?
I must admit I am more interested in pushing the filters further up the FR than in 24 bits, but their measurements show that a good 24-bit high-res music file may be beneficial.
They have produced their own tool that provides a good analysis of both peak and average music amplitude across the complete frequency range for the whole track.

The way they do the measurement is very interesting, as upsampling and downsampling stand out very well, along with the quality of the recording.
It also tests whether a file is native 24-bit or has been converted from a lower resolution.

Anyway, can I suggest that all the talk about audibility and perception stop, as we have had so many of those discussions in the past that go nowhere; this thread could be interesting if it stuck to the topic: 24/192 and whether it is a bad idea.
Amir, I appreciate you are just responding with more facts, but hopefully this is a subject that can be moved to a new thread, or that an admin will split off if it rolls on with counter after counter argument - which I think it will :)

Thanks
Orb
 
If the recording industry has failed with the CD medium, what makes you think higher resolutions will solve the problem? There already are 24/384 DACs out there.
 
If the recording industry has failed with the CD medium, what makes you think higher resolutions will solve the problem? There already are 24/384 DACs out there.
Because we have a snapshot every month, with analysis and measurements of high res, in publications such as Hifi News, which also point out, as I mentioned before, those releases that are poorly implemented, with poor processes, etc.
That said, there are also excellent recordings they measure and analyse; it's just that we remember the bad more than the good.

Cheers
Orb
 
I would be against another physical format as well but if Neil Young or Apple (or both) can make a good sounding file format of high resolution then I think that will likely take off.

Hirez benefits are real and substantial so I want to experience as much hirez as possible. I just love the feeling of getting "sucked into" the music and enjoying the performance and forgetting all about this gear stuff. That's the sign of good playback imho.

High resolution (streaming) digital has strong headwinds against it becoming widely accepted. But I think it could be very successful in more narrow markets, particularly luxury automotive. Ultra quiet cabins and ultra premium sound systems create an appreciative audience. But stuffing 4608 Kbps or more into a moving vehicle remains an unsolved engineering challenge.
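For context on the "4608 Kbps" figure, raw PCM bitrates are easy to compute from word length, sample rate, and channel count; 4608 kbps corresponds to 24-bit/96 kHz stereo, and 24/192 doubles it. A quick illustrative calculation:

```python
def pcm_kbps(bits, rate_hz, channels=2):
    """Raw (uncompressed) PCM bitrate in kilobits per second."""
    return bits * rate_hz * channels / 1000

print(pcm_kbps(16, 44100))   # CD: 1411.2 kbps
print(pcm_kbps(24, 96000))   # 24/96 stereo: 4608.0 kbps
print(pcm_kbps(24, 192000))  # 24/192 stereo: 9216.0 kbps
```

So the topic of this thread, 24/192 stereo, is actually 9216 kbps before any lossless compression, roughly twice the figure quoted above.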
 
Anybody got links to the Oohashi study itself?

It's a for-pay article on the AES web site. But save yourself the trouble, this one explains what's wrong with Oohashi:

http://www.aes.org/e-lib/browse.cfm?elib=10005

From the summary:

Shogo Kiryu and Kaoru Ashihara said:
When the stimulus was divided into six bands of frequencies and presented through 6 loudspeakers in order to reduce intermodulation distortions, no subject could detect any ultrasounds. It was concluded that addition of ultrasounds might affect sound impression by means of some non-linear interaction that might occur in the loudspeakers.

BTW, where's Barry?

--Ethan
 
BTW, where's Barry?

--Ethan
Ethan, please refrain from singling out members. Focus on the technical topic rather than challenging members to a duel. If the other poster wants to respond, he will. If he does not, he means to ignore you or is simply not reading the thread. Please don't agitate members and challenge them to a duel. We have higher standards in this forum. Please kindly note that the next warning will be more formal.
 
Hi rbbert,

An excellent illustration about why people who care about sound quality (audiophiles?) tend to dismiss this whole argument about why 24/192 is unnecessary.

From my perspective, that's the strange thing. If folks don't hear a benefit (or choose to make their determinations based on measurements they are comfortable with), my feeling is they should save their money and purchase the non-high res versions of whatever they want. It is when folks attempt to make determinations for others that I have to wonder about them. (Reminds me of the efforts made years ago to "legislate audio" by attempting to block ads for audio cables.)

To some of us, there are clear, immediate and obvious audible benefits to longer word lengths and higher sample rates. As I said in my first post in this thread, rather than argue with those who tell me I'm "wrong", I prefer to continue enjoying listening to and recording at high resolution.

Best regards,
Barry
www.soundkeeperrecordings.com
www.barrydiamentaudio.com
 
It's a for-pay article on the AES web site. But save yourself the trouble, this one explains what's wrong with Oohashi:

http://www.aes.org/e-lib/browse.cfm?elib=10005

From the summary:

--Ethan
I don't have a dog in this hunt but it seems to me, if speakers do sound different in the audible band when there are ultrasonics, and we then truncate the frequency response before we deliver the bits to the consumer, then that is another reason to not do that! :) Give the darn bits to the consumer -- with or without ultrasonic intermodulation.
 
...

BTW, where's Barry?

--Ethan

Barry has stated his position quite clearly, and if I were he, I would see nothing worthwhile in re-stating it.

It's good to see Amir carrying the fight, and those of us long ago sold on the benefits of higher resolution (than 16/44.1) digital audio certainly appreciate more concrete scientific and engineering data to support the obvious audible improvements. I doubt any of it will change the skeptics' minds.
 
Wasn't there a paper showing the effect of ultrasonics in the audio band was due to mixing and modulation in the speakers that created audio-band spurs? Times like these I wish I'd kept my AES membership...
 
Not really. I am not sitting there listening to a tone at 120 dB. A transient may last just a few milliseconds. Home theaters routinely hit 100+ dB. The THX spec, for example, requires 105 dB. I am not seeing warning signs on such equipment saying you are going to go deaf.

I never said you're going to go deaf. What I'm saying is that if you have audio playing at 120 dB (or even 90 dB for that matter), you're not that likely to be able to hear anything at the ATH level. And even if you could, I'd like to see the loudspeaker that has a low enough THD/IMD spec to avoid adding distortion that completely buries your LSBs.

That being said, you have an "easy" way of settling the issue for good. Can you find a realistic recording (e.g. existing commercial stuff, not something you make up just for this argument) where one can hear the difference between 24 bits and 16 bits? Could be a "commercial" (i.e. something you can buy in a store) music recording, a movie, ...
 
I don't have a dog in this hunt but it seems to me, if speakers do sound different in the audible band when there are ultrasonics, and we then truncate the frequency response before we deliver the bits to the consumer, then that is another reason to not do that! :) Give the darn bits to the consumer -- with or without ultrasonic intermodulation.

"There's a difference" doesn't mean it's better. In this case it's worse because you have inter-modulation distortion ending up in the audio band. And even if you like hearing that distortion, you can easily add it as a post-processing step and then down-sample to 48 kHz. That'll at least have the advantage that all speakers will sound the same instead of all having different distortion patterns. In fact, if you consider that IMD to be good, then you have the case that the higher the quality of your speakers, the less you benefit from the effect of ultra-sounds (because of reduced IMD).
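The intermodulation mechanism being described is easy to demonstrate numerically: pass two inaudible ultrasonic tones through a toy nonlinearity and a difference tone appears squarely in the audible band. The quadratic term below is an assumed stand-in for real driver nonlinearity, not a model of any actual speaker:

```python
import numpy as np

fs = 192_000                        # sample rate high enough for ultrasonics
t = np.arange(fs) / fs              # one second of signal
f1, f2 = 30_000, 33_000             # two inaudible ultrasonic tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy speaker: small quadratic nonlinearity (illustrative IMD source).
y = x + 0.1 * x ** 2

# Amplitude spectrum; with a 1-second window the bins fall on integer Hz.
spec = np.abs(np.fft.rfft(y)) / (len(y) / 2)
print(f"level at the {f2 - f1} Hz difference tone: {spec[f2 - f1]:.3f}")  # ≈ 0.100
```

The clean input has nothing at 3 kHz; the nonlinearity alone puts the tone there. This is the effect Kiryu and Ashihara attributed the "audibility of ultrasonics" to, and also why the distortion pattern differs from speaker to speaker.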
 
To some of us, there are clear, immediate and obvious audible benefits to longer word lengths and higher sample rates. As I said in my first post in this thread, rather than argue with those who tell me I'm "wrong", I prefer to continue enjoying listening to and recording at high resolution.

Best regards,
Barry
www.soundkeeperrecordings.com
www.barrydiamentaudio.com


I for one, do not consider you wrong, and enjoy your posts. And I am glad you are advocating higher res. I must commend you on your comment that if someone cannot hear hi-res, then by all means do not buy it. That said, I do not think hi-res is a cure-all for the digital world. And I do not think it is "better" than the CD medium. I have heard recordings (I'm sorry I can't recall which ones) where, to me, the 96 recording sounded better than the 192 recording of the same material. What I am going to do (if anyone cares :)) is buy a non-upsampling DAC that only does native processing but, to cover all my bases, can accept up to 192.
 
^^^ If you were me, that would be the best way to ensure 192 will be scrapped for 384 before the VISA bill was paid off... :)

The biggest variation in recording quality is IMO the source, mix, and mastering, not the resolution and rate of the playback. That said, I would hope hi-res recordings, like most higher-end products, reflect greater care in the recording and/or mastering process.

I have CDs of old records and it is clear that whoever did the CD mastering should be shot, or even worse, forced to listen to the mix at realistic levels on a good system vs. a boombox or car stereo. Of course, they might not notice the difference... :(
 
I for one, do not consider you wrong, and enjoy your posts. And I am glad you are advocating higher res. I must commend on your comment that if someone cannot hear hi-res, then by all means do not buy it.

Well, maybe you can help us by providing a short clip where 24/192 sounds audibly better than a 16/48 downscale from the same master?

That said, I do not think hi-res is a cure-all for the digital world. And I do not think it is "better" than the CD medium. I have heard recordings (I'm sorry I can't recall which ones) where, to me, the 96 recording sounded better than the 192 recording of the same material. What I am going to do (if anyone cares :)) is buy a non-upsampling DAC that only does native processing but, to cover all my bases, can accept up to 192.

Sorry to disappoint you but these days, most (all?) DACs I know about are actually sigma-delta ("1-bit") and they up-sample to the MHz range. Not that there's anything wrong with that.
 
"There's a difference" doesn't mean it's better. In this case it's worse because you have inter-modulation distortion ending up in the audio band. And even if you like hearing that distortion, you can easily add it as a post-processing step and then down-sample to 48 kHz.
No, I can't add it. You seem to be saying a lossy conversion is lossless here. I want to hear whatever distortion was there when the talent approved it in the studio. Once you take out the ultrasonics that were there at the time the music was produced, I have no way of putting back that signal-dependent content that originated from the instruments. You can't unring the bell, as the saying goes.

So it is not in this case about being better or worse but being as faithful as I can to what is the final product. Right now, if someone uses a tube pre-amp for the mic, there is distortion there that I also get to reproduce. In this instance though, I am told that I am better off hearing said mic without its tube distortion.

That'll at least have the advantage that all speakers will sound the same instead of all having different distortion patterns. In fact, if you consider that IMD to be good, then you have the case that the higher the quality of your speakers, the less you benefit from the effect of ultra-sounds (because of reduced IMD).
That is a problem I can't solve (differentiation between my speaker and what was used in the recording). But I can avoid adding more to it by decimating things for no good reason.
 
I never said you're going to go deaf. What I'm saying is that if you have audio playing at 120 dB (or even 90 dB for that matter), you're not that likely to be able to hear anything at the ATH level.
I am not "playing at 120 dB." My average level is far, far lower. But the system has the dynamic range that allows it, for moments in time, to peak that high, as it would if I were at the live venue. If that is too loud for me, I can turn the volume down.

If you are so worried about this issue, why not campaign to have amplifiers no more than certain watts? And volume controls with limiters in them?

And even if you could, I'd like to see the loudspeaker that has a low enough THD/IMD spec to avoid adding distortion that completely buries your LSBs.
This is a forum argument that is constantly put forward without proper foundation. There is no assurance that quantization noise automatically gets masked by THD of the downstream devices. Distortions are additive. One doesn't replace the other.
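The "distortions are additive" point can be made concrete: independent noise and distortion floors combine by power addition, so a second floor never lowers the total, and two comparable floors raise it by a full 3 dB. The level figures below are purely illustrative, not measurements of any actual equipment:

```python
import math

def sum_floors_db(*levels_db):
    """Combine independent noise/distortion floors by adding their powers."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Two equal floors add 3 dB:
print(sum_floors_db(-96, -96))   # ≈ -93.0 dB
# A much lower floor barely moves the total, but never improves it:
print(sum_floors_db(-60, -96))   # ≈ -60.0 dB
```

So whether the quantization floor is effectively masked depends on how far below the playback chain's own floor it sits; it is never simply "replaced" by it.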

Besides, the goal here should not be how crappy we can make it before the customer cries. We already have the CD. The question is, can we step it up some so that we know we have headroom and ample proof that we have built a system which, in all cases in which it may get used, is transparent. We have the technological know-how to do that. You seem to be on a campaign to say we should not go there, yet have not produced any economic or other reasons why we should not. If the studio is producing in 24 bits, why should I care to decimate it? So what if I may not hear the extra headroom? Are you going to tell me how small a house I should live in next? I say that half seriously but I hope you see the issue here. I have repeatedly made this point: tell me why I should listen to you. Make me a business case for this. Not just tugging at the quality bar to see how low you can make it and still get away with it.

When we had dial-up modems, we killed ourselves to get music into 20 kbps channels. A decade later, we don't have to do that, and people routinely stream multi-megabit-per-second video through their Internet connections. Their hard-disk-based systems can stream (locally) dozens of audio channels at the highest resolution. And the cost to store the bits is nothing compared to the cost to buy the music. Tell me why I should not move with the times. Like you, I used to care about squeezing out all the bits. But now? I don't live in the past anymore.

It is like you running a bank and deciding that anytime I go to withdraw $200, you are going to take 5 cents out because it won't matter to me. That might be right but folks are sensitive about that. Go and convince the pros to never use anything more than 16 and you may have a case there. But until they do, and the final product as the talent approves is 24 bits, then give them out if the labels are willing to do it and customers want it. You can take those bits and decimate them to 10 bits if you like and none of us would care or shed a tear :).

That being said, you have an "easy" way of settling the issue for good. Can you find a realistic recording (e.g. existing commercial stuff, not something you make up just for this argument) where one can hear the difference between 24 bits and 16 bits? Could be a "commercial" (i.e. something you can buy in a store) music recording, a movie, ...
I provided the AES research data on the dynamic range of real concert halls, the ability to capture the same, and the noise floor of listening rooms. That was not good enough for you? If not, why would further data be?

Have Ethan introduce you to his friend and forum member, Basspig and you will get real education on how much dynamic range is desired and available to folks :).

But again, this is not about proving what is audible. It is about you proving why decimation of the bits *must* occur as if there is some harm to me. You have not made any case for that. After all, if there is decimation needed, I can apply it using my favorite style of dither and noise shaping. I personally am very comfortable with Bob's recommendations that we could even go down to 14 bits with proper signal processing. But based on all the arguments I have had with people in this area, I am confident that few people understand and know how to perform the proper conversion. So I say don't do it. Give the bits. Get out of trying to justify why decimating is good for me. And let's move on with life and pursuit of real knowledge that matters.
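For the curious, a minimal sketch of what "dither and noise shaping" means when reducing word length: TPDF dither plus first-order error feedback, which tilts quantization noise toward high frequencies. This is illustrative only; real perceptual shapers like those discussed in Bob's paper use higher-order filters matched to the ATH:

```python
import numpy as np

def requantize_noise_shaped(x, bits=16, seed=0):
    """Requantize a signal in [-1, 1) to `bits` bits using TPDF dither
    plus first-order error feedback (a simple stand-in for a
    perceptually optimized noise shaper)."""
    rng = np.random.default_rng(seed)
    lsb = 2.0 / (2 ** bits)
    y = np.empty_like(x)
    err = 0.0
    for i, s in enumerate(x):
        # TPDF dither: sum of two uniform sources, +/- 1 LSB total span.
        d = rng.uniform(-lsb / 2, lsb / 2) + rng.uniform(-lsb / 2, lsb / 2)
        u = s - err                        # subtract the previous quantization error
        q = np.round((u + d) / lsb) * lsb  # quantize the error-corrected sample
        err = q - u                        # feed this sample's error forward
        y[i] = q
    return y
```

The error-feedback loop gives the quantization noise a (1 - z^-1) high-pass shape, moving it away from the midrange where hearing is most sensitive; getting the shape wrong is exactly the kind of misconversion that cannot be undone downstream.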
 
