Why 24/192 is a bad idea

^ Is now a bad time to ask about the effect different DACs/CDPs have on either 16/44 or 24/192, etc., sound-wise? Would you rather have a $500 DAC playing 24/192 or a SOTA DAC playing 16/44?
 
How do you get below -160 dB with a 16-bit DAC? That does not jibe with my admittedly limited understanding of the math...

The < -160 dB I'm seeing is the floor sitting well under the harmonic distortion peaks. The peaks are between -105 and -110 dB. The floor is just residual energy from the quantization noise not being 100% correlated with the input (since the quantized signal from the original 24-bit simulated DAC had a white-noise component). If there were no white noise in the original at all, the quantization noise energy should be entirely in the peaks, to the limits of numeric precision anyway.
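To make that concrete, here is a minimal numpy sketch of that kind of analysis (illustrative parameters and levels, not the poster's actual program): quantize a tone to 16 bits with no dither and look at where the error energy lands in a long FFT. The correlated part piles up in discrete spurs; whatever is decorrelated spreads across hundreds of thousands of bins and shows up as a much lower per-bin floor.

```python
import numpy as np

fs = 44100
n = 2**18                                  # long FFT: per-bin floor sits far below the spurs
k = round(1000 * n / fs)                   # put the tone exactly on an FFT bin (no window needed)
f0 = k * fs / n
t = np.arange(n) / fs

x = 0.5 * np.sin(2 * np.pi * f0 * t)       # -6 dBFS test tone
x += np.random.default_rng(0).normal(0.0, 2**-20, n)   # tiny white component, standing in for a hi-res source's own noise

q = np.round(x * 32767) / 32767            # straight 16-bit quantization, no dither

spec = np.abs(np.fft.rfft(q)) * 2 / n
db = 20 * np.log10(np.maximum(spec, 1e-12))

others = np.delete(db, [k - 1, k, k + 1])  # everything except the fundamental
print("largest spur:       %6.1f dBFS" % others.max())
print("median bin (floor): %6.1f dBFS" % np.median(others))
```

Each doubling of the FFT length lowers the per-bin floor by about 3 dB while leaving the discrete spurs untouched, which is how a per-bin floor far below -160 dB can coexist with roughly -98 dB of total 16-bit quantization noise.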
 
^ Is now a bad time to ask about the effect different DACs/CDPs have on either 16/44 or 24/192, etc., sound-wise? Would you rather have a $500 DAC playing 24/192 or a SOTA DAC playing 16/44?
I tested that a few years ago in a series of blind tests I performed. I compared DVD-A players feeding a then state-of-the-art Mark Levinson No. 36S when playing CD versus the same title in DVD-A. In every case, the ML DAC improved on the on-board DAC but could not match the improved 24/96 kHz version of the same material. Compared to the internal DAC in the transport, the ML DAC closed half the gap but could not go all the way.

Put simply, a better DAC did produce better fidelity from 16/44.1 but could not get it all the way up to 24/96 kHz. So a better source with a less competent DAC wins over a worse source with a hero DAC.

Testing was done with my Stax headphones. The differences I found were subtle in all cases and more audible in some material than in others (usually in lower-level notes in isolation and how they decayed into background noise).
 
^ Is now a bad time to ask about the effect different DACs/CDPs have on either 16/44 or 24/192, etc., sound-wise? Would you rather have a $500 DAC playing 24/192 or a SOTA DAC playing 16/44?

I'd take the one without the non-obvious fatal flaw. Think back on all the DACs you've owned... the annoying flaw that's different in each one and that you don't figure out until you've lost the receipt ;-)
 
Won't comment on the spaciousness of studio recordings but will say that there's nothing wrong with the dynamics in my view. Is the guitar as loud as the mandolin? Yes. Is that how it is in nature? No. But I don't think it's excessively compressed, I think it is mixed that way. Listen to the attack of those stringed instruments. It doesn't sound compressed to me. I could be wrong, though. Maybe someone who really knows compression can comment...Bruce?

Tim

I ripped the SACD, looked at the waveform, and found it to be compressed with limiting. All peaks are at -0.3 dBDSD. Definitely not racing stripes, though.
 
I did not follow this, sorry...

Noise decorrelation (dither) adds random noise to the signal, typically (not always) at the LSB level, and was originally added to help mask the correlated quantization noise floor and make it sound more "analog".

As others have correctly pointed out, dither has absolutely nothing to do with masking. It has everything to do with decorrelation, which has the effect of making an undesired sound seem more like random noise. So score a lot less than 100% understanding of dither.


In my mind, the situation is the opposite of what you stated, so I must be off-base. DC is the opposite of dither, since it is non-random by definition,

OK, you paraphrased my post pretty well, yet you claim to not follow. If paraphrasing isn't following, what is? ;-)

rounding has nothing to do with dither (again the opposite, since rounding correlates the LSBs better to the bit reduction)

I see a truism that is a straw man since it addresses an issue that I never raised, and an unsupported assertion.

and truncation adds distortion since LSB information is lost.

Here's where a poor understanding of decorrelation really strikes home.

Truncation does add error. If the error is correlated with the signal, then it is distortion. If it is not correlated with the signal, then it is not distortion but is something else. If the error is random or nearly so, then its audibility is determined by its spectral content, but in general it is already far less audible than a non-random signal (e.g. distortion). One of the interesting twists of the game is that we can manage the spectral content of the error so that it is far less audible.

If you look at how dither or other forms of decorrelation are usually done, they generally first decorrelate the error that truncation will create in the source data, and then do the truncation. So saying that truncation adds distortion isn't totally true. It adds error, but the error need not be distortion.

Also, if you look at, measure, or listen to the raw error that is in the signal after an unassisted truncation, particularly of test tones that are analogs of music, it can be really bizarre. It can go up when the music goes down. It's often nothing like THD or even IM. But it's nothing like noise, either. Since it can be nothing like noise, the idea that just a tiny amount of dither noise, no larger than the distortion itself, can make it instantly become so sonically benign pretty well kills any idea that the purpose of dither is masking.
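To put some numbers on that, here is a small numpy sketch (the tone level, FFT size, and helper names are illustrative, not from anyone's actual program): quantize a quiet tone to 16 bits with and without TPDF dither and look at the error spectrum. Undithered, the error is a deterministic function of the signal, so it piles up in discrete spurs at harmonics of the tone; with the dither applied before the quantizer, the error turns into a flat, signal-independent noise floor.

```python
import numpy as np

fs, n = 44100, 2**16
k = 1000                                              # tone on an exact FFT bin
t = np.arange(n) / fs
x = 10 ** (-70 / 20) * np.sin(2 * np.pi * (k * fs / n) * t)   # quiet tone, about 10 LSBs peak

lsb = 1 / 32768                                       # 16-bit step size
rng = np.random.default_rng(0)

def quantize16(sig, dither):
    # Rounding quantizer; plain truncation differs only by a half-LSB offset
    # and behaves the same with respect to correlation.
    d = (rng.random(len(sig)) - rng.random(len(sig))) * lsb if dither else 0.0
    return np.round((sig + d) / lsb) * lsb

def err_spectrum_db(sig):
    s = np.abs(np.fft.rfft(sig)) * 2 / n
    return 20 * np.log10(np.maximum(s, 1e-12))

for name, dith in (("no dither", False), ("TPDF dither before quantizing", True)):
    db = err_spectrum_db(quantize16(x, dith) - x)
    spurs = [db[h * k] for h in (3, 5, 7)]            # odd harmonics of the tone
    print("%-30s  odd-harmonic spurs: %s   median floor: %.0f dB"
          % (name, ["%.0f" % s for s in spurs], np.median(db)))
```

Without dither essentially all of the error energy sits in discrete, signal-related spurs; with it, the spurs collapse into a floor that carries a few dB more total power but is unrelated to the signal.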

I suppose dither might help truncation errors by decorrelating them; that would make sense.

So now you stumble around the truth but continue to de-emphasize THE BIG IDEA?

My models start ideal and then I add artifacts as required (dither, jitter, threshold errors, etc.). For the issue I was looking at in the other thread, I did not want dither.

I hate to sound harsh, but I see fundamental and debilitating misapprehensions. They get my blood flowing too fast, especially before my morning bowl of Aldi ersatz Cheerios and real milk. ;-)
 
Definitely!! :D

A very strange idea, given that PCM can get arbitrarily close to the mic feed, aside from the same basic limitations that afflict DSD.

Also note that much actual recording equipment that alleges it uses DSD starts out with a PCM-like conversion. This avoids some inherent problems with DSD that Vanderkooy and Lipshitz have pointed out and nobody can effectively rebut. In fact, when V&L pointed out some of the inherent problems with pure DSD at the AES, Sony jumped right in and said that they didn't use pure DSD.

Also, it doesn't take much in the way of DBTs to raise way too many relevant questions about these claims.

The usual audiophile response to all of the above is to shoot the messenger, so I'm waiting for that dance to begin. ;-)
 
I'd take the one without the non-obvious fatal flaw. Think back about all the DACs you've owned... the annoying flaw that's different in each one and you don't figure out until you've lost the receipt ;-)

The biggest problem with most of the DACs I've bought is that I was comparing them to the best DACs I already had, which were already sonically transparent. Therefore, they couldn't make many real audible improvements.

The most egregious audible problems are almost always in the music that went into the ADC at the start of its journey through digital land. These days those ADCs are generally really good, and even if they weren't, audiophiles and mastering engineers can't do much about them but live with them!
 
That is not how noise shaping is talked about or implemented. The two are distinct operations, with noise shaping shaping the quantization noise and dither neutralizing distortion. http://en.wikipedia.org/wiki/Noise_shaping

"Noise shaping is a technique typically used in digital audio, image, and video processing, usually in combination with dithering, as part of the process of quantization or bit-depth reduction of a digital signal. Its purpose is to increase the apparent signal to noise ratio of the resultant signal.
[...]

Noise shaping works by putting the quantization error in a feedback loop. Any feedback loop functions as a filter, so by creating a feedback loop for the error itself, the error can be filtered as desired. The simplest example would be:
[equation image from the article: the simplest error-feedback example]

[...]
Noise shaping must also always involve an appropriate amount of dither within the process itself so as to prevent determinable and correlated errors to the signal itself. If dither is not used then noise shaping effectively functions merely as distortion shaping — pushing the distortion energy around to different frequency bands, but it is still distortion. If dither is added to the process as:
[equation image from the article: the same error-feedback loop with dither added before the quantizer]
"


So it is clear that the two concepts are distinct and spoken about as such.
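As a concrete rendering of the quoted loop, here is a minimal first-order error-feedback requantizer in Python with TPDF dither inside the loop (an illustrative sketch; the function name and parameters are mine, not the article's or any product's implementation):

```python
import numpy as np

def noise_shape_16(x, seed=0):
    """Requantize to 16 bits with first-order error feedback and TPDF dither.

    The quantization error, made white and signal-independent by the dither,
    is fed back and subtracted from the next sample, so the error reaching the
    output is shaped by (1 - z^-1), i.e. pushed toward high frequencies.
    """
    rng = np.random.default_rng(seed)
    lsb = 1 / 32768
    y = np.empty_like(x)
    e = 0.0
    for i, s in enumerate(x):
        v = s - e                                   # subtract the previous error
        d = (rng.random() - rng.random()) * lsb     # TPDF dither, +/- 1 LSB
        y[i] = np.round((v + d) / lsb) * lsb        # dither, then quantize
        e = y[i] - v                                # error to feed back
    return y

# Quick check: the residual error should rise toward high frequencies, not be flat.
fs, n = 44100, 2**15
x = 0.25 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)
err = noise_shape_16(x) - x
db = 20 * np.log10(np.abs(np.fft.rfft(err)) * 2 / n + 1e-12)
print("mean error level below ~5.5 kHz:  %.1f dB" % db[: n // 8].mean())
print("mean error level above ~16.5 kHz: %.1f dB" % db[3 * n // 8 :].mean())
```

Without the dither term the same loop still pushes the error upward in frequency, but the error remains a deterministic function of the signal, which is the "distortion shaping" the quoted article warns about.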

Now if you mean dither with different probability distributions, that is not how Bob or I used the term in the context you quoted.

Ahh, yet another Wikipedia article that awaits my corrections. ;-)

The history of noise shaping is that it was discovered, probably well before anybody put the quantization error into any feedback loop.

Someone thought: "Since people are complaining about hearing the dither noise, why not shift its spectral contents up where the ear doesn't work so well?"

Why do you think they call it noise shaping? They call it noise shaping because it was first done by shaping the spectrum of dither noise they added before the quantizer.

If you actually do this, the shifted dither signal shifts the spectrum of the quantization noise. This happens even if your quantization amounts to simple truncation with no feedback loops or nuttin'.

It turns out that even a low-level 21 kHz sine wave can do an almost usable job of messing with the quantization error so that it isn't so ugly sounding.

One finds these things out if one thinks outside the box and gets one's hands dirty... ;-)
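Here is a small numpy sketch of that pre-quantizer idea (illustrative only; the first-difference dither used here is the simplest possible shaping, far gentler than anything used commercially): the first difference of uniform noise has a triangular amplitude distribution and a rising, high-pass spectrum, so adding it before a plain quantizer, with no feedback loop at all, removes the correlated spurs and tilts the added noise toward high frequencies.

```python
import numpy as np

fs, n = 44100, 2**16
lsb = 1 / 32768
rng = np.random.default_rng(1)

x = 0.25 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)

# High-passed TPDF dither: difference of uniform noise -> triangular PDF,
# with its energy tilted toward high frequencies.
u = (rng.random(n + 1) - 0.5) * lsb
d = u[1:] - u[:-1]

y = np.round((x + d) / lsb) * lsb          # plain quantizer: no feedback loop
db = 20 * np.log10(np.abs(np.fft.rfft(y - x)) * 2 / n + 1e-12)

print("mean error below ~5.5 kHz:  %.1f dB" % db[: n // 8].mean())
print("mean error above ~16.5 kHz: %.1f dB" % db[3 * n // 8 :].mean())
```

The tilt here is only a few dB from the bottom of the band to the top; it is the existence of the effect with no feedback loop, not its magnitude, that matters for the historical point above.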
 
arnyk: You said:

A DC offset is unconditional and can easily be disqualified as being dither. Rounding is conditional and just may be random and uncorrelated enough to count as dither.

Realizing I am clearly your inferior and too stupid to continue, nevertheless when you said "can easily be disqualified as being dither" I read you as implying that a DC offset could be disqualified from consideration because it was dither. Reading your kind appraisal of my assessment, I conclude you meant DC is not dither, thus we are in agreement?

Masking I explained in an earlier post. As I said, I am not a scientist and use words the way I learned them in context decades ago. I am not sure that implies an ignorance of noise decorrelation but whatever.

Rounding you mentioned in your post, and that is what I responded to in my reply. When I went to school, rounding and truncation were correlated with the signal, at least if I remember expectation functions and the like. However, a dirt-floor ag college or LA party palace is clearly no match for your august training, and I am way past my prime.

As for truncation error vs. distortion, from some points of view any error is distortion of some sort. Clearly not yours, nor even mine much of the time, but at this point I believe I should have followed my first instinct to stay out of this thread, leave it to the experts such as yourself, and try to glean what knowledge I can from the discussion that follows.

This is insane...

p.s. xiphmont -- Please send a PM with your email address and I will send my program, or at least the key steps, to try to figure out what I am seeing for spurs and noise floor. Somehow we are really off, but it is likely (based upon this thread) simply something stoopid I am doing and I would really like to know what.
 
The biggest problem with most of the DACs I've bought is that I was comparing them to the best DACs I already had, which were already sonically transparent. Therefore, they couldn't make many real audible improvements.

The most egregious audible problems are almost always in the music that went into the ADC at the start of its journey through digital land. These days those ADCs are generally really good, and even if they weren't, audiophiles and mastering engineers can't do much about them but live with them!

Here's where I'm still confused. You're saying that comparing a direct microphone feed to an ADC-DAC cycle of that feed should show no audible difference, and yet that is generally untrue. Very small differences, perhaps, but consistently noticeable differences nonetheless.
 
Here's where I'm still confused. You're saying that comparing a direct microphone feed to an ADC-DAC cycle of that feed should show no audible difference, and yet that is generally untrue. Very small differences, perhaps, but consistently noticeable differences nonetheless.

When I do this level-matched, time-synched, and double-blind, my listeners score no better than random guessing. Also true of a number of other experimenters. True for any audio source, including a live feed. True for many generations of re-recording, even with fairly humble converters.

I listen to live feeds from microphones for days at a time when I record band and choir festivals. My recordings sound nothing like what I hear from my recording desk which is typically just a few feet from the mics. That's true of the output of the mic preamps, and it's true of the recordings I make. I know enough about mics and acoustics to understand why.
 
The biggest problem with most of the DACs I've bought is that I was comparing them to the best DACs I already had, which were already sonically transparent. Therefore, they couldn't make many real audible improvements.
........!

Well, obviously not transparent enough, judging by your failure to hear differences in the file I linked to. The majority of people who have listened to these files have identified them correctly in blind tests. I guess they must have more transparent systems than you or Ethan??

There's nothing to PM you about. The files sound the same to me, and for the ones that are different those differences are about 48 dB down. What are these files supposed to show, and how do they prove that bit depth affects more than the noise floor?

--Ethan

My results are the same. I was under the impression that the samples were about reconstruction-filter pre/post-echo. If they are, they prove my point, because they don't show any artifacts anywhere near as large as what the article their provider cited claims is audible, not by a country mile!
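For anyone who wants to check the "about 48 dB down" kind of figure themselves, a null test is only a few lines of numpy (the file names below are placeholders, and the files must already be sample-aligned and at matching sample rates for the subtraction to mean anything):

```python
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("version_a.wav")      # placeholder file names
rate_b, b = wavfile.read("version_b.wav")
assert rate_a == rate_b, "sample rates must match before nulling"

a = a.astype(np.float64)
b = b.astype(np.float64)
m = min(len(a), len(b))
diff = a[:m] - b[:m]                           # assumes the files are sample-aligned

rms = lambda s: np.sqrt(np.mean(s ** 2))
print("difference sits %.1f dB below the program material"
      % (20 * np.log10(rms(a[:m]) / max(rms(diff), 1e-20))))
```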
 
When I do this level-matched, time-synched, and double-blind, my listeners score no better than random guessing. Also true of a number of other experimenters. True for any audio source, including a live feed. True for many generations of re-recording, even with fairly humble converters.

I listen to live feeds from microphones for days at a time when I record band and choir festivals. My recordings sound nothing like what I hear from my recording desk which is typically just a few feet from the mics. That's true of the output of the mic preamps, and it's true of the recordings I make. I know enough about mics and acoustics to understand why.

And yet, when others (apparently not of your acquaintance) do the test, or when mastering engineers are tested, no data storage system, digital or analog, is transparent to the mic preamp output??
 
I have never found ADCs and DACs to be 100% sonically transparent. There are always differences.
 
Well, obviously not transparent enough, judging by your failure to hear differences in the file I linked to. The majority of people who have listened to these files have identified them correctly in blind tests. I guess they must have more transparent systems than you or Ethan??

Blind or double blind?
 
And yet, when others (apparently not of your acquaintance) do the test, or when mastering engineers are tested, no data storage system, digital or analog, is transparent to the mic preamp output??

Two words: double blind. IME it's claimed far more often than actually delivered, and often not claimed at all. Of course, that implies level-matched and time-synched.
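The "level-matched and time-synched" preparation is itself mechanical and easy to automate. Here is one minimal way to do it (a sketch that assumes integer-sample offsets and no clock drift, neither of which holds for real ADC/DAC loops without further work; the function name is mine):

```python
import numpy as np
from scipy.signal import correlate

def align_and_level_match(ref, dut):
    """Time-sync 'dut' to 'ref' by cross-correlation, then match RMS level."""
    lag = int(np.argmax(correlate(ref, dut, mode="full"))) - (len(dut) - 1)
    if lag > 0:                                   # dut starts early: delay it
        dut = np.concatenate([np.zeros(lag), dut])
    else:                                         # dut starts late: trim its head
        dut = dut[-lag:]
    dut = dut[: len(ref)]
    dut = np.pad(dut, (0, len(ref) - len(dut)))   # pad out to the same length
    gain = np.sqrt(np.mean(ref ** 2) / np.mean(dut ** 2))
    return dut * gain, lag, 20 * np.log10(gain)
```

In a real comparison you would also check for fractional-sample offsets and sample-clock drift before trusting the resulting null.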
 
I have never found ADCs and DACs to be 100% sonically transparent. There are always differences.

Please explain why that happens when technical tests show the artifacts of many good DACs to be orders of magnitude below the now well-known thresholds of hearing.

Please explain all of the tests where signals were passed through more than 10 ADC/DAC pairs effectively in series and the degradation was not instantly heard by large numbers of listeners.
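A purely digital stand-in for the many-passes idea (not a model of real converters, just repeated dithered requantization to 16 bits, with illustrative parameters) shows how slowly benign, uncorrelated error accumulates:

```python
import numpy as np

lsb = 1 / 32768
rng = np.random.default_rng(2)

def requantize16(sig):
    d = (rng.random(len(sig)) - rng.random(len(sig))) * lsb   # TPDF dither
    return np.round((sig + d) / lsb) * lsb

fs, n = 44100, 2**16
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)

y = x.copy()
for generation in range(10):                                  # ten "copies" in series
    y = requantize16(y)

err_rms_db = 20 * np.log10(np.sqrt(np.mean((y - x) ** 2)))
print("accumulated error after 10 passes: %.1f dBFS RMS" % err_rms_db)
```

Ten dithered 16-bit passes leave the accumulated error somewhere in the mid -80s dBFS, still an uncorrelated hiss far below the program material; real converter chains add analog noise on top, but the digital part of the story accumulates this slowly.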
 
There's nothing to PM you about. The files sound the same to me, and for the ones that are different those differences are about 48 dB down. What are these files supposed to show, and how do they prove that bit depth affects more than the noise floor?

--Ethan

My results are the same. I was under the impression that the samples were about reconstruction-filter pre/post-echo. If they are, they prove my point, because they don't show any artifacts anywhere near as large as what the article their provider cited claims is audible, not by a country mile!

Blind or double blind?
Same test as you did: downloaded unidentified files, and no contact with me until a PM of results.
 
