Is digital audio intrinsically "broken"?

I enjoy an academic exercise as much as the next guy :), but once you're using the system, listening to music, audibility is the only thing that matters. And on that front, there is a whole generation of chips out there that, when implemented properly, seem to inexpensively eliminate jitter for all practical purposes.

As far as "broken" is concerned, in the sense that you're using the word, yes, digital audio is broken. Of course in that sense so is the speaker.

Tim

The difference is that there is an ideal way of doing protocol transmission/architecture for audio, unlike speakers.
Cheers
Orb
 

The commonality is that it is all "broken." If all digital transmission were asynchronous, all audio reproduction would still be broken, and digital sources would not effectively sound better than well-implemented examples sound today. Just trying to find that fine line between "What's Best" and "What's Pedantic." :)

Tim
 

Hehe now I like that last line, that is perfect for all hobby forums :)
Made me smile.
Cheers
Orb
 

Tim,

You are trying to find something that does not exist per se. Such a fine line does not exist, as each of us draws our own. The good thing about forums is presenting our "lines" and debating them, knowing from the start that we will never fully agree on all of them.
 
Your post enlightening us about why digital audio is "intrinsically broken" would have been much clearer and more accurate with that qualifier.
Well, clearly I did not think folks would be so taken aback by the wording there. As a guy who specializes in digital, I do not see criticism of it as being an issue, but I hear you on others reading more into it than I intended.

You wrote off digital audio based on an ancient interface (SPDIF) and one that isn't very good for audio (HDMI) while ignoring methods that are much better.
I did not ignore them. I was clear about which link was broken and in what way. I then made reference to other ways that it could be improved. S/PDIF and HDMI in the way I explained them are used to carry digital audio 99.99999% of the time. That was the focus of my article: to shine a light on the most common method being based on an architecture that could have been better.

RME soundcard - I found the review that the graph you included came from, in which the RME card was used. Here is what John Atkinson said about the DAC used in the jitter comparisons:

"I chose the Assemblage because it appears to have the worst rejection of incoming datastream jitter of the DACs I had to hand."

Do you believe that any digital output source can produce high quality results with any SPDIF-based DAC, however cheap and poorly implemented?
Think about what you just said. If the architecture is not broken, why would the receiver need to not be cheap? Why do I need to worry about it being so sensitive to its *digital* input? Remember, you said Pro cards are good on PCs. How come that Pro card is putting out so much jitter that the receiver needs to reject it? Do you think that if I buy a cheap $50 external USB hard drive, it has far more errors than a $300 one as I write my files to it? Of course not. We use those drives all day long and don't demerit them for being "cheap and poorly implemented." Jitter is always present on USB for data transfers, but the system is designed in a way that once you achieve functionality, performance also comes with it. Not so with our common S/PDIF and HDMI connections.
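
To make the contrast concrete, here is a toy sketch of that design principle (the `link` object is a hypothetical stand-in, not a real USB API): data transfers are checked and retried, so correctness does not depend on the quality of the hardware or on arrival timing. S/PDIF has no such return path; the arriving edges themselves become the DAC's clock.

```python
# Illustrative sketch only (the link.deliver call is hypothetical, not a
# real USB API): bulk data transfer verifies every block and retries on
# error, so a cheap drive and an expensive one hand you identical bits.
# Arrival timing never matters because the data is buffered, not played
# as it arrives.
import zlib

def send_block(link, payload: bytes, max_retries: int = 5) -> None:
    crc = zlib.crc32(payload)
    for _ in range(max_retries):
        if link.deliver(payload, crc):  # receiver recomputes the CRC and ACKs
            return                      # bits are exact; jitter was irrelevant
    raise IOError("link failed")        # errors are detected, never silent
```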

Here is a link to Stereophile's review of the RME soundcard from 2000:

http://forum.stereophile.com/computeraudio/299/index.html

Overall jitter for toslink output to a Musical Fidelity X-24K D/A converter was described as "a low 248 picoseconds peak-peak". With a coax connection, "The jitter level has increased to a still-good 686ps." (I think that the Miller instrument Stereophile used in 2000 has been replaced by a higher precision model in the intervening years.)
Careful. There is an issue there. Anytime you use a non-isolated interface between a PC and another device, you could get system dependencies. That DAC likely saw a different signal than the instrument did with the RME card. This is why I always emphasize electrical isolation as an important consideration between PC servers and DACs. Clearly we have a proof point here that jitter as seen by that DAC is far higher than those numbers represent. Yet we know the DAC didn't introduce them since using a different interface sharply reduced their levels.

You said "That level of jitter is alarming to me especially coming from a pro sound card." I think your statement is not warranted given the context.
It absolutely is. You are taking the source in isolation where I showed the complete system end-to-end. The measurements showed exactly what I stated: alarming levels of jitter in that pairing. You are trying to infer from a source measurement that this can't be, because that device is supposed to be much better. Well, "supposed to be" didn't work out in this situation because, again, we have a broken system. If the digital interface were sending PCM samples with timestamps for when to play them, do you think there would have been variability between testing the RME card by itself and connected to the DAC using two different methods? Or should I say, if there were any, it would be much smaller (for reasons I won't dig into this minute, as they are unrelated to this topic).
 
I enjoy an academic exercise as much as the next guy :), but once you're using the system, listening to music, audibility is the only thing that matters.
Tim
I disagree. That is not all we do with audio. We also come to forums like this and talk about how it works. My purpose in writing these posts is to correct misstatements there. If we get there, we are part of the way toward understanding each other's point of view. The notion that digital is perfect is stated all the time as a fact and as how the system works. As in the example of the 1080p versus 720p display, it is absolutely important that we distinguish between those two display technologies as we talk about them, even though some, or even almost all, of us sat too far away to perceive the difference.

Once we agree on the science, we are also better prepared to test it for audibility. If one assumes digital audio is perfect in the way it is transmitted, one would not even think about testing that as a contributing factor to fidelity. Knowing how the system works is critical to verifying our assumptions about its audibility.
 
Amir, I've done a bit more research and would now tend to see the DIR9001 as a bit of a dud. And the reason follows from the second point I made in my previous post, "they not telling us something which is relevant": namely, how sensitive the chip is to input jitter, which, after all, is what the device should be all about. People mention getting hundreds of psecs of jitter from this chip in a real circuit.

So I'll bring in my 2nd contender -- the Wolfson WM8804. This key aspect has been addressed in the chip, to the degree that it can be used to de-jitter an S/PDIF stream: it becomes a simple pass-through device to "fix" the S/PDIF signal. So for extreme cleaning you could daisy-chain these chips to get intrinsic jitter down to 50 ps, also. A document on their website, http://www.wolfsonmicro.com/documen...e_of_spdif_digital_interface_transceivers.pdf, details their advantages, with some interesting measurement screen captures ...

Frank
I encourage everyone who thinks S/PDIF is not broken to read that paper. It is simply written and includes very nice measurements of common S/PDIF transceivers used in consumer products. It has a nice intro, such as:

"The WM8804/5 excels in meeting and exceeding these performance metrics. Notably, the
intrinsic jitter of the WM8805 is measured at 50ps, and the jitter rejection frequency of the
onboard PLL is 100Hz. This can be directly compared with competitive S/PDIF solutions
available today, with intrinsic jitter in the region of 150ps, and jitter rejection frequency greater
than 20kHz.

The point of this illustration is that competitive solutions do little to attenuate jitter. An audio
system including today’s generation of S/PDIF transceivers cannot attenuate jitter on
incoming signals, and adds a significant level of jitter to the overall system. This makes it very
difficult to design a system to pass industry standard specifications. "

I constantly hear that there is a PLL inside the receiver to get rid of clock jitter, yet folks don't understand that, like any filter, it has a certain response, and unfortunately, as the doc mentions, the filtering of jitter doesn't start until 20 kHz or higher. That allows the receiver to capture digital data but, of course, pass through jitter that is at audio frequencies.
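
To put numbers on that corner-frequency point, here is a rough sketch (my own illustration, assuming an idealized first-order loop; not from the Wolfson doc) of how much a PLL attenuates a jitter component at 1 kHz, squarely in the audio band, for the two corners the paper mentions:

```python
# Sketch using an idealized first-order low-pass jitter transfer,
# |H(f)| = 1 / sqrt(1 + (f/fc)^2). Real loops are higher order, but the
# corner-frequency point is the same.
import math

def attenuation_db(f_jitter: float, corner: float) -> float:
    return 20 * math.log10(1 / math.sqrt(1 + (f_jitter / corner) ** 2))

for corner in (100, 20_000):   # WM8805-style corner vs. typical receiver
    print(corner, round(attenuation_db(1_000, corner), 1))
# 100   -20.0  -> a 100 Hz loop knocks 1 kHz jitter down by ~20 dB
# 20000 -0.0   -> a 20 kHz loop passes 1 kHz jitter essentially untouched
```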

Back to your device: yes, it is a high performance implementation. This lousy interface of ours requires it :). I looked up the cost: it is $4 in single-piece quantity and $2 in 1,000 pieces. That is crazy expensive in consumer electronics terms. Imagine an AVR with half a dozen of them for its inputs. To put that in context, you can buy a System on a Chip (SoC) that performs all the functions of a Blu-ray player, with 1 GHz+ CPU cores, a graphics subsystem, the ability to decode three high-definition video codecs, and a DSP to decode any number of audio formats, in volume, for $6! The budget for such interfaces is in cents for typical products.

As to cascading chips, be careful. It is actually quite simple to filter out jitter. The problem becomes lock time. If you switch inputs and it takes 5 seconds to hear audio, people get annoyed. Reducing lock time unfortunately changes the filtering ability.
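
A back-of-the-envelope sketch of that tradeoff (the settling constant is my own assumption, purely illustrative; not a datasheet figure):

```python
# Rough rule-of-thumb sketch: lock time scales inversely with loop
# bandwidth, so the narrower you filter jitter, the longer the mute when
# the user switches inputs.
LOOP_TIME_CONSTANTS_TO_LOCK = 50       # assumed settling requirement

for corner_hz in (20_000, 100):
    lock_ms = LOOP_TIME_CONSTANTS_TO_LOCK / corner_hz * 1000
    print(f"{corner_hz} Hz loop -> ~{lock_ms:.1f} ms to lock")
# 20000 Hz loop -> ~2.5 ms to lock
# 100 Hz loop -> ~500.0 ms to lock
```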

Finally, you have to be very careful not to go by bench results of these chips. Putting them in real systems and subjecting them to real-world conditions often reduces their performance some.
 
Micro & Amir: Fair enough X 2

Tim
 
I did not ignore them. I was clear about which link was broken and in what way.

You were not clear in your original post and you have not read my responses with understanding since. Here is an example from your latest post:

> Think about what you just said.

What a condescending way to start.

> If the architecture is not broken, why would the receiver need to not be cheap?

I stated my opinion of SPDIF in my first reply to your posts. You have ignored what I said in that post and in later ones.

> If the architecture is not broken, why would the receiver need to not be cheap?

You have never stated your definition of the requirements for an architecture for digital audio. Your remarks suggest that it must be integrated with video architecture. You also seem to feel that motherboard output is important (or maybe HDMI from a graphics card).

Other people may not feel the need for the same set of restrictions that you do and may not have the same set of requirements. Until you state the context for your remarks, your pronouncements just sound arbitrary to people with different requirements.

In particular, your equating digital audio architecture with SPDIF and HDMI and rejecting other approaches is your own arbitrary choice. If those approaches are broken, then looking for alternatives seems a reasonable course of action.

> S/PDIF and HDMI in the way I explained them are used to carry digital audio 99.99999% of the time.

Another example of an arbitrary definition of the universe. If we are talking about digital audio architecture in general, the digital audio stream is often converted to analog before it leaves the source device. Some examples:

CD or DVD player connected via analog cables to a pre-amp, integrated amp or receiver.
iPod/iPhone/iPad with analog output to headphones.
PC running iTunes (or an equivalent) with analog output to desktop speakers.
PC running iTunes with analog output to a stereo system.
Sonos or Squeezebox using analog output to an amp or powered speakers. (and ethernet or wireless/TCP/IP to move digital data between devices.)

I'd say this adds up to far more than 0.00001%.

> You are taking the source in isolation where I showed the complete system end-to-end.
> The measurements showed exactly what I stated: alarming levels of jitter in that pairing.

I was careful to quote your remark dismissing the RME card before commenting on the test.

From that graph YOU drew the conclusion that one component in the system was unacceptable. You are ascribing behavior to me that you committed and I commented on.

If SPDIF is broken as a system, why are you drawing conclusions about one component when the other component is known to be a very poor performer with respect to rejecting jitter?

> You are trying to infer from a source measurement that this can't be, because
> that device is supposed to be much better.

Those inferences are your own. I quoted the Stereophile reviews. I added a single sentence suggesting that your dismissal of the RME card was not warranted.

I see no point in continuing this exchange.

Bill
 
You have never stated your definition of the requirements for an architecture for digital audio.
Of course and repeatedly so. I mentioned pull as one method. And timestamped data transfer as another.
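
To show what I mean by the latter, a toy sketch (the names and fields are hypothetical, not any shipping protocol):

```python
# Hypothetical sketch of "timestamped data transfer": the sender tags
# samples with a presentation time, and the receiver plays them against
# its own local clock, so jitter on the link never reaches the DAC clock.
from dataclasses import dataclass

@dataclass
class TimestampedPacket:
    play_at: float    # presentation time on the shared media timeline (s)
    samples: bytes    # PCM payload

def pump(packet: TimestampedPacket, local_clock_s: float, fifo: bytearray) -> None:
    if local_clock_s >= packet.play_at:
        fifo.extend(packet.samples)   # clocked out by the DAC's own crystal
```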

Your remarks suggest that it must be integrated with video architecture.
Yes, or you break the consumer experience. If I have an AVR and decide to use an outboard DAC, the system shouldn't fall out of sync after a few seconds. That is not a consumer's expectation for an upgrade. The integration with video is *assumed.*

Of course, you can choose to use specialized systems for music that don't follow that rule. That is OK. But the architecture has been designed and built up for extension of audio with video. And one cannot violate those rules and still think that the original architecture is being maintained.

A good proof point is that no AVR has a button that unlocks its audio DAC clock from HDMI and lets it freewheel. That could easily be done, and jitter reduced as a result. But doing so makes the device non-compliant with the consumer experience as offered in our systems.

You also seem to feel that motherboard output is important (or maybe HDMI from a graphics card).
I never said that. I only mentioned those as examples that can aggravate the poor architecture we have.

Other people may not feel the need for the same set of restrictions that you do and may not have the same set of requirements. Until you state the context for your remarks, your pronouncements just sound arbitrary to people with different requirements.
Fine.

In particular, your equating digital audio architecture with SPDIF and HDMI and rejecting other approaches is your own arbitrary choice.
Again, not only did I not reject the other approaches, I also mentioned them as work-arounds.

If those approaches are broken, then looking for alternatives seems a reasonable course of action.
Those work-arounds do not solve the full problem because that is what they are: work-arounds. I cannot use a USB DAC while enjoying a Blu-ray concert or album. It lacks copy protection. And at any rate, audio/video sync will be lost. So we are stuck with HDMI and its broken push architecture. Please don't post again that all of this is about video. I am explaining that using an asynchronous method creates issues because the systems for distribution of content are not designed that way.

In our showroom, we have a theater with HDMI distribution. Then we have a 2-channel system with USB asynch bridge and such. I can't merge those two systems due to above issues.

> S/PDIF and HDMI in the way I explained them are used to carry digital audio 99.99999% of the time.

Another example of an arbitrary definition of the universe. If we are talking about digital audio architecture in general, the digital audio stream is often converted to analog before it leaves the source device. Some examples:

CD or DVD player connected via analog cables to a pre-amp, integrated amp or receiver.
iPod/iPhone/iPad with analog output to headphones.
PC running iTunes (or an equivalent) with analog output to desktop speakers.
PC running iTunes with analog output to a stereo system.
Sonos or Squeezebox using analog output to an amp or powered speakers. (and ethernet or wireless/TCP/IP to move digital data between devices.)

I'd say this adds up to far more than 0.00001%.
All of those are fringe applications. The mass market is about using a player with HDMI, directly to a TV or through an AVR. And in pre-DVD days, it was about Toslink and S/PDIF. The sum total of all the Sonos and Squeezebox units ever sold doesn't add up to the number of HDMI solutions sold in a day.

From that graph YOU drew the conclusion that one component in the system was unacceptable.
No. I drew the conclusion that our digital audio system can have high levels of distortion, unlike what you said, even though it is going through a Pro sound card out of the PC.

If SPDIF is broken as a system, why are you drawing conclusions about one component when the other component is known to be a very poor performer with respect to rejecting jitter?
I am drawing conclusions based on a long career in the space. The fact that I give you one example doesn't mean it is the sum total of all data. In the interest of writing a post and not a book, I gave you one reference. But the book is also written :). Here it is: http://www.whatsbestforum.com/showthread.php?1151-Audible-Jitter-amirm-vs-Ethan-Winer

Take a look at the detailed analysis including references to digital audio authorities such as Julian Dunn.

I added a single sentence suggesting that your dismissal of the RME card was not warranted.
Again, you are defending something that was never attacked. You brought in the topic of Pro cards, and I ran with your example, proving my point again about the *architecture* of digital audio.

I see no point in continuing this exchange.

Bill
I do but certainly can't make you post if you don't want to :). There is no reason to take these exchanges personally or so seriously. It is just technology and a hobby. I have built a lifetime and career working on digital audio. The notion that I criticize it doesn't come from any disdain for it, but in hoping to explain how it works and motivate us to move to better ways of doing things. I own no analog equipment at home and use digital systems exclusively. So if your angst on this is because you think I am trying to justify alternative forms of delivery, that is mistaken.

BTW, I did read your last post. But I chose not to respond to all of it since I could tell you were getting upset with me. That rewarded me with you saying I didn't read your post :(. So now I have responded to everything you said that I could comment on. Hopefully that doesn't get another rock thrown at me just the same :D.
 
I might just throw in some light relief now, and support Tim's point of view :)! And that is, that the level of jitter we're worrying about being present in the system is irrelevant to 99.9% of the music being listened to. Why, because the 100 to 150 psec of jitter is only going to cause audible distortion, if at all, to a full power 20kHz sine wave. And we all know that real music is full of that sort of content!! :D

No, the reality is that music has a waveform, a wriggling, which is nearly all the time highly tolerant of jitter; even nsecs of the stuff will, for the vast majority of the time, make absolutely no difference to the analogue output of the DAC, even when you work it out to the finest precision mathematically.

So, even for a really crappy recovered clock this jitter is of no importance to the DAC's performance. Which is why the engineers who designed the system weren't fussed about it, they knew even theoretically it was unimportant in the context of what was being dealt with: musical waveforms ...

Frank
 
Those work-arounds do not solve the full problem because that is what they are: work-arounds. I cannot use a USB DAC while enjoying a Blu-ray concert or album. It lacks copy protection. And at any rate, audio/video sync will be lost. So we are stuck with HDMI and its broken push architecture. <snip>

In our showroom, we have a theater with HDMI distribution. Then we have a 2-channel system with USB asynch bridge and such. I can't merge those two systems due to above issues.
This is the part that pisses me off. Putting aside the copy protection issue, I want a USB dac (need not be a standalone but instead can be inside the pre-pro) that can handle multi-channel music, e.g., DSD, DVDA, SACD, etc.
 
I might just throw in some light relief now, and support Tim's point of view :)!
I see a happy marriage coming! :D

And that is, that the level of jitter we're worrying about being present in the system is irrelevant to 99.9% of the music being listened to. Why, because the 100 to 150 psec of jitter is only going to cause audible distortion, if at all, to a full power 20kHz sine wave. And we all know that real music is full of that sort of content!! :D
We certainly don't need to worry about jitter if it is at such low levels.

No, the reality is that music has a waveform, a wriggling, which is nearly all the time highly tolerant of jitter; even nsecs of the stuff will, for the vast majority of the time, make absolutely no difference to the analogue output of the DAC, even when you work it out to the finest precision mathematically.
Well that is not true. Here is 7 ns jitter applied to a single sine wave:

[Figure: measured spectrum of a 10 kHz sine wave with 7 ns of periodic jitter applied - figure_1_Periodic_Jitter_10khz.jpg]

So it definitely changes the analog output of the DAC, as the above is a measurement, not a simulation. Those sidebands are there and are predicted to be exactly that much by the mathematics. That level of jitter, btw, is readily available from the HDMI input of modern AVRs, and routinely so.
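
For those who want to check the plot against the math, the sideband level follows from the standard narrowband FM approximation; a quick sketch (of the theory, not the measurement itself):

```python
# Sinusoidal jitter with peak time deviation dt on a tone at f0 produces
# sidebands at f0 +/- f_jitter, each at about 20*log10(2*pi*f0*dt / 2)
# relative to the tone.
import math

f0 = 10_000        # tone frequency in the plot, Hz
dt = 7e-9 / 2      # 7 ns peak-to-peak jitter -> 3.5 ns peak
print(round(20 * math.log10(math.pi * f0 * dt), 1))   # -79.2 dB
```

That -79 dB figure is consistent with the roughly -80 dB sidebands visible in the measurement.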

So, even for a really crappy recovered clock this jitter is of no importance to the DAC's performance. Which is why the engineers who designed the system weren't fussed about it, they knew even theoretically it was unimportant in the context of what was being dealt with: musical waveforms ...

Frank
There is plenty of evidence that says the people who designed it did not understand or appreciate jitter. Here is the famous jitter paper by Julian Dunn, which came out some 15 years or so after the introduction of the interface:

http://www.nanophon.com/audio/diagnose.pdf

"It is well known that severe timing jitter in AES3 digital audio interface can cause loss of lock or loss of data, and recently it has become understood that even small amounts of this interface jitter can affec the sound quality of analog-to-digital converters (ADCs) and digital-to-analog converters (DACS). The diagnosis and solutoin of these problems is still widely misunderstood amongst equipment desighers, with the result that system operators are often plagued by jitter problems."

BTW, this came from our technical library, which has this article and many others: http://www.whatsbestforum.com/showt...-Audio-signal-Processing-Papers-Presentations. Don's simulations and tutorials should be of special interest.
 
I see a happy marriage coming! :D
I'm sure everyone from WBF will come to the celebration! Everyone, perhaps, except Tim ... :)

Well that is not true. Here is 7 ns jitter applied to a single sine wave:

[Figure: measured spectrum of a 10 kHz sine wave with 7 ns of periodic jitter applied - figure_1_Periodic_Jitter_10khz.jpg]

So it definitely changes the analog output of the DAC, as the above is a measurement, not a simulation. Those sidebands are there and are predicted to be exactly that much by the mathematics. That level of jitter, btw, is readily available from the HDMI input of modern AVRs, and routinely so.
But that's what I'm talking about! That's a full strength 10kHz sine wave, show me the piece of music, on CD or file, that contains that peak, once, let alone repeated a number of times! And note that the sidebands are down over 80dB, that's better than the best R2R performance, it WILL be inaudible. In other words, it is a theoretical misbehaviour, which essentially never occurs in real listening ...

Frank
 
I'm sure everyone from WBF will come to the celebration! Everyone, perhaps, except Tim ... :)

But that's what I'm talking about! That's a full strength 10kHz sine wave, show me the piece of music, on CD or file, that contains that peak, once, let alone repeated a number of times! And note that the sidebands are down over 80dB, that's better than the best R2R performance, it WILL be inaudible. In other words, it is a theoretical misbehaviour, which essentially never occurs in real listening ...

Frank
Unfortunately, what you say is a double-edged sword. It is true that the amplitude of high frequencies is usually pretty low -- often way down, 60 to 80 dB down. But because it is that low, it can be impacted much more easily. You say the frequencies don't repeat. They actually do! You get two distortion sidebands for every tone in your music, and your music has many tones. Let me repeat that: you get two of those sidebands for every tone in your music. So what you get is thousands of new tones added to your music. What you hear is the sum total of all of those plus what was in your music. The net result is increased high frequency content, and music can start to sound a bit "bright."

That is not to say you hear this all the time or even often. But it is going to happen. You can't stop it because you can't control what tones exist in your music.

Further, the graph shows the most simplistic version of jitter, where it is a clean sine wave. There is no rule at all that your jitter has that spectrum. Take a USB bus that is sending 1K bytes at a time, at the end of which the receiver gets an interrupt to process that data. If that CPU activity causes a glitch on the power supply, and hence jitter, you will now have a pulse train for your jitter, full of harmonics. So now even a single tone in your music has an infinite number of jitter sidebands.

Here is an actual measurement from a TI BurrBrown USB DAC which had jitter due to USB data framing before the designer caught it and fixed the problem:

[Figure 9: measured FFT of a 1 kHz tone from the USB DAC, showing 100 Hz-spaced jitter sidebands - TI_DAC_FIG9.gif]

Notice the frequency of the music tone: 1 kHz. Notice all of those jitter sidebands, reaching up as high as -58 dB. This is the text that goes with that story:

"“At such a time it is human nature to want various people to see (hear) the result, so we demonstrated it to all of those purported to be 'Golden Ears.' The audio signal came through the PCM1716, a DAC with an industry-wide reputation, and the PLL as the PLL1700, which has excellent C/N performance.”

“When the guys in charge listened to the prototype I saw dubious faces and was asked a variety of questions such as "Is the source coming from the PC corrupted?" In the end I was told to measure the audio performance. When I announced the results in a subsequent meeting I was told the distortion was an order of magnitude too high; the THD+N was 0.03%.

I went into this thinking "Since we are processing digital signals, we can expect good sound as a matter of course, and from here on we are dealing with digital!" So this experience was a real shock.

Upon Raising the FFT Resolution . . . A 100 Hz Monster Appeared!
Next, in order to investigate the skirt around the fundamental, I decided to increase the FFT resolution to a higher setting than I usually use. Naturally it took longer to make the measurement. After a wait time that would best be measured in fractions of an hour, I was amazed at the FFT analyzer's output graph. The measured FFT is shown in Figure 9. [above]"

Even for a sample rate of 44.1 kHz, the USB isochronous mode packets have a period of 1 ms (1 kHz). In order to distribute 44.1 kHz across 1 ms intervals, one 45-sample packet is sent for every nine 44-sample packets. The tracking pulse (as we will call it here) for every 45 sample packet occurs once every 10 packets, or with a frequency of 100 Hz. Since the PLL loop filter, a so-called low pass filter, has its corner in the tens of kHz range, this 100 Hz tracking pulse goes right on through and shows up on the PLL's VCO control voltage. It appears as frequency jitter.

From the graph it is seen that the PLL frequency fluctuates impulsively right at 10 ms intervals. As a test I changed the sampling frequency to 48 kHz and measured the same 1 kHz signal.”
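
The packet arithmetic in that last quoted paragraph is easy to sanity-check:

```python
# At 44.1 kHz over 1 ms USB isochronous frames, nine 44-sample packets
# plus one 45-sample packet repeat every 10 frames (10 ms) -- a 100 Hz
# disturbance.
samples_per_cycle = 9 * 44 + 45    # 441 samples every 10 ms
cycles_per_second = 100            # ten 1 ms frames per cycle
print(samples_per_cycle * cycles_per_second)  # 44100 -> exactly the sample rate
print(cycles_per_second)                      # 100   -> the "100 Hz monster"
```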


And keep in mind that your music doesn't always mask the tone. If the frequencies are far apart (i.e., distortion vs. main tone), then masking doesn't work and you will hear the contributions. Here is an actual demonstration of jitter distortion where *both* the jitter and music tone are ultrasonic and hence inaudible: http://www.whatsbestforum.com/showthread.php?3808-The-sound-of-Jitter. I bet you hear the distortion even though the jitter levels are very low (-80 dB).
 
Here is an actual demonstration of jitter distortion where *both* the jitter and music tone are ultrasonic and hence inaudible: http://www.whatsbestforum.com/showthread.php?3808-The-sound-of-Jitter. I bet you hear the distortion even though the jitter levels are very low (-80 dB).
Just going to your last point first: that's not jitter, that's good old beat frequencies, a perfectly correct audio phenomenon. Take a (real) piano, detune one of the strings of a note by a significant amount, and hit the key. You get that "woooah ... woooah ... wooah" sound as the two clashing frequencies beat with each other. Sorry, nothing to do with jitter ...

You say the frequencies don't repeat. They actually do! You get two distortion sidebands for every tone in your music, and your music has many tones. Let me repeat that: you get two of those sidebands for every tone in your music. So what you get is thousands of new tones added to your music. What you hear is the sum total of all of those plus what was in your music. The net result is increased high frequency content, and music can start to sound a bit "bright."
What I said didn't repeat were the high-amplitude, high-frequency real music signals: you may get such peaks in the midrange, but not in the treble, unless the music is very bizarre (or compressed!). You may get "lots" of extra tones, but they are way down in the noise floor, and they don't arithmetically add; there's lots of cancellation too.

That is not to say you hear this all the time or even often. But it is going to happen. You can't stop it because you can't control what tones exist in your music.
No, but 99.99% of music will never have these "severe" tones.

Further, the graph shows the most simplistic version of jitter, where it is a clean sine wave. There is no rule at all that your jitter has that spectrum. Take a USB bus that is sending 1K bytes at a time, at the end of which the receiver gets an interrupt to process that data. If that CPU activity causes a glitch on the power supply, and hence jitter, you will now have a pulse train for your jitter, full of harmonics. So now even a single tone in your music has an infinite number of jitter sidebands.
That's what the receiver and PLL circuitry is all about: it reduces the jitter, no matter the cause, to within certain limits, assuming GOOD engineering. Enough so that jitter-generated distortion is 99.99% of the time inaudible.

Here is an actual measurement from a TI BurrBrown USB DAC which had jitter due to USB data framing before the designer caught it and fixed the problem:
Yes, it was a defective design and part, which had to be fixed. Then, problem solved!

And keep in mind that your music doesn't always mask the tone. If the frequencies are far apart (i.e., distortion vs. main tone), then masking doesn't work and you will hear the contributions.
If it's low in amplitude, by better than 60 dB, then it will. I have a sample track, and the same again at 60 dB down. The latter I struggle to hear running at full volume with my ear against the speaker driver; once you get close to 80 dB down, it's totally impossible! Also, keep in mind that an audiophile-priced cartridge produces vastly more distortion than this trying to reproduce a full-volume 10kHz sine wave, if someone could master it: no-one is complaining about the sound of these devices ...

Frank
 
Just going to your last point first: that's not jitter, that's good old beat frequencies, a perfectly correct audio phenomenon. Take a (real) piano, detune one of the strings of a note by a significant amount, and hit the key. You get that "woooah ... woooah ... wooah" sound as the two clashing frequencies beat with each other. Sorry, nothing to do with jitter ...
Of course it is jitter. I created the tones using FM modulation, which is what jitter does: it modulates the clock. Of course, that is also how FM synthesis works for creating musical tones. But just because you can do that doesn't mean it is not jitter. Since you are convinced that you can hear such modulations, then maybe you will doubt less that you can also hear jitter ;).
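
For anyone who wants to see the mechanism, here is a sketch of the idea (my own frequencies, chosen for illustration; not the ones used in the linked demo):

```python
# Jitter is phase/frequency modulation, so an ultrasonic tone with
# ultrasonic jitter still drops a sideband into the audible band at
# f0 - f_jitter.
import math

fs, f0, fj = 96_000, 30_000, 21_000   # sample rate, tone, jitter freq (Hz)
dt = 2e-9                             # 2 ns peak sinusoidal time jitter

jittered = [
    math.sin(2 * math.pi * f0 * (n / fs + dt * math.sin(2 * math.pi * fj * n / fs)))
    for n in range(fs)                # one second of samples
]
print("audible sideband lands at", f0 - fj, "Hz")   # 9000 Hz
```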

What I said didn't repeat were the high-amplitude, high-frequency real music signals: you may get such peaks in the midrange, but not in the treble, unless the music is very bizarre (or compressed!).
I addressed that point. Indeed, I took what you said as the assumption and showed you that the low level of the high-frequency spectrum actually makes it more susceptible to an increase in power from the injection of jitter sidebands in the same area.

You may get "lots" of extra tones but they way down in the noise floor, and they don't arithmetically add, there's lots of cancellation too.
High frequency noise can accentuate the high frequencies in the music. In photography, noise is sometimes added to increase the appearance of sharpness. You can't add a bunch of tones to your music and just hope and pray it all cancels out.

That's what the receiver and PLL circuitry is all about: it reduces the jitter, no matter the cause, to within certain limits, assuming GOOD engineering. Enough so that jitter-generated distortion is 99.99% of the time inaudible.
Again, read my quote from your Wolfson spec. Read the part where it says common PLLs have a corner frequency of 20 kHz. Not 20 Hz. But 20 kHz. It absolutely is possible to build very low jitter digital audio interfaces. And that is what well-engineered systems do, despite the challenge that is presented to them. That is not the point of this thread, but rather the fact that if the interface had been designed differently, we would not need all this extra work to get there.

Yes, it was a defective design and part, which had to be fixed. Then, problem solved!
Nope. The man is a PhD in electrical engineering. He designed a chip that he thought worked perfectly. His distortion was 0.03%. I don't call that "defective." It was the "golden ears" who first told him it didn't sound right. He then analyzed the design and discovered that digital wasn't digital after all. Had it not been for the listening tests, that product would have made it to market.

If it's low in amplitude, by better than 60dB then it will. I have a sample track, and same again at 60dB down. The latter I struggle to hear running at full volume with my ear against the speaker driver: once you get close to 80dB down, it's totally impossible! Also, keep in mind that an audiophile priced cartridge produces vastly more distortion than this trying to reproduce a full volume 10kHz sine wave, if someone could master it: no-one is complaining about the sound of these devices ...

Frank
Frank, why do you keep talking about analog tapes and now LP? What does that have to do with digital? Digital doesn't become broken or great because analog world did this or that.

My goal with digital is to do what it is advertised to do. If it says 16 bits, then I darn well expect the device to do 16 bits. I don't care if I don't hear past 15 bits. As you said, good engineering exists to do things right. And I expect them to get it there. The point of this thread is that their job is hard, and as consumers we need to pay attention to whether they pull it off. If you want to argue for a system that nets out 80 dB, be my guest. It isn't for me. I want us to at least achieve what we advertise the CD can do.

Here is the important bit: if the distortion is below the least significant bit of 16 bits, then I don't have to worry about what is audible and what is not, because by definition my system noise is higher than my distortion. Once you get above -96 dB noise/distortion, you get into the middle of the mud you are trying to dance in :), which is the debate over what is good enough. I want us to achieve what both sides of the debate can agree is inaudible, not what someone can try to justify as being inaudible. If we couldn't achieve that metric, then sure, we could talk about it. But we can. Despite the broken architecture :D. If we stay away from HDMI for now....
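
Here is that budget worked out (a sketch using the same narrowband approximation as earlier, with a full-scale 20 kHz tone assumed as the worst case):

```python
# How much sinusoidal jitter keeps every sideband below the 16-bit LSB
# floor of about -96 dBFS?
import math

floor_db = 20 * math.log10(2 ** -16)                 # about -96.3 dB
dt_peak = 10 ** (floor_db / 20) / (math.pi * 20_000)
print(f"max jitter ~ {2 * dt_peak * 1e12:.0f} ps peak-to-peak")   # ~486 ps
```

Note how the 248 ps and 686 ps Stereophile numbers quoted earlier in the thread sit right around that kind of budget.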
 
