Is digital audio intrinsically "broken"?

Of course it is jitter. I created the tones using FM modulation, which is what jitter does: it modulates the clock. Of course, that is also how FM synthesis works for creating musical tones. But just because you can do that, it doesn't mean it is not jitter. Since you are convinced that you can hear such modulations, then maybe you will be less doubtful that you can also hear jitter ;).
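To make that concrete, here is a minimal sketch (assuming Python with numpy; the tone and jitter values are illustrative, not the ones used in the actual test files) showing how a sinusoidally modulated sample clock, i.e. FM, puts sidebands around a tone:

```python
import numpy as np

fs = 44100          # sample rate, Hz
f0 = 11025          # tone frequency, Hz (illustrative)
fj = 3000           # jitter (modulation) frequency, Hz (illustrative)
dt = 2e-9           # peak timing error, 2 ns (illustrative)
n = np.arange(2**16)

# Ideal sampling instants vs. instants perturbed by sinusoidal jitter
t_ideal = n / fs
t_jittered = t_ideal + dt * np.sin(2 * np.pi * fj * t_ideal)

# Sampling the tone at the wrong instants is exactly FM of the tone
x = np.sin(2 * np.pi * f0 * t_jittered)

# Spectrum: sidebands appear at f0 +/- fj, at a level set by the jitter depth
spectrum = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-20)
spectrum -= spectrum.max()
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (f0, f0 - fj, f0 + fj):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:7.0f} Hz : {spectrum[k-2:k+3].max():6.1f} dBc")
```

With these numbers the sidebands land roughly 80-odd dB below the carrier; scale the timing error up or down and they move with it.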
I had a look at the lowest amplitude sample in Audacity, the jitter-80-20.wav, and you can easily see what it is: a 24 kHz or so tone that cycles in amplitude between silence and -20 dB in volume. In other words, there is a beat frequency imposed on a still-high-amplitude ultrasonic tone, which is stressing the PC DAC, and it's spitting out distortion -- in my world this has nothing to do with jitter ...

I addressed that point. Indeed, I took what you said as the assumption and showed you that the low level of the high-frequency spectrum actually makes it susceptible to an increase in power due to the injection of jitter sidebands in the same area.
That doesn't make sense to me ...

Nope. The man has a PhD in electrical engineering. He designed a chip that he thought worked perfectly. Its distortion was 0.03%. I don't call that "defective." It was the "golden ears" who first told him it didn't sound right. He then analyzed the design and discovered that digital wasn't digital after all. Had it not been for the listening tests, that product would have made it to market.
His original distortion test was insufficient; once the right measuring tool was applied, he could see it was defective. Sort of like how a lot of audio engineering is done, perhaps: a quick round of the obvious tests, none of the tricky ones (takes too much effort); okay, product must be OK, put it out there ..

My goal with digital is for it to do what it is advertised to do. If it says 16 bits, then I darn well expect the device to do 16 bits. I don't care if I don't hear past 15 bits. As you said, good engineering exists to do things right. And I expect them to get it there. The point of this thread is that their job is hard, and as consumers you need to pay attention to whether they pull this off. If you want to argue for a system that nets out 80 dB, be my guest. It isn't for me. I want us to at least achieve what we advertise the CD can do.

Here is the important bit: if the distortion is below the least significant bit of 16 bits, then I don't have to worry about what is audible and what is not, because by definition my system noise is higher than my distortion. Once you get above -96 dB noise/distortion, then you land in the middle of the mud you are trying to dance in :), the argument over what is good enough. I want us to achieve what both sides of the debate can agree is inaudible, not what someone can try to justify as being inaudible. If we couldn't achieve that metric, then sure, we could talk about it. But we can. Despite the broken architecture :D. If we stay away from HDMI for now....
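For reference, the -96 dB figure is just the level of the least significant bit in a 16-bit system; a one-line check of the arithmetic:

```python
import math
print(20 * math.log10(1 / 2**16))   # about -96.3 dB: one LSB relative to full scale
```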
You're expecting miracles if you believe that one can continuously get better than -96 dB noise/distortion; this is pure specmanship now. Like a normal car advertised to be able to do 120 mph, and expecting it to do that continuously and without fuss; yes, in exactly the right circumstances it'll get there for a short period of time, to prove a point, but for what purpose?

The "middle of the mud" is a lot, lot higher than -96dB, otherwise R2R would sound pretty dreadful. 96dB rolls off the tongue easily, but it means something is accurate to close enough to 1 part in 100,000, which in electrical engineering terms is really pushing the envelope, and people's ears are a lot more tolerant than that. Sorry, this is really falling in the camp of those notorious Japanese amps with 0.001% distortion which sounded awful: focusing on getting one thing right at the expense of getting the overall balance of engineering right will not provide a winning formula.

Frank
 
I had a look at the lowest amplitude sample in Audacity, the jitter-80-20.wav, and you can easily see what it is: a 24 kHz or so tone that cycles in amplitude between silence and -20 dB in volume. In other words, there is a beat frequency imposed on a still-high-amplitude ultrasonic tone, which is stressing the PC DAC, and it's spitting out distortion -- in my world this has nothing to do with jitter ...
Stressing the DAC? Is that a technical term? Walk me through what stressing the DAC means.

That doesn't make sense to me ...
That settles it then :).

His original distortion test was insufficient; once the right measuring tool was applied, he could see it was defective. Sort of like how a lot of audio engineering is done, perhaps: a quick round of the obvious tests, none of the tricky ones (takes too much effort); okay, product must be OK, put it out there ..
Doesn't look like you followed the cause and effect. Here it is again:

"Even for a sample rate of 44.1 kHz, the USB isochronous mode packets have a period of 1 ms (1 kHz). In order to distribute 44.1 kHz across 1 ms intervals, one 45-sample packet is sent for every nine 44-sample packets. The tracking pulse (as we will call it here) for every 45 sample packet occurs once every 10 packets, or with a frequency of 100 Hz. Since the PLL loop filter, a so-called low pass filter, has its corner in the tens of kHz range, this 100 Hz tracking pulse goes right on through and shows up on the PLL's VCO control voltage. It appears as frequency jitter."

I bolded the key conclusion. Note that the PLL was not sufficient to filter out the 100 Hz jitter component. You said the PLL fixes these problems. How come it didn't here?

This was a fully functioning design. It captured digital data and played it from USB. Yet performance was not there, due this time to synchronous USB jitter bleeding into the DAC.
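The packet arithmetic in that quoted passage is easy to verify (a trivial check, nothing vendor-specific):

```python
# Nine 44-sample packets plus one 45-sample packet per 10 ms of USB isochronous traffic
samples_per_10_ms = 9 * 44 + 45
print(samples_per_10_ms * 100)   # 44100 samples per second
print(1000 / 10)                 # the odd 45-sample packet recurs every 10 ms, i.e. at 100 Hz
```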

You're expecting miracles if you believe that one can continuously get better than -96 dB noise/distortion; this is pure specmanship now.
First of all, I didn't say "better." I said *that* good. Plenty of products achieve that. But if you want to tell me digital is so broken that it can't do that, who am I to argue with that? :D

Like a normal car advertised to be able to do 120 mph, and expecting it to do that continuously and without fuss; yes, in exactly the right circumstances it'll get there for a short period of time, to prove a point, but for what purpose?
I explained the purpose. Can you read it and come back and argue it, instead of asking me to repeat it?

The "middle of the mud" is a lot, lot higher than -96dB, otherwise R2R would sound pretty dreadful.
Back to tape again? You are not able to discuss this topic without reference to analog?

96 dB rolls off the tongue easily, but it means something is accurate to roughly 1 part in 100,000, which in electrical engineering terms is really pushing the envelope, and people's ears are a lot more tolerant than that.
Maybe so, maybe not. Fact is that you can't prove what you just said. Whereas I can prove what I said with mathematics.

Sorry, this is really falling in the camp of those notorious Japanese amps with 0.001% distortion which sounded awful: focusing on getting one thing right at the expense of getting the overall balance of engineering right will not provide a winning formula.

Frank
So what are you saying? That additional DAC accuracy comes at the expense of something else? What is that? I thought you were telling me that it is easy to have low jitter. Now you are saying if I have low jitter something else goes wrong? What is that?

So that we don't go round and round, if you get jitter below 500 ps peak to peak, your jitter distortion product is less than one bit. Show me how there is a trade off in achieving that.
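For anyone who wants the back-of-envelope version of that claim, here is a sketch assuming the worst case of a full-scale 20 kHz sine in a 16-bit system; the error from a timing offset is roughly the signal's slew rate times that offset:

```python
import math

full_scale_peak_lsb = 2**15        # peak of a full-scale 16-bit sine, in LSBs
f = 20_000                         # worst-case audio frequency, Hz
jitter_peak = 250e-12              # 500 ps peak-to-peak => +/-250 ps peak

max_slew = 2 * math.pi * f * full_scale_peak_lsb   # LSBs per second at the zero crossing
worst_error_lsb = max_slew * jitter_peak
print(f"{worst_error_lsb:.2f} LSB")                # roughly 1 LSB; less jitter keeps it below a bit
```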
 
Frank,
did you take a look at the Tektronix primer I linked?
One aspect it did not cover about frequency jitter is that it is not linear, and can be much worse at one end of the frequency range.
As a side note, it is interesting that the distortion of many audio DACs is around 0.2% to 0.4% at -60 dBFS in measurements I have seen.

Anyway staying out of this discussion :)
Cheers
Orb
 
Stressing the DAC? Is that a technical term? Walk me through what stressing the DAC means.
Okay, for some reason I was thinking you had mixed a 24kHz and 21kHz signal, which is not the case of course; as you say, it was FM, encoding a 3kHz signal, which I was hearing. It wasn't particularly clean on my PC, so, knowing the DACs in these are very low cost, I would say the ultrasonic carrier components were causing high levels of distortion. But emulating the principle of jitter in such a gross way doesn't mean that real-world, random jitter causes audible distortion.

Doesn't look like you followed the cause and effect. Here it is again:

"Even for a sample rate of 44.1 kHz, the USB isochronous mode packets have a period of 1 ms (1 kHz). In order to distribute 44.1 kHz across 1 ms intervals, one 45-sample packet is sent for every nine 44-sample packets. The tracking pulse (as we will call it here) for every 45 sample packet occurs once every 10 packets, or with a frequency of 100 Hz. Since the PLL loop filter, a so-called low pass filter, has its corner in the tens of kHz range, this 100 Hz tracking pulse goes right on through and shows up on the PLL's VCO control voltage. It appears as frequency jitter."
Sorry, same answer: the product had a defect. He didn't measure accurately enough the first time round; he had to refine the measuring procedure, and then he could see he had a problem that needed to be fixed!

I bolded the key conclusion. Note that the PLL was not sufficient to filter out the 100 Hz jitter component. You said the PLL fixes these problems. How come it didn't here?
Because it was insufficiently engineered in a key area. Happens all the time in engineering: that's why bridges collapse, and planes fall out of the sky at times. Full understanding of all the issues was not on the table, irrespective of the person's qualifications, etc.

First of all, I didn't say "better." I said *that* good. Plenty of products achieve that. But if you want to tell me digital is so broken that it can't do that, who am I to argue with that? :D
Yes, digital typically doesn't do that well; that's why it doesn't sound right to a lot of people. If you get a digital component and apply a very rigid test procedure to it, you will get those nice figures that are quoted. But I am 100% certain that the real-life, dynamic distortion of digital componentry misses those marks all the time, because the digital elements are not isolated from the rest of the world, the other components of the system. Jitter is the great culprit in this at the moment, and I'm willing to say it participates in the addition of distortion, if you could look at the precise signal at the chip that is doing the conversion. But that apparent jitter is caused by glitching in the circuitry in all sorts of ways, and having a "better" architecture WILL NOT solve that. But better engineering with what we have will ...


I explained the purpose. Can you read it and come back and argue it, instead of asking me to repeat it?
You say that better than 96dB means inaudible. Virtually all digital gear can be coaxed to measure that well, yet a lot of people reckon that digital stinks. So what's going on?

Back to tape again? You are not able to discuss this topic without reference to analog?
Yes, I will, because many people hold R2R up as a shining example of what works, as compared to digital falling somewhat flat on its face. Tape has terrible levels of distortion compared to digital, so are all these people's ears defective?

So what are you saying? That additional DAC accuracy comes at the expense of something else? What is that? I thought you were telling me that it is easy to have low jitter. Now you are saying if I have low jitter something else goes wrong? What is that?

So that we don't go round and round, if you get jitter below 500 ps peak to peak, your jitter distortion product is less than one bit. Show me how there is a trade off in achieving that.
What I am saying is that JUST focusing on jitter, and on the architecture of the interface as the determining factor controlling the level of it, is doomed to failure. With the best architecture in the world there will still be lousy digital sound because engineers and companies will be sloppy, thinking that the "good" architecture solves it all. And on the other hand, a non-optimal architecture can be worked with by good engineering to get as good an audible result as you want.

Too tired for maths games; show me a piece of real music waveform, where a single, random jitter of 1 nsec will have an audible effect ... :):)

Frank
 
Frank, why random jitter and not signal-related jitter?
In general, random jitter is similar to white noise, but the Tektronix primer shows real-world examples of jitter, and if you follow the reviews and measurements from both Paul Miller and John Atkinson (both with credible scientific research backgrounds, which does help in such subjects), some audible traits can be attributed to certain jitter patterns.

Cheers
Orb
 
Okay, for some reason I was thinking you had mixed a 24kHz and 21kHz signal, which is not the case of course; as you say, it was FM, encoding a 3kHz signal, which I was hearing.
Nope. You got it backward. Have you studied the principles of jitter and FM modulation? Please go and study the papers and links that I showed, and then we will continue.

You say that better than 96dB means inaudible. Virtually all digital gear can be coaxed to measure that well, yet a lot of people reckon that digital stinks.
Not if jitter is above the level I mentioned.

So what's going on?
That is for another topic and discussion. This one is about the *architecture* of digital. We can discuss this topic in concrete terms because we know what the system does.

Yes, I will, because many people hold R2R up as a shining example of what works, as compared to digital falling somewhat flat on its face. Tape has terrible levels of distortion compared to digital, so are all these people's ears defective?
Was I one of those people in this thread? Nope. I can make my case without referring to analog. Someone can create a thread about tape and discuss its architecture there.

What I am saying is that JUST focusing on jitter, and on the architecture of the interface as the determining factor controlling the level of it, is doomed to failure.
Looks like the plot is lost :). This thread is not about all things digital. It is about a specific topic: is the architecture right? I made a case that the architecture is not right because it puts the source in charge of timing and that makes the design much harder to get right than if the target was in charge.

There is nothing in this discussion about superiority of digital over analog, whether all aspects of digital work right, DAC design, etc. So the focus is not per se "jitter," but jitter as the manifestation of the architectural deficiency in the manner I explained.

With the best architecture in the world there will still be lousy digital sound because engineers and companies will be sloppy, thinking that the "good" architecture solves it all.
Give me a good architecture and then I deal with the rest. Give me a bad architecture and I still have to do all that, but now you have made my work a lot harder because I also have to solve the architectural problems, as evidenced by the $2 transceiver you linked to in this day and age. I gave the example of the USB hard disk. The architecture is right there: timing doesn't impact the fidelity of bits written on your disk drive. It certainly doesn't require a $2 part just to get rid of jitter.

And on the other hand, a non-optimal architecture can be worked with by good engineering to get as good an audible result as you want.
They can. But the purpose of this thread is to understand and determine if the architecture is optimal or not. If we agree it is not, then the job is done and we can finish the thread.

Too tired for maths games; show me a piece of real music waveform, where a single, random jitter of 1 nsec will have an audible effect ... :):)

Frank
As I said, my job is to do that with math. So if you can't go there, then we are done. Your job, on the other hand, is to demonstrate inaudibility in the face of jitter. So the assignment is yours. Let us know when you have music samples with jitter induced in them using the profiles that exist in our audio equipment.
 
Amir,
Can we use hardware-generated random jitter in a "dither-like" way? As far as I understand, one of the key issues with jitter effects is its spectral distribution. Did anyone study what happens if jitter is random, not related to the audio content?
 
Amir,
Can we use hardware-generated random jitter in a "dither-like" way? As far as I understand, one of the key issues with jitter effects is its spectral distribution. Did anyone study what happens if jitter is random, not related to the audio content?
You mean can random jitter act like dither and hence have some good properties? If so, I would think so. The levels have to be right as in dither to make sure it doesn't do too much or too little.
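A rough numerical illustration of the difference (illustrative parameters, not any particular piece of hardware): sinusoidal jitter concentrates its energy into discrete sidebands, while random jitter of the same magnitude smears into a broadband, noise-like floor, which is why its spectral distribution matters so much:

```python
import numpy as np

fs, f0 = 44100, 10_000
t = np.arange(2**16) / fs
rng = np.random.default_rng(0)

def spectrum_db(x):
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    s = 20 * np.log10(s + 1e-20)
    return s - s.max()

# 1 ns peak sinusoidal jitter at 2 kHz: discrete sidebands at f0 +/- 2 kHz
x_sine_jit = np.sin(2 * np.pi * f0 * (t + 1e-9 * np.sin(2 * np.pi * 2000 * t)))
# 1 ns RMS random (uncorrelated) jitter: energy spread into a noise-like floor
x_rand_jit = np.sin(2 * np.pi * f0 * (t + 1e-9 * rng.standard_normal(len(t))))

freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 7500) & (freqs < 8500)          # window around the lower sideband at 8 kHz
print("sinusoidal jitter, 8 kHz sideband:", round(spectrum_db(x_sine_jit)[band].max(), 1), "dBc")
print("random jitter, same region       :", round(spectrum_db(x_rand_jit)[band].max(), 1), "dBc")
```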
 
As I said, my job is to do that with math. So if you can't go there, then we are done. Your job, on the other hand, is to demonstrate inaudibility in the face of jitter. So the assignment is yours. Let us know when you have music samples with jitter induced in them using the profiles that exist in our audio equipment.
If we drop back to purely evaluating the architecture as a design concept, then the case is very easy to argue. The target could never have been allowed to be the source of timing as a principle, because the architects always had to look into the future, as to where digital as a waveform transport medium would go. And one of the key aspects that would have been considered, as in any case where timing is crucial, is what happens when one source goes to multiple targets at the one time. This is instantly a nightmare, if we subscribe to your vision, because one of the targets now has to become the master clock. And how are we going to decide which one it's going to be? And all the other targets still have to make do, just like they do now ...

Regarding the "real" impact of jitter, it's very easy to see in Audacity how minute an impact it has: get your most vigorous piece of music, lovely sharp peaks just hitting the bump stops, and zoom right in on the time axis. Keep zooming until the program won't go further. The scale on the time axis now has markers at every 5 micro secs, 1,000 times greater than 5 nsecs. So 1 nsec on the screen is now impossibly small, moving the mouse 1 pixel left and right is roughly equivalent to 100nsecs! Now if I were to grab a waveform data point there and move it 1 pixel to the left or right, 100nsecs, mind you, you can see it has zero impact on the shape of the music signal at that point!

That simple experiment makes it visually easy to see that jitter is inaudible, it's merely as a mathematical construct that it has meaning ...

Frank
 
If we drop back to purely evaluating the architecture as a design concept, then the case is very easy to argue. The target could never have been allowed to be the source of timing as a principle, because the architects always had to look into the future, as to where digital as a waveform transport medium would go. And one of the key aspects that would have been considered, as in any case where timing is crucial, is what happens when one source goes to multiple targets at the one time.
What are the multiple targets? In an AVR? Then there can be one master clock driving all the DACs. If they are separate devices, then a house clock is used, as in pro equipment. Or else you don't care, as when playing music in different rooms.

This is instantly a nightmare, if we subscribe to your vision, because one of the targets now has to become the master clock. And how are we going to decide which one it's going to be? And all the other targets still have to make do, just like they do now ...
I explained above. As another example, I can have 1,000 DACs playing the same identical file on a NAS shared over Ethernet. Nothing breaks. No complexity either.

Regarding the "real" impact of jitter, it's very easy to see in Audacity how minute an impact it has: get your most vigorous piece of music, lovely sharp peaks just hitting the bump stops, and zoom right in on the time axis. Keep zooming until the program won't go further. The scale on the time axis now has markers at every 5 micro secs, 1,000 times greater than 5 nsecs. So 1 nsec on the screen is now impossibly small, moving the mouse 1 pixel left and right is roughly equivalent to 100nsecs! Now if I were to grab a waveform data point there and move it 1 pixel to the left or right, 100nsecs, mind you, you can see it has zero impact on the shape of the music signal at that point!
You are a bigger man than all of us if you can visually see non-linearities in time domain that way! I think I will sell my Audio Precision Analyzer and adopt your method. Just pulling your leg. :D

That simple experiment makes it visually easy to see that jitter is inaudible, it's merely as a mathematical construct that it has meaning ...

Frank
What else can you do with that method Frank? Can you see amplifier THD?
 
Amir,
Can we use hardware-generated random jitter in a "dither-like" way? As far as I understand, one of the key issues with jitter effects is its spectral distribution. Did anyone study what happens if jitter is random, not related to the audio content?
Heya Micro.
A basic visual of the distribution patterns of jitter effects, by type of jitter, is shown in section 4.2 of the Tektronix primer I linked; as I mentioned, this does reflect real-world experience and measurements of reviewed products.
I appreciate that, as a primer, it does not go into extensive detail for each, but I can try to find some of the papers I have that go into more detail, similar to Julian Dunn with his squarewave-1/4wave test model paper (Jitter Transfer Function Measurement and Data Jitter Test Signal), if anyone is interested.
Cheers
Orb
 
Okay, Orb, ya got me! I had a look at the link, nice rundown; but straightaway I notice two things: it doesn't mention audio, and all measurements except a couple in an example refer to nsecs, a thousand times greater than psecs. All those pictures of misbehaving waveforms are a million miles away from how an audio waveform will misbehave in the flesh; no oscilloscope will ever capture a bit of audio jitter, it will be microscopically small compared to the audio waveform.

Frank
 
Okay, Orb, ya got me! I had a look at the link, nice rundown; but straightaway I notice two things: it doesn't mention audio, and all measurements except a couple in an example refer to nsecs, a thousand times greater than psecs. All those pictures of misbehaving waveforms are a million miles away from how an audio waveform will misbehave in the flesh; no oscilloscope will ever capture a bit of audio jitter, it will be microscopically small compared to the audio waveform.

Frank
The audio waveform is not analog on the S/PDIF. It is digital. Therefore, its bandwidth is quite high. Further, we care about the zero crossings, which can easily move with minute changes in time. From the AES recommendations document on digital audio:

[Attached image: excerpt from the AES recommendations document]
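One way to see why zero crossings move so easily: with a finite rise time, any amplitude noise riding on the edge converts directly into timing error, roughly delta-t = noise voltage / slew rate. A rough illustration with assumed numbers (not figures from the AES document):

```python
swing = 0.5          # assumed S/PDIF swing at the receiver, volts
rise_time = 25e-9    # assumed 10-90% rise time, seconds
noise = 0.005        # assumed 5 mV of noise or interference on the line

slew = 0.8 * swing / rise_time       # approximate volts/second through the decision threshold
print(f"{noise / slew * 1e12:.0f} ps of zero-crossing movement")   # ~300 ps from just 5 mV
```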
 
Yes, the S/PDIF is digital, but it's allowed to have jitter, it's expected to have jitter, lots of it!! That's the whole point of the receiver circuit in the target, the PLL, etc: to remove that jitter before it reaches the critical circuitry. To quote, "The AES/EBU specification for S/PDIF allow for up to 80 nS jitter before bits are lost": the only job of the S/PDIF data stream is to get the bits across to the target, and to serve as a guidance mechanism for the precision clock in the receiver to synchronise to, it is not meant to be a precise clock in itself!

Yes, if the S/PDIF is extremely precise it makes the job of the receiving circuit easier, but it was never specified to have to be that accurate because it would make the engineering job highly difficult: translated, very expensive. No, it's up to the receiving end to do the job properly -- a car can't expect a road to be perfectly smooth, it must be engineered to handle the bumps properly, and if it doesn't, it's a lesser car ...

Frank
 
Yes, the S/PDIF is digital, but it's allowed to have jitter, it's expected to have jitter, lots of it!! That's the whole point of the receiver circuit in the target, the PLL, etc: to remove that jitter before it reaches the critical circuitry. To quote, "The AES/EBU specification for S/PDIF allow for up to 80 nS jitter before bits are lost": the only job of the S/PDIF data stream is to get the bits across to the target, and to serve as a guidance mechanism for the precision clock in the receiver to synchronise to, it is not meant to be a precise clock in itself!
That spec is only for data recovery, NOT for data fidelity. I already provided a lot of references for that. Indeed, the mistake you are making in interpreting the requirement there is what led to that Julian Dunn AES paper in 1994 stating why that is so wrong. I also showed you this from your own suggested Wolfson transceiver:

""The WM8804/5 excels in meeting and exceeding these performance metrics. Notably, the
intrinsic jitter of the WM8805 is measured at 50ps, and the jitter rejection frequency of the
onboard PLL is 100Hz. This can be directly compared with competitive S/PDIF solutions
available today, with intrinsic jitter in the region of 150ps, and jitter rejection frequency greater
than 20kHz.
"

If so much jitter is allowed, why are they talking picoseconds? If PLLs always get rid of that, why do they say they usually don't?

Yes, if the S/PDIF is extremely precise it makes the job of the receiving circuit easier, but it was never specified to have to be that accurate because it would make the engineering job highly difficult: translated, very expensive. No, it's up to the receiving end to do the job properly -- a car can't expect a road to be perfectly smooth, it must be engineered to handle the bumps properly, and if it doesn't, it's a lesser car ...

Frank
Again, you are confusing the requirement for data recovery, which has never been an issue, with reproduction fidelity.
 
You mean can random jitter act like dither and hence have some good properties? If so, I would think so. The levels have to be right as in dither to make sure it doesn't do too much or too little.

There are pix in one of my jitter threads in the technical subforum...

On "digital" jitter, the problem is that the (clock recovered from the) digital bit stream is usually used for the DAC's clock, and that's where it gets added to the audio signal.
 
That spec is only for data recovery, NOT for data fidelity. I already provided a lot of references for that. Indeed, the mistake you are making in interpreting the requirement there is what led to that Julian Dunn AES paper in 1994 stating why that is so wrong. I also showed you this from your own suggested Wolfson transceiver:

""The WM8804/5 excels in meeting and exceeding these performance metrics. Notably, the
intrinsic jitter of the WM8805 is measured at 50ps, and the jitter rejection frequency of the
onboard PLL is 100Hz. This can be directly compared with competitive S/PDIF solutions
available today, with intrinsic jitter in the region of 150ps, and jitter rejection frequency greater
than 20kHz.
"

If so much jitter is allowed, why are they talking picoseconds? If PLLs always get rid of that, why do they say they usually don't?
Sorry, you're confusing the behaviour of the S/PDIF stream, allowed to have 80 nsecs jitter, with the devices that then recover, or generate, a clock from that stream -- this is the precision clock I mentioned above -- which of course should have as low a jitter as possible, 50 psecs in this case. The competitive S/PDIF solutions are again the devices that create a CLEAN clock, just less precise than Wolfson's, with jitter of 150 psecs, from the DIRTY S/PDIF stream ...

Frank
 
Sorry, you're confusing the behaviour of the S/PDIF stream, allowed to have 80 nsecs jitter, with the devices that then recover, or generate, a clock from that stream -- this is the precision clock I mentioned above -- which of course should have as low a jitter as possible, 50 psecs in this case. The competitive S/PDIF solutions are again the devices that create a CLEAN clock, just less precise than Wolfson's, with jitter of 150 psecs, from the DIRTY S/PDIF stream ...

Frank
I am not confusing anything. You brought the notion of 80 ns into the discussion, and I showed you that it is neither here nor there, as we are not talking about data recovery. Now you are repeating the same thing but saying I am confused? The quote clearly says competing devices do NOT filter jitter below 20 kHz. So there is no statement there that says competing solutions generate "CLEAN" clocks.

Tell me again how you are getting clean clock to drive the DAC given that statement from the Wolfson.
 
Don't worry, folks, we'll nail this in the end! PLL technology is not trivial, so I will bring in a few more bits on board, to try and make this easier to get a handle on ...

First of all, let's clarify the different elements of jitter. So, from http://www.altera.com/support/devices/pll_clock/jitter/pll-jitter.html:

Jitter Specifications

The performance of the PLL is measured using several parameters. Three of the common specifications used to characterize the PLL are jitter generation, tolerance, and transfer.
Jitter Generation

Jitter generation is the measure of the intrinsic jitter produced by the PLL and is measured at its output. Jitter generation is measured by applying a reference signal with no jitter to the input of the PLL, and measuring its output jitter. Jitter generation is usually specified as a peak-to-peak period jitter value.
Jitter Tolerance

Jitter tolerance is a measure of the ability of a PLL to operate properly (i.e., remain in lock in the presence of jitter of various magnitudes at different frequencies) when jitter is applied to its reference. Jitter tolerance is usually specified using an input jitter mask.
Jitter Transfer

Jitter transfer or jitter attenuation refers to the magnitude of jitter at the output of a device for a given amount of jitter at the input of the device. Input jitter is applied at various amplitudes and frequencies, and output jitter is measured with various bandwidth settings. Since intrinsic jitter is always present, jitter attenuation will appear to be lower for low frequency input jitter signals than for high frequency ones. Jitter transfer is typically specified using a bandwidth plot.
So the 80 nsecs mentioned before was a jitter tolerance, and the 50 psecs and 150 psecs were jitter generation. But the really important one is jitter transfer, where the 100 Hz vs. 20 kHz figures come in, which apparently the DIR9001 is not so good at and the WM8804 is claimed to be substantially better at, but the specs are not clear at all on this key measurement.
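To get a feel for why that jitter-transfer corner matters, here is a single-pole approximation (a simplification; real loops are higher order and can peak near the corner) of how much incoming jitter at a given frequency survives to the recovered clock:

```python
import math

def jitter_passed_db(f_jitter, f_corner):
    # Single-pole low-pass model of PLL jitter transfer: jitter below the
    # corner is tracked (passed through), jitter above it is attenuated.
    return 20 * math.log10(1 / math.sqrt(1 + (f_jitter / f_corner) ** 2))

for fj in (100, 1_000, 10_000):
    print(f"{fj:>6} Hz incoming jitter: "
          f"{jitter_passed_db(fj, 100):6.1f} dB with a 100 Hz corner, "
          f"{jitter_passed_db(fj, 20_000):5.1f} dB with a 20 kHz corner")
```

In this simple model, a 20 kHz corner lets essentially all audio-band incoming jitter through to the DAC clock, while a 100 Hz corner rolls it off at 20 dB per decade above 100 Hz.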

So, what to do? Well, as an option, bring a jitter-cleaning chip to the rescue, which does nothing but this one function. At least one company does them, Silicon Labs: it takes in a dirty clock and spits out a clean one, and claims sub-ps jitter as well -- how about 300 fsec, or 0.3 psec, jitter? And selectable loop bandwidth as well; the Si5317 does 60 Hz - 8.4 kHz, and washes the dishes too ...

So we just, just might get a clean clock after all, with good ol' PLL ...

Frank
 
Frank, if you are going to feed us Google snippets without reading and understanding them, I won't be wasting time with you. I have no interest in answering tidbits produced by Google.

The Si5317 is a $9 part *without* the external parts it needs. It is in no way designed to be used for audio applications. This is what Si says about the part: "Highly Integrated Si5317 Jitter Attenuating Clock Filters Unwanted Noise from High-Speed Networking and Telecommunications Systems". The app notes are about SONET, not S/PDIF. So clearly you don't appreciate what these parts do and the nature of the problem.

No one has said that you can't clean up jitter. You can. But it costs you complexity and part cost. The entire point of the thread has been that we would not be worrying about jitter nearly as much if it didn't rely on source input so much. To the extent you keep throwing these expensive parts out there as solutions, then you have helped prove the case.

I highly suggest you read Don's articles instead of Googling jitter.
 
