I hope you are not getting tired of jitter talk, because I have more info to share.
One of the common arguments made against jitter mattering is this: "the data is buffered and the clock regenerated in the DAC, so jitter won't be there." This makes all the sense in the world. Once we capture the data and then push it out at will, there shouldn't be a problem. Well, there is a problem. A serious one. Buffering and clock regeneration do not, by themselves, deal with jitter. I have explained this in words many times, but this time I am bringing in some specific data to hopefully put this myth to bed (yeah, wishful thinking).
Introduction
The way a clock is "regenerated" is to have a local oscillator (clock) whose frequency we can adjust to eventually match and track the incoming digital stream. As you may know, S/PDIF is a serial digital connection with clock and data intermixed. Using a circuit called a Phase Locked Loop (or PLL for short), we are able to extract a clock that is cleaner than the incoming one. This clean-up allows us to capture the digital samples reliably.
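To make this concrete, below is a toy simulation in Python of the simplest possible (first-order) PLL. This is my own illustrative sketch, not how any real S/PDIF receiver is built; the 8 kHz loop bandwidth and the jitter amplitudes are numbers I picked out of thin air. The regenerated clock phase chases the incoming phase, so slow timing wander is tracked faithfully while fast jitter gets smoothed out:

```python
import cmath
import math

fs = 1e6                        # simulation rate (Hz)
dt = 1 / fs
bw = 2 * math.pi * 8e3          # ~8 kHz loop bandwidth (rad/s), my assumption

N = int(0.02 * fs)              # simulate 20 ms
theta_out = 0.0                 # phase of the regenerated clock
out = []
for n in range(N):
    t = n * dt
    # Incoming clock phase: slow 1 kHz wander plus fast 200 kHz jitter
    theta_in = (1e-3 * math.sin(2 * math.pi * 1e3 * t)
                + 1e-3 * math.sin(2 * math.pi * 200e3 * t))
    # First-order loop: nudge our phase toward the incoming phase
    theta_out += bw * dt * (theta_in - theta_out)
    out.append(theta_out)

def amp_at(f, x):
    """Amplitude of the f-Hz component of x via a single-bin DFT."""
    acc = sum(x[k] * cmath.exp(-2j * math.pi * f * k * dt) for k in range(len(x)))
    return 2 * abs(acc) / len(x)

print(amp_at(1e3, out))    # ~1.0e-3: the 1 kHz (audio band) jitter sails right through
print(amp_at(200e3, out))  # ~4.4e-5: the 200 kHz jitter is cut by ~27 dB
```

Notice the trap here: "tracking the incoming clock" is exactly what lets low-frequency jitter ride through the regeneration untouched.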
Our job is not done, however. We not only need to extract the data samples but also accurately match the incoming data rate (clock regeneration) without letting its timing variations get to our DAC. Not doing so causes jitter to appear at the output of the DAC.
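How much does that residual jitter hurt? A quick back-of-the-envelope number: for a full-scale sine at frequency f converted with uncorrelated RMS clock jitter tj, the classic approximation is SNR = -20*log10(2*pi*f*tj). The jitter values below are purely illustrative, not measurements of any particular device:

```python
import math

def jitter_limited_snr_db(f_hz, tj_rms_s):
    """Best-case SNR for a full-scale sine at f_hz with tj_rms_s of random clock jitter."""
    return -20 * math.log10(2 * math.pi * f_hz * tj_rms_s)

print(jitter_limited_snr_db(10e3, 1e-9))     # ~84 dB: 1 ns of jitter is below 16-bit quality
print(jitter_limited_snr_db(10e3, 100e-12))  # ~104 dB: 100 ps starts to do 16 bits justice
```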
S/PDIF Receiver Clock Regeneration
Let's look at the Crystal Semiconductor CS8416. From its data sheet (http://www.cirrus.com/en/pubs/proDatasheet/CS8416_F3.pdf), we see this measurement of its rejection of incoming jitter as it regenerates the clock using its PLL:
I have annotated their graph to make it easier to understand.
The measurement shows how much the incoming jitter is reduced in amplitude (vertical scale) at each frequency of jitter (horizontal axis).
Note, as I have indicated on the graph, that there is absolutely no reduction of incoming jitter below 8 kHz!!! Even worse, there is actual *amplification* of jitter just before 8 kHz, to the tune of 2 dB. The "peaking" is due to the way the PLL loop filter is designed; why it occurs is outside the scope of this conversation.
Even at 20 kHz, we only have a modest 6 dB reduction of incoming jitter.
The serious reduction in jitter, therefore, is from 20 kHz and up. Why? Because those are the frequencies that would keep us from extracting the incoming *data* -- the PCM audio samples. That is the "mission critical" application in IT terms. If the receiver can't decode the incoming bitstream, your system breaks, and we can't let that happen. So high-frequency jitter is eliminated, and with it a ton of ultrasonic noise and interference.
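For those curious where that shape comes from, here is a sketch of the textbook second-order PLL jitter transfer function. The CS8416's actual loop filter is surely more involved; the damping factor and corner frequency below are values I picked so the curve roughly mimics the datasheet graph:

```python
import math

zeta = 0.707   # damping factor; around 0.7 gives roughly +2 dB of jitter peaking
fn = 7e3       # loop natural frequency (Hz), picked to mimic the datasheet corner
wn = 2 * math.pi * fn

def jitter_gain_db(f):
    """Magnitude of H(s) = (2*zeta*wn*s + wn^2)/(s^2 + 2*zeta*wn*s + wn^2) in dB.
    Positive values mean the PLL *amplifies* incoming jitter at that frequency."""
    s = complex(0, 2 * math.pi * f)
    h = (2 * zeta * wn * s + wn ** 2) / (s ** 2 + 2 * zeta * wn * s + wn ** 2)
    return 20 * math.log10(abs(h))

for f in [100, 1e3, 5e3, 8e3, 20e3, 100e3]:
    print(f"{f / 1e3:6.1f} kHz: {jitter_gain_db(f):+6.2f} dB")
```

With these values you get about +2 dB of peaking around 5 kHz, roughly unity gain near 10 kHz, and only about -6 dB at 20 kHz -- the same general shape as the annotated datasheet graph.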
Measurements
You might say this is all theory; who knows if it is true in practice. And that would be a fair question. Fortunately, I happen to have data on this.
Here are the measurements of jitter reduction -- i.e., the same type of graph as above -- for all the devices that I tested for my Widescreen Review Magazine article:
Look at the mass-market AVR performance when it comes to jitter reduction. Notice how there is no attenuation, or even slight amplification, of jitter, just as the data sheet above showed. I only show the response up to 10 kHz, so the sharp attenuation at higher frequencies is not visible in my measurements. What is there is, of course, what we care about: jitter reduction (or the lack of it) in the *audible* frequency range.
Summary
We see that the common argument -- that audio-band jitter is eliminated because we have buffering and clock regeneration -- simply does not hold water. The common/cheap implementation only gets rid of high-frequency jitter so that it can reliably extract the digital audio samples. It does little to filter out incoming jitter in the audio band, which is what we care about.
Of course, the problem can be solved, using skilled designers and budgets that are measured in tens of dollars as opposed to single digits.