Audible Jitter/amirm vs Ethan Winer

Beyond noise, you don't know if what you heard was true to the original. It could have been super distorted and you wouldn't know it. The only way to make sure you are linear down to the last bit is to measure it.

I'm sure it was super distorted! A sine wave that occupies just the lowest one or two bits is by definition totally distorted.

I don't know why you keep mentioning 100 dB. I have explained why that is not the right way to look at this.

All I have to go on is the graph I showed from Ken Pohlmann's book, repeated here for your convenience:

[Image: jitter.gif]


There's also one from John Watkinson's Art of Digital Audio, attached below.

Let's review what the paper said:

"It was shown that the detection threshold for random jitter was several hundreds ns for well-trained listeners under their preferable listening conditions."

Now let's look at the AP graph I posted, which was for a periodic jitter of just 7 nanoseconds, not the "several hundreds" mentioned in the article.

If anything that confirms my point, that even when not way down at -100 dB, such artifacts are still not audible. I got the lower than -100 dB figure from Pohlmann's graphs. I do understand the difference between individual components that soft, versus the sum of all components which is of course much louder. So I probably should have been clearer when I said the artifacts from jitter are 100+ dB down.

In practice, 60 or 70 dB is soft enough for artifacts to be inaudible even under the most favorable conditions. When I tested this I made a recording of a 100 Hz tone at nearly full scale, then added a 3 kHz sine wave that pulsed on and off at various levels below it. These two frequencies are far enough apart that masking is not a factor, and 3 kHz is where our ears are most sensitive. So this was a worst-case test favoring audibility. When the 3 kHz sine wave was 40 dB below the 100 Hz tone I could hear it start and stop. At 60 dB below the 100 Hz tone I could just barely hear it with the playback very loud. At -80 dB I could not hear it at any playback level.
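For anyone who wants to try this themselves, here is a minimal sketch of how such a test file could be generated (assuming Python with numpy and scipy; the file name, durations, and exact levels are illustrative, not the file I actually used):

```python
# Sketch of the two-tone audibility test described above: a 100 Hz tone
# near full scale, plus a 3 kHz sine pulsing on and off at a chosen
# level below it. Levels, timing, and file name are illustrative.
import numpy as np
from scipy.io import wavfile

SR = 44100                                    # sample rate, Hz
t = np.arange(SR * 10) / SR                   # 10 seconds

carrier = 0.9 * np.sin(2 * np.pi * 100 * t)   # 100 Hz, near full scale
probe_db = -60                                # probe level vs. carrier (try -40, -60, -80)
probe = 0.9 * 10 ** (probe_db / 20) * np.sin(2 * np.pi * 3000 * t)

gate = (np.floor(t) % 2 == 0).astype(float)   # 3 kHz tone: 1 s on, 1 s off
ramp = np.hanning(int(0.02 * SR))             # ~20 ms smoothing so the gating
gate = np.convolve(gate, ramp / ramp.sum(), mode="same")   # itself doesn't click

mix = carrier + probe * gate
wavfile.write("two_tone_test.wav", SR, (mix * 32767).astype(np.int16))
```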

So even in your AP example of jitter sidebands at -80, it makes sense to me that nobody could hear it. Especially at those high and nearby frequencies. Indeed, in the example for my AES video where I played that nasty noise below a gentle passage in my cello concerto, once the noise was 40 dB softer than the music I could no longer hear it at a normal (or any) playback level. BTW, is that AP example at -80 real or simulated? If real, what device did you measure?

Now let's again look at real music capture using my Audio Precision:

What am I looking at?

Per the above, it sounds like you want a situation where, on purpose, jitter would not be audible.

I might not have been clear enough. Forget the -60 part unless you can show that jitter is ever that high in functioning gear. All I am asking for is an example where the amount of jitter from a cheap CD player or other consumer digital device is ever audible.

Show me where high frequency content is at 0 dB and I buy your arguments.

Then we're getting closer. Didn't you already acknowledge in your Post #8 that jitter is a fixed level below the signal, rather than steady, as noise is?

--Ethan
 

Attachments: [Jitter Noise, Joh.jpg]
BTW, is that AP example at -80 real or simulated? If real, what device did you measure?

Just for reference, this page states that the processor in the old SoundBlaster Live card has jitter around 110 ps:

http://ixbtlabs.com/articles2/multimedia/creative-x-fi.html

Creative Labs article said:
For reference, the 10Kx processors have jitter in the neighborhood of 110pSecs.

I have no idea if that's true, but if it is, that means a typical $25 sound card has 1/64th the jitter of whatever device is shown in your -80 dB example.
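For the arithmetic: the AP example discussed earlier was said to be around 7 ns of periodic jitter, so

```latex
\frac{7\ \text{ns}}{110\ \text{ps}} = \frac{7000\ \text{ps}}{110\ \text{ps}} \approx 64
```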

I'll continue to search for jitter specs for various sound cards, though so far it's been tough to find anything concrete!

--Ethan
 
I'm sure it was super distorted! A sine wave that occupies just the lowest one or two bits is by definition totally distorted.
I hope the larger point is not missed, Ethan. Those low-order bits are what represent the finer detail in the music. Without them, we might as well say that CD is overkill and we should have had a 14-bit system at 44.1 kHz.

All I have to go on is the graph I showed from Ken Pohlmann's book, repeated here for your convenience:
The graph is showing the effect of a 2 ns jitter. The paper said hundreds of nanoseconds is inaudible. So other than proving the same point I made, that random jitter simply raises the noise floor, in what way does it prove your point?

It also says the system signal-to-noise ratio is now nearly 80 dB in the presence of jitter noise, not 100 dB. I personally don't feel comfortable with a system that has such a low signal-to-noise ratio.
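This effect is easy to demonstrate numerically. Below is a minimal sketch (assuming Python with numpy; the converter is idealized, with only the sample instants perturbed) showing how 2 ns RMS of random sampling-time jitter lifts the spectral noise floor of an otherwise perfect 10 kHz sine:

```python
# Sketch: random sampling-time jitter on an ideal 10 kHz sine raises
# the broadband noise floor. Only the sample instants are perturbed
# (2 ns RMS, matching the figure discussed above).
import numpy as np

SR, N, f = 44100, 1 << 16, 10_000.0
rng = np.random.default_rng(0)

t_ideal = np.arange(N) / SR
t_jitter = t_ideal + rng.normal(0.0, 2e-9, N)   # 2 ns RMS timing error

clean = np.sin(2 * np.pi * f * t_ideal)         # perfectly clocked samples
dirty = np.sin(2 * np.pi * f * t_jitter)        # jitter-clocked samples

win = np.hanning(N)
def floor_db(x):
    s = np.abs(np.fft.rfft(x * win))
    return 20 * np.log10(np.median(s) / s.max())  # rough per-bin floor vs. tone

print(f"clean floor:    {floor_db(clean):7.1f} dB")
print(f"jittered floor: {floor_db(dirty):7.1f} dB")
```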

There's also one from John Watkinson's Art of Digital Audio, attached below.
That is also for random jitter, not for periodic or correlated jitter, which are the other two we worry about. At the outset I mentioned that the ear is not as sensitive to broadband noise as it is to noise that comes and goes and/or is related to music. So neither one of these references is helpful in this regard.

I have already addressed random jitter, showing how it is quite different from periodic or program-related jitter.

If anything that confirms my point, that even when not way down at -100 dB, such artifacts are still not audible. I got the lower than -100 dB figure from Pohlmann's graphs. I do understand the difference between individual components that soft, versus the sum of all components which is of course much louder. So I probably should have been clearer when I said the artifacts from jitter are 100+ dB down.
I showed you graphs where the noise spikes were at -80 dB. If you want to stick to some graph, why not consider those also? I also showed you that music could be at that level. Note that there, I used real content showing that its high frequency level dips to the same level as the jitter.

In practice, 60 or 70 dB is soft enough for artifacts to be inaudible even under the most favorable conditions.
Really? Here is another graph of real music that I have saved on my server:

[Image: Zoomwave.JPG]


You want to tell me that that decay is illegal and can't occur in real music? I assume not :). As that signal decays into nothing, we hear the room reverb and other cues which give music its life and character. If the DAC is non-linear or the jitter is too high, you get an abrupt and distorted finish as that signal decays.

Sure, when the signal is loud, such as in your example, none of this matters. We are not disputing that. Digital is king when it comes to loud signals; there it achieves a level of perfection that kills analog. Invert the equation, though, and the tables are turned. You have to look at how well digital reproduces those fine details if you want luscious sound that can replace analog.

Again, you can't pick test tones or even sample music tracks to prove your point. Your point must be true of all music or it is not valid. We can't say this noise is so many dB below the music unless you can show that music can never be fainter than that. As I have shown, music can be, and very often is, that faint.

When I tested this I made a recording of a 100 Hz tone at nearly full scale, then added a 3 kHz sine wave that pulsed on and off at various levels below it. These two frequencies are far enough apart that masking is not a factor, and 3 kHz is where our ears are most sensitive. So this was a worst-case test favoring audibility.
It is not. It is the best case, per the above. We are not interested in whether we can hear faint noise when the music is blasting our woofers and ears. We are in agreement there. Where we are talking past each other is that you seem to be assuming that music is always that loud and that it only has a single component, like the 100 Hz tone. In reality, music has high-frequency detail whose level is very low relative to the low and mid bands. Yet that faint level conveys how "bright" the music is. Mess with those high frequencies even a little, and the tone changes.

That is a reason why compressed music can sound "bright." The increased quantization noise is not audible in the way you imagine it above, but instead, spreads into the high frequency bins and causes that increased edginess.

So even in your AP example of jitter sidebands at -80, it makes sense to me that nobody could hear it.
Would you say that is true if the signal is at -60 dB?

BTW, is that AP example at -80 real or simulated? If real, what device did you measure?
I did not measure the jitter diagram; it comes from other sites. The AP graphs are real measurements; for simulations, we wouldn't need that device. My measurements were done using a Blu-ray player feeding my AP.

Then we're getting closer. Didn't you already acknowledge in your Post #8 that jitter is a fixed level below the signal, rather than steady, as noise is?

--Ethan
I am sorry but I don't understand the question. I have said that jitter has infinite profiles, so it is not any one thing anyway. But if you want to put it in buckets, there are three kinds:

1. Random jitter. This raises the noise floor and reduces the dynamic range of the system. At a high enough level it reduces the fidelity of the system, but it is not as bothersome as the other forms below. All of your testing and the papers you have cited fall into this category.

2. Periodic. This is one or more pure tones that modulate the signal timing. This could be the USB frame buffer timing, power supply noise, a front panel high-voltage oscillator, something video-clock related (e.g. as in HDMI, which has video as the master timing), etc. This can be more audible, as each one of these tones modulates the music and creates sidebands that could fall within the music levels, especially at high frequencies (see the simulation sketch at the end of this post). I have shown examples of this type of jitter and consider it a potentially audible problem (depending on frequency).

3. Program related. This is jitter that depends on the signal itself! A good example is cable-induced jitter. A poor digital audio interconnect changes the shape of the pulses, causing the times at which they are accepted by the receiver to change. This, again, is from Julian's book:

[Image: 958465050_SzmU5-O.png]


Unfortunately, those pulses change as the music changes, modifying the waveform as seen by the receiver and hence its timing. There, jitter comes and goes with the music, which can be most offensive to listeners. It can also be very tricky to test for, as you need to find conditions that aggravate it.
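To make the periodic case concrete, here is a minimal simulation sketch (assuming Python with numpy; the 10 ns jitter amplitude and the frequencies are illustrative). A single jitter tone phase-modulates the signal and produces the expected sidebands at f_signal ± f_jitter:

```python
# Sketch: periodic (sinusoidal) jitter phase-modulates a sampled sine
# and creates sidebands at f_signal +/- f_jitter. With ~10 ns of jitter,
# the sidebands land in the -70 to -80 dB region on this toy model.
import numpy as np

SR, N = 192_000, 1 << 17
t = np.arange(N) / SR

f_sig, f_jit = 5_000.0, 2_000.0      # signal tone and jitter tone
a_jit = 10e-9                        # 10 ns peak sinusoidal timing error

t_err = a_jit * np.sin(2 * np.pi * f_jit * t)
x = np.sin(2 * np.pi * f_sig * (t + t_err))    # what a jittered clock samples

spec = np.abs(np.fft.rfft(x * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / SR)

for f0 in (f_sig - f_jit, f_sig, f_sig + f_jit):   # expect peaks at 3, 5, 7 kHz
    k = np.argmin(np.abs(freqs - f0))
    level = 20 * np.log10(spec[k - 2:k + 3].max() / spec.max())
    print(f"{f0 / 1000:.0f} kHz: {level:7.1f} dB")
```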
 
Those low-order bits are what represent the finer detail in the music. Without them, we might as well say that CD is overkill and we should have had a 14-bit system at 44.1 kHz.

I've heard 14-bit audio and it sounds fine to me. At least for music recorded at sensible levels. I'd never use less than 16 bits for a live classical music recording, and that's the one place where using 24 bits makes sense. Even 12 bits is pretty good for pop music when the music is already normalized. But that's all beside the point.

The graph is showing the effect of a 2 ns jitter.

But is it a real measurement, or a simulation of what could happen if the jitter were that high? If it's a real measurement, what specific piece of gear is it showing?

Do we know how much jitter is typical in consumer level CD players and sound cards etc? So far all I could find is that spec for SoundBlaster cards that cited 110 picoseconds.

It also says the system signal-to-noise ratio is now nearly 80 dB in the presence of jitter noise, not 100 dB. I personally don't feel comfortable with a system that has such a low signal-to-noise ratio.

That's not relevant for sidebands at very high frequencies, which is why A-weighting is used to correlate noise with its actual audibility. Further, as you yourself pointed out, no real music contains a 10 kHz tone at 0 dBFS!

At the outset I mentioned that the ear is not as sensitive to broadband noise as it is to noise that comes and goes and/or is related to music.

I don't know why distortion would sound worse, or be more noticeable, than noise, assuming similar spectra. If anything, distortion is masked by the signal, so it stops when the signal stops and thus seems less likely to be noticed. Again, I'm not talking about your past example of a loud blower fan your ears eventually ignore. I'm talking about stuff 60 or 80 or 100 dB below the music.

I showed you graphs where the noise spikes were at -80 dB. If you want to stick to some graph, why not consider those also?

I'll gladly consider that graph if those -80 dB sidebands represent what you could possibly get from normally functioning audio gear playing real music, versus a 10 kHz test tone at full scale.

I also showed you that music could be at that level. Note that there, I used real content showing that its high frequency level dips to the same level as the jitter.

Okay, but that still doesn't mean it's ever audible! People often use the example of reverb tails that decay as an example of why 24-bit audio is "better" than 16 bits. I have tested this several times, and was never able to hear the "fizz" people describe as a reverb tail decays unless I cranked the volume much louder than normal during that decay. So yeah, music and decays can have very soft components. That doesn't mean the soft stuff is ever audible. This is why I keep returning to what is practical rather than theoretical.

Digital is king when it comes to loud signals; there it achieves a level of perfection that kills analog. Invert the equation, though, and the tables are turned. You have to look at how well digital reproduces those fine details if you want luscious sound that can replace analog.

What analog tape or LP comes within even 20 dB of the low noise floor of 16-bit digital? Unless by "analog" you mean a direct console feed of the microphones before being recorded.

Again, you can't pick test tones or even sample music tracks to prove your point. Your point must be true of all music or it is not valid.

I agree completely! This is why I ask again and again for someone to provide an audio example showing that artifacts at a level typical for jitter are ever audible. As soon as someone does this I'll change my opinion in an instant. Not a graph of what could happen with broken or poorly designed gear having jitter 50 times more than usual. But actual music containing artifacts of the nature and level typical for jitter.

Would you say that is true if the signal is at -60 dB?

For those high frequencies? Probably! Now, at -40 I agree it could be a problem. But what gear has jitter artifacts at -80 let alone -60?

My measurements were done using a Blu-ray player feeding my AP.

Which specific measurements? I still don't understand what is being shown in that red and turquoise graph in your Post #20.

I have said that jitter has infinite profiles, so it is not any one thing anyway. But if you want to put it in buckets, there are three kinds:

This is what I keep asking for: an example using any type of jitter, with any spectrum. You can pick whichever "bucket" you feel best shows jitter as being audible. Pick the worst case you can find, as long as it's representative of actual jitter levels.

--Ethan
 
I've heard 14-bit audio and it sounds fine to me. At least for music recorded at sensible levels. I'd never use less than 16 bits for a live classical music recording, and that's the one place where using 24 bits makes sense. Even 12 bits is pretty good for pop music when the music is already normalized. But that's all beside the point.
It is not beside the point, Ethan. It is the point! Once more, we are not discussing what is good for the general public, but rather what the expectations should be for high-end audiophiles. And we are not talking about normalized pop music, but rather well-recorded music of all types.

Okay, but that still doesn't mean it's ever audible! People often use the example of reverb tails that decay as an example of why 24-bit audio is "better" than 16 bits. I have tested this several times, and was never able to hear the "fizz" people describe as a reverb tail decays unless I cranked the volume much louder than normal during that decay. So yeah, music and decays can have very soft components. That doesn't mean the soft stuff is ever audible. This is why I keep returning to what is practical rather than theoretical.
What you just described by turning up the volume was not theoretical. You turned up the volume, and heard the limitations of your system at 16 bits of resolution! That is not theoretical at all. What you say other people describe to you is precisely what I have been trying to explain.

Your assumption, then, is that we don't listen to soft passages at elevated levels, but we do. I was just in my car driving and listening to my top songs, about 1000+ tracks on a flash drive. The player just moves from folder to folder as it plays all the songs from each album. I was enjoying some wonderful piano music when the track changed to another album at such an elevated level that I thought my doors were going to fall off! :D As you can imagine, then, I was listening to the earlier track at increased volume, which you accept can bring out distortion.

There should not be any requirement for the user to listen to loud music or at low levels for your theory of "jitter can't be heard" to be true. If digital is perfect, then it needs to be perfect all the time and not fall apart if I turn up the volume during soft passages.

You don't control what music people listen to and at what level. Why not, then, pay attention to the things that compromise system performance in all scenarios?
 
Do we know how much jitter is typical in consumer level CD players and sound cards etc? So far all I could find is that spec for SoundBlaster cards that cited 110 picoseconds.
110 picoseconds for a SoundBlaster? If it were 10 times better I would give them a medal! Data like that is hard to come by in general. Here is the only piece I have from a UK audio magazine:

"In the Feb 2009 edition of the Hi-fi News magazine Paul Miller measured the following jitter results for a few A/V amplifiers:

Denon AVR-3803A
---------------
SPDIF: 560psec
HDMI: 3700psec

Onkyo TX-NR906
---------------
SPDIF: 470psec
HDMI: 3860psec

Pioneer SC-LX81
---------------
SPDIF: 37psec
HDMI: 50psec

Yamaha RX-V3900
---------------
SPDIF: 183psec
HDMI: 7660psec"

So if you are using HDMI, your jitter is about 10X what it should be in most cases.

For those who are math-challenged, 1000 picoseconds (ps) = 1 nanosecond (ns). So the Yamaha above has about 7.7 ns of jitter, similar to what was shown here:
[Image: figure_1_Periodic_Jitter_10khz.jpg]


And before you ask again, Ethan, that is a measurement, not a simulation :). This means the Yamaha reduces your signal-to-noise ratio from 96 dB for 16-bit audio to 80 dB, or 13 bits of resolution. You decide if you want to pay to get 16 bits of quality or 13. :D
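For reference, the "80 dB or 13 bits" arithmetic is the standard ideal-quantization relation between SNR and bit depth:

```latex
\mathrm{SNR} \approx 6.02\,N + 1.76\ \text{dB}
\quad\Rightarrow\quad
N \approx \frac{80 - 1.76}{6.02} \approx 13\ \text{bits}
```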
 
What you just described by turning up the volume was not theoretical. You turned up the volume, and heard the limitations of your system at 16 bits of resolution! That is not theoretical at all. What you say other people describe to you is precisely what I have been trying to explain.

Okay, maybe it's possible to hear jitter and other artifacts if you turn the volume way up during a reverb tail or song fade-out. But the volume has to be raised a lot - much more than the difference between a soft passage and a loud track on your thumb drive. I'm talking about 40 dB or more gain, which would blow out your speakers when the normal parts of the music play. And at that point jitter is the least of one's worries, after hiss and maybe hum.

From my perspective, if you have to raise the volume 40 to 60 dB beyond normal to hear an artifact, then it's a curiosity but not a real problem. And certainly not a justification for analog fans to diss digital. Again, I don't disagree that designers (and consumers) should aim for the highest performance possible. My interest is only what's practical, and consumers paying $2,000 more for a device that promises lower jitter is never practical IMO.

--Ethan
 
Here is the only piece I have from a UK audio magazine:

Wow, I'm sure glad I use HDMI only for video and not for audio!

So if you are using HDMI, your jitter is about 10X what it should be in most cases.

This is not an inherent problem with digital audio per se, but it sure is an eye-opener. Is this insurmountable and due to a limitation of HDMI? Or is it just sloppy engineering?

I'd still like to see a blind test with a dozen skilled listeners to know if anyone can ever actually hear that.

--Ethan
 
Is this insurmountable and due to a limitation of HDMI? Or is it just sloppy engineering?
As you can see from the data presented, HDMI can be done right. But it is more challenging than an audio-only interface. HDMI slaves the audio clock to video. Now you have a high-frequency clock that you need to lock to, in addition to having a ton more circuits running, creating their own jitter components due to crosstalk and other such factors.

I'd still like to see a blind test with a dozen skilled listeners to know if anyone can ever actually hear that.

--Ethan
Until then, you could avoid the concern altogether and simply buy equipment with less than 250 ps worth of jitter :D. For 16-bit samples, that is....
 
Okay, maybe it's possible to hear jitter and other artifacts if you turn the volume way up during a reverb tail or song fade-out. But the volume has to be raised a lot - much more than the difference between a soft passage and a loud track on your thumb drive. I'm talking about 40 dB or more gain, which would blow out your speakers when the normal parts of the music play.
I gave an example where there was no "normal" part of the music. The whole album was softly recorded. It was the next album that was recorded closer to 0 dB.

Let's also agree that while you may have needed 40 dB to hear those artifacts, others may need much less.

And at that point jitter is the least of one's worries, after hiss and maybe hum.
I hear such distortions well before I hear hiss or hum. We are talking about well recorded music and high performance audio systems here.

From my perspective, if you have to raise the volume 40 to 60 dB beyond normal to hear an artifact, then it's a curiosity but not a real problem. And certainly not a justification for analog fans to diss digital. Again, I don't disagree that designers (and consumers) should aim for the highest performance possible. My interest is only what's practical, and consumers paying $2,000 more for a device that promises lower jitter is never practical IMO.
$2K is very practical for our readership. If that is what it takes to be assured of the best digital audio reproduction, it is money well spent. If you said $200K, you would have a point, but $2K is not much in the context of a system that would deploy such a good DAC. But really, we can't be making economic arguments. I don't own a Ferrari, but I can't say it is a bad car because it is expensive. People don't need you and me to give them that kind of lesson :).
 
Until then, you could avoid the concern altogether and simply buy equipment with less than 250 ps worth of jitter

Well, even the worst performer in the bunch was 13.7 times better when using the SPDIF output than your worst-case example of the Yamaha using HDMI. So the better lesson is to not use HDMI for audio. But your point is taken. I don't use HDMI for audio and wasn't aware how bad it could be. I'm still not convinced that such a level of jitter would be noticed in a blind test. Too bad you live so far away, because I'd like to test this with you together in person. :D

--Ethan
 
Jitter Audibility Summation

Thanks Ethan for the fun debate. While I have argued these points on many other occasions, the one-on-one nature made it much easier to convey the information.

As I said at the outset, I don’t believe that it is possible to prove that jitter is audible in all conditions and for all people. Indeed, quite the opposite is true: for most people, most of the time, jitter and other ills of digital audio are not a concern.

Audibility of jitter is a tough topic because, as a general rule, people don’t know what it sounds like. So often, that is the first reason put forth for why it doesn’t matter: “I have tested it and I don’t hear it.” To which I ask how they tested it, and it becomes clear that they don’t know how to test for it.

We are naturally trained to hear analog-domain distortions. Change the bass level and everyone can tell the difference. Change the volume, and they can tell that too. But what the heck does it sound like when the two lowest bits in a 16-bit audio sample are not reproduced well? Does the frequency response change? No. Does the level change? No.

While some people are born with the ability to sense jitter artifacts, others need to be taught what it sounds like before they can spot it.

The way we train listeners to hear digital-domain artifacts is to start with gross levels. Play that enough times and the ear and brain learn what it is, and then a magical thing happens: you can hear it even when its level is reduced substantially! It is non-intuitive that way, but in practice it works quite well.

As Ethan noted, it sure would be nice to have a jitter box with a knob you can adjust so that the above training can occur. Alas, such a box doesn’t readily exist because jitter has infinite variations. And even if it existed, it would not be something people everywhere would own.

That brings me to the subject of using compressed music as a way to learn to hear such artifacts. As Ethan's reaction indicated, at first blush it seems odd that hearing audio compression artifacts helps you with hearing things like jitter. But I can speak from personal experience in saying that it completely opened up my ability to not only detect digital audio distortions but also many other subtle audio differences, even under the constraints of a double-blind environment. I think it has something to do with learning to find things that are wrong or different, rather than just listening to music as we all instinctively do (even when we think we are auditioning something).

There is another notable thing about compressed music: its distortion is correlated to the content. For example, if you have a transient, it gets distorted with grunge in front of it (we call that pre-echo). Feed it a steady-state signal and that doesn’t happen. Correlated distortion can be quite annoying. The brain tends to be bothered more by distortion that comes and goes than by one that stays the same. This is why I said the paper should have used this type of jitter, not one that is akin to noise.

The second point often made is that today’s digital gear is so good that jitter, and for that matter any other digital distortion, is immaterial. To which I ask, “How do we know?” The paper makes that claim without a single bit of backup. It doesn’t even show the measurement for one device, let alone all devices.

We know the mathematics of jitter tells us that if we want to resolve 16 bits and allow a bandwidth of 20 kHz, jitter must be exceedingly small: on the order of 250 ps (250 trillionths of a second). As small as that is, if you increase the bandwidth (e.g. as in higher-sampling-rate audio) or the bit depth (e.g. 24 bits instead of 16), the number has to be even lower! All the gear that I listed from the Hi-Fi magazine claims to play 24-bit, 192 kHz audio. Yet in most cases their jitter alone dictates that they can barely resolve 16-bit audio at 44.1 kHz sampling, let alone anything higher. Most mass-market gear is linear to 14 bits or so, after which it starts to distort digital samples.
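One common back-of-the-envelope version of that math (a sketch: assume a full-scale sine x(t) = A sin(2πft) at the highest frequency f, and require the worst-case error from a timing slip t_j to stay under one LSB of an N-bit converter):

```latex
t_j \;<\; \frac{1\ \mathrm{LSB}}{\max\!\left|\frac{dx}{dt}\right|}
    \;=\; \frac{2A/2^{N}}{2\pi f A}
    \;=\; \frac{1}{2^{N}\,\pi f}
    \;=\; \frac{1}{2^{16}\,\pi \cdot 20\,000\ \mathrm{Hz}}
    \;\approx\; 243\ \mathrm{ps}
```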

Of note, the 250 ps figure assumes sinusoidal jitter, not because that is the type of jitter we have in audio equipment, but because it is easier to do the math that way. Jitter can have very different profiles, and as such, its level may have to be even lower than this simplified math shows.

The above discussion also does away with another objection to the importance of jitter: that somehow, sampling theory says digital is perfect. Sampling theory does indeed say that, but it assumes an idealized system that is free of distortions such as the timing of samples varying by one or more factors.

So what does jitter, or for that matter, other digital distortion sound like? The first clue is that you will lose the sense of space around notes. This happens because the sense of space comes from reverberations, which are far lower in level than the main signal (the sound weakens as it bounces around the space). Because their levels are lower, they rely far more on the few low-order bits of the 16-bit audio samples being accurate to sound good, or to be heard at all.

The second symptom is accentuated high frequencies, which can sound harsher. This occurs for two reasons. One, jitter damages high-frequency signals more than low-frequency ones. A low-frequency signal doesn’t change much over time, so if the timing varies, it is less impacted. A high-frequency signal jumps around more, which makes it more important to reproduce samples exactly when they are supposed to occur. Think of how sensitive your car's steering is when driving very fast as opposed to barely moving.

The second reason high frequencies are impacted is that jitter distortion created in lower bands manifests itself at higher frequencies. If a 2 kHz jitter is applied to a 5 kHz signal, its artifacts show up at 3 kHz and 7 kHz. The latter lands on top of fainter high-frequency content in your music, causing its energy to increase even if it is not directly audible. That increased energy makes the sound brighter.

So the trick to hearing jitter more easily is to combine the above two. Find a signal that is quiet (to get rid of the masking effect), has ambiance, and has some sharpness to it. A single guitar pluck that decays to nothing is a good one, as is a lonely cymbal crash. I suggest using headphones so that you remove room effects. Set up your player to loop on that note, and turn up the volume so that you can hear the end of the decay, but not so loud as to damage your hearing. Once there, modify things that can impact jitter, such as turning off your AVR's video circuits and front panel display. Or switch between DACs or transports. With practice, you should start hearing the artifacts.

At this point Ethan will say that if you have to turn up the volume, then it doesn’t matter. But as I said before, the above is for training. Once you learn the artifact, you will be able to spot it at lower levels. That said, there is no question that if your music is loud and playing near 0 dB, jitter is not a consideration. As I noted in the posts, digital is perfect at playing loud. Ask anyone who plays their home theater even on cheap hardware and they will rave about it shaking the floor! It is the subtle tones where digital starts to struggle some.

The good news here is that you can do something about jitter. Changing the digital transport, or the way samples are moved from one place to the other, can make a big difference. This is why people have invented things like “asynchronous USB” DACs for PCs. I showed examples of changing from HDMI to S/PDIF, or going from one brand to another.

While Ethan did not go there, many electrical engineers put forth another defense: why not read a lot of data into a buffer (a piece of memory), and then output the samples one by one using a very accurate clock? Unfortunately, we can’t do that outside of a studio. Our systems need to stay in sync with upstream sources, and that forces the DAC to follow the timing errors of whatever is ahead of it, rather than its own independent clock.

Think of a Blu-ray player playing a concert. The video clock drives the system timing and the audio samples are slaved to it. There are X audio samples for each frame of video. Your AVR doesn’t get to buffer those audio samples in its memory and send them out using its own clock. If it did, it would drift from the video clock over a few minutes and you would lose lip sync. Yes, this happens even with ultra-accurate crystal-controlled clocks.
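To put rough numbers on that drift (a sketch: the ±50 ppm figure is a typical consumer-grade crystal tolerance assumed here, not a measured value), two free-running clocks that differ by 50 ppm diverge by

```latex
50 \times 10^{-6} \times 60\ \mathrm{s} = 3\ \mathrm{ms\ per\ minute}
```

With lip-sync errors commonly becoming noticeable at a few tens of milliseconds, the audio would visibly lead or lag the video within roughly a quarter of an hour.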

There are techniques, such as dual PLLs, to lock to incoming signals at high accuracy while filtering jitter. But such circuits tend to be expensive, as they have to be discrete in nature. This is why I said there is merit in higher-end DACs in this space, as that kind of attention is paid to jitter and low-level digital distortions.

Note that measuring jitter is an expensive proposition. It requires equipment costing $25K, such as my Audio Precision analyzer. Be wary of people who make claims of jitter reduction without such equipment. A great example is the many companies that modify video players such as Oppo with better clocks, power supplies and such. There is no guarantee that a bigger, better, more linear power supply improves things. It may actually make things worse! Demand to see their before-and-after jitter specs, including measurement charts.

Let me also touch again on the PC audio situation. A PC can make a wonderful music source. Beyond all the convenience features, by ripping your music and putting it on a hard disk, you rid yourself of the clock that the DAC must lock to in the serial digital stream on the CD. Data is read asynchronously from the hard disk or solid state disk with no clock to speak of, putting you a step ahead.

Unfortunately, as soon as you output those samples, you are back to having a clock go with them. That clock can be supplied externally or internally. In the standard method of playing something over S/PDIF, the clock is supplied by the PC and can be subject to jitter of all types, so you are back to having to deal with this issue. DACs that use async USB avoid this problem by having the external device control the timing and pull audio samples from the PC.

Putting it all together, jitter is indeed a small level of distortion. It is not a problem for the general public, or for those with very limited funds to deal with it. For the discerning audiophile, though, it should be a consideration. Investing in good gear with good digital hygiene can help eliminate it, bringing digital closer to its ideal characteristic of being transparent to the source.
 
Jitter Audibility Summation

This exchange with Amir has been a lot of fun! I really appreciate the chance to debate technical aspects of audio one-on-one without interruption, versus what happens so often in other forums, where people with nothing of substance to offer hurl insults and worse.

Amir said "I don't believe that it is possible to prove that jitter is audible in all conditions and for all people." But that's never been the issue. All I ask for is evidence that any type of jitter is ever audible under any circumstance. It's also a logical fallacy to ask someone to prove a negative. The burden of proof is on those who make a claim. I have asked repeatedly for evidence that jitter is ever an audible problem. Not just here in this exchange with Amir, but I've asked for this for years. If a jitter proponent claims that jitter can be audible, all they have to do is prove it. Use any type of artifact you want. Correlated, uncorrelated, trebly buzz, bassy IMD-like grunge - anything you feel proves your position using the worst-case artifacts you can muster. Turn it on and off to make it even more audible if you feel that better proves the point. It doesn't have to be a test that I can pass either. If some people can recognize jitter more readily than others, I'm glad to have them prove it in a proper blind listening test. As far as I know this has never happened.

Amir said you can learn to hear jitter by changing between DACs, but that proves nothing. There could be other reasons two DACs sound different. Further, it's likely that two DACs reported as sounding different do not in fact sound different. It's common knowledge that people will report hearing a difference even when nothing has changed. As I mentioned earlier, in a recent test I was involved with I goofed a file export, and two of the three files (B and C) being judged were in fact bit-identical. Yet half a dozen people reported obvious differences between the files. Here are just two examples:

"My opinion is that file B sounds significantly better than the other two files.
A sounded the worst to me, and C came in second place."

"I found the difference between A and C not so big as between B and A/C."​

I have tested artifact audibility many times, and have never been able to hear things 80 dB below the music while the music plays. Even under the most favorable conditions. So the "problem" associated with jitter is to me an extraordinary claim that in turn demands extraordinary proof. As far as I know, no such proof has ever been offered anywhere let alone in this thread.

I was surprised to learn that HDMI audio is as bad as that Hi-Fi magazine measured. I come from the pro audio world, where people often claim that jitter is a problem with sound cards and converters. But I doubt that even the high levels of jitter the magazine measured over HDMI are ever audible. Amir said such levels of jitter reduce the s/n ratio, but that's not really true. A more accurate metric is to consider jitter a distortion artifact, because it comes and goes with the music. Amir's Audio Precision graph showing jitter sidebands at "only" 80 dB down equates to a distortion figure of 0.01 percent. This is consistent with the distortion amounts one gets from high quality audio gear, and vastly lower than that of any loudspeaker I know of. Further, that AP graph is not realistic because it uses a 10 kHz sine wave at 0 dBFS as a test signal. As Amir himself said (Post #20), "Show me where high frequency content is at 0 dB and I buy your arguments."
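The conversion from sideband level to a distortion percentage is straightforward: a sideband 80 dB below the signal is

```latex
10^{-80/20} = 10^{-4} = 0.0001 = 0.01\%
```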

I also dispute the notion that jitter can affect imaging and sound stage. In my experience as a professional recording engineer, the level for adding a just-barely noticeable amount of reverb is around -20 dB. Adding reverb at -40 dB is totally useless and does nothing. So the notion that jitter affecting the lowest bit or two can affect spaciousness is highly unlikely. I file such reports in the same bin as reports of a difference in an A/A test.

In the grand scheme of things, this is a consumerist issue. We are asked to pay more - often much more - for gear that claims lower jitter than the competition. But even cheap digital gear has jitter low enough to not be audible. Versus the obvious and highly audible difference between loudspeakers, not to mention poor room acoustics which the majority of "discerning audiophiles" blindly ignores.

I do agree with Amir that manufacturers who claim low jitter need to prove it. If every unsubstantiated claim were accepted without question, we'd all sprinkle magic pebbles around the room, stick little paper dots on our windows, and carefully demagnetize all of our LP records.

--Ethan
 
Opening thread for comments and questions

Since both participants have submitted "summary" statements on this topic, we can now open up the thread to comments and questions from all interested parties. I ask that we keep this civil and impersonal as we proceed. If you don't agree with one individual, direct questions at the statement(s) with which you disagree, not at the individual.

We are fortunate to have industry experts in many fields here to discuss these matters, so please offer the utmost in respect for their kind donation of time and information.

So, who would like to inquire about anything contained in this discussion?

Thanks,

Lee
 
Really enjoyable to read a debate conducted this way, without all of the clutter that you usually see on A/V sites with drive-by graffiti, insults, and off-topic distraction.
 
I already put in my two cents... :eek: I am working on some plots and explanations along the lines of the units and terms, sampling, and aliasing threads over in the Tech Forum. Not as a debate, but to (hopefully) help out some of us a bit confused by all the jitter talk around here... :) - Don
 
While we are waiting for questions :), I just wanted to clarify for Ethan that all of the testing I have mentioned in this thread was double-blind and repeatable in independent trials, with the conditions changed to force an alternate outcome. The testing also included using the same DAC but changing digital source characteristics. While I am not claiming this rises to the level of convincing others, it certainly convinced me :). Doubts about the reliability of DBT results conducted by people in the industry with a fair amount of rigor can, I am sure, be used to fuel a different debate :D.

And a technical correction: when the jitter profile is random, as it was in the paper and your graphs, it serves to raise the noise floor and impacts S/N. You can see that stated below your own graph, as highlighted by me now:

[Image: 961647064_JkqRZ-X3.gif]
 
While we are waiting for questions :), I just wanted to clarify for Ethan that all of the testing I have mentioned in this thread was double-blind

This is interesting. Earlier in the thread you wrote...

The more rigorous tests I ran were all double-blind. For example, I would test the effect of turning off the video circuit in a digital source (e.g. DVD-A) by turning away from the gear, hitting the switch many times in a row so that I didn't know what state it was in at the end. And then toggle back and forth while listening.

I've seen you refer to tests like these as "double blind" before. If this is the kind of thing you mean when you say all tests were "double-blind," I think there should be some sort of asterisk explaining that this is what you mean, or else perhaps you should discontinue using the term, as this is not exactly double blind. Personally, if I could cast a vote, it would be for you to discontinue using the term "double blind," since an actual double-blind test requires another level of "blindness." I don't think it helps the clarity of the debate to use a scientific term like "double-blind" unless it is truly double blind.
 
I've seen you refer to tests like these as "double blind" before. If this is the kind of thing you mean when you say all tests were "double-blind," I think there should be some sort of asterisk explaining that this is what you mean, or else perhaps you should discontinue using the term, as this is not exactly double blind. Personally, if I could cast a vote, it would be for you to discontinue using the term "double blind," since an actual double-blind test requires another level of "blindness." I don't think it helps the clarity of the debate to use a scientific term like "double-blind" unless it is truly double blind.
What is true double blind in your opinion and what is not?
 
BTW, since I explained what I meant by the term, I don't know why you are asking for a footnote. What I said must have been clear enough for you to say it wasn't valid :).
 
