AQ Jitterbug Measurements

So the plot thickens :). The results of the HiFi News measurements run counter to the principles of electronic signals. As I explained in the 44 kHz thread in the science forum, a square wave has infinite bandwidth in theory. That is how it gets its sharp transitions on the edges. Any filtering, as such, will reduce the waveform's fidelity, not improve it. Rise time, for example, should suffer because any filtering will impact the edge, reducing its slope, not increasing it as was reported in the review measurements.
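As a quick illustration of the theory, here is a pure-Python sketch (my own toy demo, not the review's measurement setup; the 1 kHz fundamental and harmonic counts are arbitrary choices). It synthesizes a square wave from its odd sine harmonics and shows that cutting the number of harmonics kept, i.e. the bandwidth, lengthens the 10%-90% rise time:

```python
import math

def square_wave(t, f0, n_harmonics):
    """Truncated Fourier series of a square wave: odd sine harmonics, 1/k weights."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * k * f0 * t) / k
        for k in range(1, 2 * n_harmonics, 2))

def rise_time(samples, dt):
    """10%-90% rise time of the first rising edge, relative to the peak value."""
    amp = max(samples)
    t10 = t90 = None
    for i, v in enumerate(samples):
        if t10 is None and v >= 0.1 * amp:
            t10 = i * dt
        if v >= 0.9 * amp:
            t90 = i * dt
            break
    return t90 - t10

f0, dt = 1e3, 50e-9                      # 1 kHz square wave, 50 ns sampling step
ts = [i * dt for i in range(2000)]       # first 100 us after the rising edge

wide = [square_wave(t, f0, 200) for t in ts]   # harmonics kept up to ~400 kHz
narrow = [square_wave(t, f0, 20) for t in ts]  # harmonics kept up to ~40 kHz

# Less bandwidth -> shallower edge -> longer rise time
print(rise_time(wide, dt) < rise_time(narrow, dt))  # True
```

This is just the textbook effect: the edge slope is carried by the high harmonics, so any low-pass filtering can only slow the edge.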

So being the person that I am :D, I took the two measurements from the review, changed the color of the one that used the AQ Jitterbug to blue, and overlaid the two in Photoshop. Here are the results:

i-gbmWK7R-X2.png



Notice two things that AQ has done:

1. The flat portion of the waveform is now tilted down (the left part of it is "tilted up," as I noted on the graph). What causes this? Boosting the low frequencies of a square wave. Here is a great web site on electronics in general, and on this topic specifically: http://sound.westhost.com/articles/squarewave.htm

This is the original square wave:

sqr-f2.gif


And this is what happens when you boost the low frequencies by 6 dB:

sqr-f8.gif


As you see, the effect very much resembles what AQ has done to our USB pulse train.
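To sanity-check that reading, here is a small pure-Python sketch (my own illustration with arbitrary parameters, not anything taken from the review): adding 6 dB of extra fundamental to a Fourier-synthesized square wave makes its "flat" top deviate strongly from flat, much like the westhost figure:

```python
import math

def square_wave(t, n=500):
    """Unit square wave (period 1) built from its odd sine harmonics."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * k * t) / k for k in range(1, 2 * n, 2))

def with_bass_boost(t, gain_db=6.0):
    """Same wave with the fundamental boosted by gain_db (6 dB ~ 2x amplitude)."""
    extra = (10 ** (gain_db / 20) - 1.0) * (4 / math.pi)
    return square_wave(t) + extra * math.sin(2 * math.pi * t)

# Sample the "flat" top of the positive half-cycle (t = 0.05 .. 0.45)
ts = [0.05 + 0.4 * i / 100 for i in range(101)]
top = [square_wave(t) for t in ts]
boosted = [with_bass_boost(t) for t in ts]

flatness = max(top) - min(top)                 # small Gibbs ripple only
boost_flatness = max(boosted) - min(boosted)   # large bow from the extra fundamental
print(flatness < 0.1 < boost_flatness)  # True
```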
I'm not so sure about your interpretation of the waveform, Amir. If the bass was boosted, the right side of the waveform should be above the original level when you overlay the graphs. That is not the case from eyeballing the overlaid graphs; instead, the plot looks more like that of an HF filter, as per this graph of a 1.59 kHz filter from westhost:
sqr-f3.gif

This might concur with what I suspect is in the Jitterbug: some common-mode chokes on the data lines, which would act as HF filters, and some inductors on the VBUS and possibly the ground line.

2. The rise time is how fast the voltage rises from its 10% value to its 90% value. This is a hard measurement to perform when you have jittery, noisy signals. You either perform a statistical analysis or eyeball it. Using the latter, to my eyes both waveforms start at the same point at the bottom left, but by the time they get to the top, the AQ waveform on average is actually lower than doing nothing (in red). The no-AQ waveform in red rises to a higher value on average than the blue one with AQ.

Seems to me then that the rise time was better without this device, which is what the theory predicts.
It's difficult to evaluate rise/fall times just from these graphs. I reserve judgment on this but, irrespective of eye patterns, a less jittered signal may well result in faster edge rates. I think the analysis jury is still out on this one.

It is also odd that Paul says that without AQ, the measured rise time was 22 nanoseconds. The limit for compliance with the USB spec is 20 nanoseconds.
Are you sure about this compliance figure? I only know of USB compliance limits on the fastest edge rates allowed: 100 ps as the absolute limit, with a warning at 300 ps, but not a maximum value for edge rates (although I'm sure there is one)?
I have a hard time imagining such a short cable, as generic as it may have been, failing compliance. With AQ, Paul says the rise time shortened to 14 nanoseconds. Per the above, I don't see how that happened. Nor would the theory predict this kind of improvement after filtering a square wave (unless there was some improved impedance matching).

My theory is that Paul may have swapped the two values. I expect the cable with no AQ to have had the compliant rise time of 14 nanoseconds, and the AQ to have worsened that to 22, not the other way around as reported. This would also agree with the previous review that we discussed, where my read was that the AQ made his cable unreliable.

Of course, all of this is based on some fuzzy pictures, so the accuracy is not there. But I wish Paul had put the cursor markers on the waveform, as is customary for such measurements, to show which points he used to compute the 10% and 90%. Paul's work is impeccable in quality so I don't expect him to have made such a mistake. But the data is hard to rationalize otherwise.
Yes, I would concur - more testing needed with more accuracy

Anyway, as I said, all of this is immaterial in the grand scheme of things.
Not sure why you say this. These measurements may or may NOT signify the underlying mechanisms, but I think it's premature to say they are "immaterial in the grand scheme of things".
 
Amir,
the eye-pattern results showed how, without the Jitterbug, the PC + USB cable was on the limits of the USB spec in Paul Miller's measurements.
With the Jitterbug this improved. However, you are right that the relevant filtering does influence the eye pattern (it can be bad or good). If you look back at the document I have linked several times now, it explains how this happens: they specialise in noise/EMI-RFI suppression in a diverse range of products, including PC-related architecture (including USB), and they point out a few key factors with USB and data rates, which will definitely be more important with the latest standards.
Cheers
Orb
 
Amir said:
It is also odd that Paul says that without AQ, the measured rise time was 22 nanoseconds. The limit for compliance with the USB spec is 20 nanoseconds. I have a hard time imagining such a short cable, as generic as it may have been, failing compliance. With AQ, Paul says the rise time shortened to 14 nanoseconds. Per the above, I don't see how that happened. Nor would the theory predict this kind of improvement after filtering a square wave (unless there was some improved impedance matching).

My theory is that Paul may have swapped the two values. I expect the cable with no AQ to have had the compliant rise time of 14 nanoseconds, and the AQ to have worsened that to 22, not the other way around as reported. This would also agree with the previous review that we discussed, where my read was that the AQ made his cable unreliable.

Of course, all of this is based on some fuzzy pictures, so the accuracy is not there. But I wish Paul had put the cursor markers on the waveform, as is customary for such measurements, to show which points he used to compute the 10% and 90%. Paul's work is impeccable in quality so I don't expect him to have made such a mistake. But the data is hard to rationalize otherwise.

Anyway, as I said, all of this is immaterial in the grand scheme of things.
Why is this odd, Amir?
As I keep saying, understanding the passive and active components and the environment is crucial for any testing, especially digital.
He states in the review that he used the setup utilised to test a diverse range of USB cables back in 2014, having investigated USB back in 2013.
So yes, he has a diverse range of cables where some are on or just outside the limit. As I said earlier, read the review at Audiostream: Michael also had a cable that was probably on the limits; without the Jitterbug he could not use that cable at higher sampling rates, but with the Jitterbug he could.
Back in 2013 Paul Miller also showed how jitter varied a fair bit between cables, from several non-branded ones to audio-branded ones; ironically, one of the best cables he has had was a non-branded one, but it is no longer available in any form.
He also looks to test this in the right circumstances for measurements; for the Melco N1A this involved DAC products that can run from battery, while here he suggests the focus should be DACs that require the 5V (I think I also mentioned that).

It is a complex situation because not all laptops/PCs will be equal in terms of noise/suppression, compounded by the USB bus/grounding and also the DACs/receiver chips/isolation/etc.
Cheers
Orb
 
Hold on, guys - are we mixing up the USB cable-delay compliance figure of 26 ns with edge rates?

You will find the edge-rate compliance figures for USB here. As I said, only the fastest edge rates are specified (100 ps is the absolute limit and 300 ps a warning limit); no slowest edge rate is specified, although a cable delay of 26 ns is.
 
The primary claim being made with the Regen and Jitterbug is that jitter is reduced. I noticed that the Wyrd does claim to improve drop-outs but says nothing at all about lower jitter.

I thought the Jitterbug's primary claim is that it is a "dual filter" focused on noise on the VBUS and the USB data lines?
Indirectly it may influence jitter.
Cheers
Orb
 
Thanks for posting the graphs, Amir

Just a question: what do you think an eye pattern shows, and how do you interpret its acceptability "in this context"?

My understanding of these eye patterns: what you are seeing in these graphs is a repeated overlay of scope traces of the waveform.
If there were no jitter, each scope trace would exactly overlay the previous one, and the lines on the graph would be very thin.
With timing differences (jitter) between the waveforms, the overlays make the lines appear wider in the graph.
The idea of an eye pattern is that USB compliance has a maximum allowable amount of jitter (a jitter budget) which will not cause data errors.
The eye-pattern graph is a diagrammatic representation of this allowable jitter budget, the idea being that at a certain point of closure of the eye, compliance fails. There is a template that can be calculated and overlaid on these eye graphs, allowing one to judge whether compliance is reached or not.
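The "overlaid traces" idea can be sketched in a few lines of Python (a toy model with made-up numbers, not a USB simulation): each capture is a 0-to-1 transition whose edge lands at a randomly jittered position, and the per-column spread of the overlay is the "thickness" of the lines you see on the scope:

```python
import random
random.seed(0)

SPB = 20  # scope samples per bit period (arbitrary toy resolution)

def edge_trace(jitter):
    """One capture of a 0->1 transition over a bit period; the edge lands at
    mid-period plus a random offset of up to +/- jitter samples."""
    edge = SPB // 2 + random.randint(-jitter, jitter)
    return [0 if i < edge else 1 for i in range(SPB)]

def overlay_spread(traces):
    """Per-column spread of the overlaid captures: 0 where every trace agrees
    (a thin line on the scope), 1 where they differ (a thick, blurred line)."""
    return [max(col) - min(col) for col in zip(*traces)]

clean = overlay_spread([edge_trace(0) for _ in range(200)])
jittery = overlay_spread([edge_trace(3) for _ in range(200)])

print(sum(clean), sum(jittery))  # 0 for the jitter-free overlay, >0 with jitter
```

More jitter smears more columns, closing the eye; that is all the eye pattern conveys about jitter level, not its spectrum.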

But this is about compliance. Are there any guidelines about changes in this jitter pattern vs. audibility? Can changes in this jitter be written off so nonchalantly as irrelevant to audibility?

Remember also that the eye pattern does not show the spectrum of the jitter, just a feel for its overall level.

Some further information about rise/fall time compliance in USB from here

"There has always been a problem accurately measuring rise and fall times, especially on high speed devices. The measurement of interest is the edge rate, or slew rate, during the state change time. To help improve accuracy of the measurement, the USB-IF is standardizing on one test fixture for high-speed signal quality.

Aside from the fixturing and probes used to take the measurements, major contributors to the inaccuracies in these measurements are the shape of the edge, noise on the signal and the method of calculating the 10% and 90% points as defined in Sections 7.1.2.1 and 7.1.2.2 of the USB 2.0 Specification.

A waveform with slow corners (see sample eye diagram below) will result in a measured rise time that is slower than the actual edge rate would indicate. Also a small change in the position of the 10% and 90% points due to noise on the signal, etc., can cause a relatively large change in the measured rise time. "

View attachment 22123

"The relaxed edge rate values of 300ps and 100ps apply for high-speed USB signaling"
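The quoted USB-IF point, that noise moving the 10% and 90% crossing points can swing the measured rise time, is easy to demonstrate with a toy simulation (my own sketch with arbitrary numbers, not the USB-IF test fixture or Paul's measurement):

```python
import random
random.seed(1)

def measured_rise_time(noise_amp):
    """10%-90% rise time of a linear 0-to-1 edge (ramping over 100 samples,
    so the true 10%-90% time is 80 samples) with added uniform noise,
    using first-threshold-crossing as a naive scope measurement would."""
    ramp = [min(1.0, i / 100) for i in range(150)]
    noisy = [v + random.uniform(-noise_amp, noise_amp) for v in ramp]
    t10 = next(i for i, v in enumerate(noisy) if v >= 0.1)
    t90 = next(i for i, v in enumerate(noisy) if v >= 0.9)
    return t90 - t10

clean = [measured_rise_time(0.0) for _ in range(50)]
noisy = [measured_rise_time(0.05) for _ in range(50)]

# Noise-free readings are all identical; +/-5% noise scatters the readings
print(set(clean), min(noisy), max(noisy))
```

Even modest noise shifts the apparent threshold crossings, which is exactly why cursor markers (or a statistical method) matter for these measurements.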

As I said before: is the eye pattern a good measurement for correlation with audibility? I think someone answered yes and used the JA measurements as evidence, but I need to look into this further.
Yes and no, because in the Paul Miller measurement he states that deterministic jitter is broadly the same from the eye-pattern measurement, but analysing jitter at the output of a DAC with and without the product shows subtle jitter deviation.
The key point, though, is that he ensured he used a DAC that takes in the 5V.
In reality, the effect on users is going to be between the two extremes of nothing improved and what happens with a DAC accepting the +5V, depending on what hardware (the internal design of the PC/laptop or comparable) is the source and its design scope for noise/grounding/etc.

So if anyone is looking to test, they really need to think this through very carefully: multiple PC/laptop-oriented products, multiple USB cables if one cannot measure the eye pattern, and critically multiple DACs, where some need the +5V and others are at the extreme of galvanic isolation.
Cheers
Orb
 
Amir,
the eye pattern results showed how without the Jitterbug the PC+USB cable was on the limits of the USB spec in Paul Miller's measurements.
The eye pattern doesn't show the rise time, so we can't determine that. What it does show, to my eye at least, is that the eye pattern got worse, not better, with AQ:

i-sQz6gmL-X2.png


Without AQ, on top, we have flat horizontal sections (within the noise margin). With AQ, they are now tilted and distorted. That is not an improvement but a degradation.
 
OK, maybe we are interpreting rise time from the eye pattern differently (I am basing mine on telecoms).
From his graph, look at the red line as it rises just before and just beyond the bit-period crossover.
Without the Jitterbug you can see that it takes longer, though the scale he used to fit the publication does not help much in seeing such a subtle difference :)
Using an older document here from one of the component manufacturers so it goes beyond my own words:
Rise time is a measure of the mean transition time of the data on the upward slope of an eye diagram.
The measurement is typically made at the 20 and 80 percent or 10 and 90% levels of the slope.
Yes using noise filters on the USB will 'distort' the eye pattern but can be good and bad, will repost that manufacturer document I linked earlier.
Cheers
Orb
 
OK, maybe we are interpreting rise time from the eye pattern differently (I am basing mine on telecoms).
From his graph, look at the red line as it rises just before and just beyond the bit-period crossover.
Without the Jitterbug you can see that it takes longer, though the scale he used to fit the publication does not help much in seeing such a subtle difference :)

But it doesn't. Here are the two overlaid again:

i-gbmWK7R-X2.png


Both waveforms have the same peak values (visually). And it is clear that the AQ version in blue takes longer to get there compared to the red without.

Let's look at the 1.0-volt threshold, for example. Go to the right and you see that the red line is to the left, meaning it took less time to get to that value compared to the AQ one in blue.
 
Yes and no, because in the Paul Miller measurement he states that deterministic jitter is broadly the same from the eye-pattern measurement, but analysing jitter at the output of a DAC with and without the product shows subtle jitter deviation.
The key point, though, is that he ensured he used a DAC that takes in the 5V.
In reality, the effect on users is going to be between the two extremes of nothing improved and what happens with a DAC accepting the +5V, depending on what hardware (the internal design of the PC/laptop or comparable) is the source and its design scope for noise/grounding/etc.
So if anyone is looking to test, they really need to think this through very carefully: multiple PC/laptop-oriented products, multiple USB cables if one cannot measure the eye pattern, and critically multiple DACs, where some need the +5V and others are at the extreme of galvanic isolation.
Cheers
Orb
Sorry, Orb, I don't understand which part of my post you are responding to. Is it my last statement? If so, then I'm not following what you are saying.
 
Are you sure about this compliance figure? I only know of USB compliance limits on the fastest edge rates allowed: 100 ps as the absolute limit, with a warning at 300 ps, but not a maximum value for edge rates (although I'm sure there is one)?
USB 2.0 has three speeds. The 20-nanosecond figure is for Full-Speed (FS) mode, which is what Paul is measuring. High-Speed requires 500 picoseconds, but that is not the mode he is using/measuring.
 
Sorry, Orb, I don't understand which part of my post you are responding to. Is it my last statement? If so, then I'm not following what you are saying.

Sorry, it was a mix of replying to you and in general.
The "yes and no" was in response to part of your post and the question of how useful the eye measurement is in terms of jitter/noise: it cannot be used on its own, which is why those who tested one of the regenerators used associated digital and analogue measurements.
If you look back at the Murata document I linked earlier (page 13), they show an example where utilising their filters correctly shows no difference from an eye-pattern perspective, but it does reduce noise emissions by roughly 5 dB. The emphasis is that we are talking about noise measured in the MHz range of the transmission, not audio.
Cheers
Orb
 
Amir,
again it comes down to interpreting the eye pattern, and critically the slow corners that, if not accounted for, will give a wrong measurement.
As you have an AP, you should have access to the Agilent documents?
Also, I think this is touched upon in one of the USB compliance mandates (I will need to check for an update/addendum to the mandate). Sorry JKeny, I really should look at the links you provided, as it could be in there.

Cheers
Orb
 
USB 2.0 has three speeds. The 20-nanosecond figure is for Full-Speed (FS) mode, which is what Paul is measuring. High-Speed requires 500 picoseconds, but that is not the mode he is using/measuring.

And that is the challenge mentioned by Murata: ideally you want different filter implementations between FS and HS, as you will get deformations; so, if implemented, it is a balance of the pros/cons (I need to recheck the end-of-packet criteria).
But as seen by those using the product, we have heard of no negatives so far, while we do know that it actually meant one cable that used to fail at higher sampling rates could then be used, as mentioned by Audiostream (so a positive).

Cheers
Orb
 
USB 2.0 has three speeds. The 20-nanosecond figure is for Full-Speed (FS) mode, which is what Paul is measuring. High-Speed requires 500 picoseconds, but that is not the mode he is using/measuring.

OK, I assumed it was HS USB we were talking about - most USB audio devices now operate at HS.
Why did PM test and measure an FS USB device and not the more commonly used HS USB? As Orb says, the filtering requirements are different for FS vs. HS, and surely the Jitterbug is designed for HS use? The operating speed of the Dragonfly isn't FS, is it? Oops, looks like it is :)
This makes the measurements sufficiently non-representative to be of little value, and I would suspect outside the use case that the Jitterbug was designed for.
 
OK, I assumed it was HS USB we were talking about - most USB audio devices now operate at HS.
Why did PM test and measure an FS USB device and not the more commonly used HS USB? As Orb says, the filtering requirements are different for FS vs. HS, and surely the Jitterbug is designed for HS use? The operating speed of the Dragonfly isn't FS, is it? Oops, looks like it is :)
This makes the measurements sufficiently non-representative to be of little value, and I would suspect outside the use case that the Jitterbug was designed for.
It would be interesting to check how many DACs are FS or HS and utilising the 5V.
That said, in theory the concept would still be applicable to HS, and without knowing what manufacturer's filters and components they used we cannot tell if it also works for that; so deformation is guaranteed, but the filtering is a pro if they considered both situations.
Cheers
Orb
 
