Audible Jitter/amirm vs Ethan Winer

Oops, I know. I had intended to quote more of Marty's post than just that one sentence, so I'll try to do it this time:

Ethan's key argument that jitter, if it is 100 dB below the signal, must by definition be inaudible, reminds me of the old argument that an amplifier's performance at 40 kHz must be irrelevant since we can only hear up to ~20 kHz. We now know that not to be the case, as the harmonics of signals at 40 kHz may be heard in the audible range. The main reason I think jitter is likely audible is the recent work of some vehement anti-jitterholics such as Ed Meitner, whose recent effort, the XDS1, greatly impressed me.
Now I will re-ask Marty this question: how do we know that? How do we connect those two sentences? Where is the proof of audibility?
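
For concreteness: jitter this small shows up as sidebands around each tone. A minimal numpy sketch (assumed values only, not anyone's actual measurement: 1 ns peak sinusoidal jitter on a 10 kHz tone) shows where that energy lands relative to the signal:

```python
import numpy as np

# Sketch: sinusoidal sampling jitter on a pure tone produces sidebands
# at f_sig +/- f_jitter. Small-angle theory predicts a sideband level of
# 20*log10(pi * f_sig * J_peak) dB relative to the carrier.
fs = 192_000          # sample rate, Hz
f_sig = 10_000        # test tone, Hz
f_jit = 1_000         # jitter frequency, Hz
j_peak = 1e-9         # 1 ns peak sinusoidal jitter (assumed value)
n = 1 << 18

t = np.arange(n) / fs
t_err = t + j_peak * np.sin(2 * np.pi * f_jit * t)   # jittered sample instants
x = np.sin(2 * np.pi * f_sig * t_err)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
spec_db = 20 * np.log10(spec / spec.max())
freqs = np.fft.rfftfreq(n, 1 / fs)

side = spec_db[np.argmin(np.abs(freqs - (f_sig + f_jit)))]
print(f"measured sideband: {side:.1f} dBc")
print(f"theory: {20*np.log10(np.pi*f_sig*j_peak):.1f} dBc")   # ~ -90 dBc
```

Whether sidebands at roughly -90 dBc are audible is precisely what is in dispute here; the sketch only shows where the energy goes.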
 
A question: Are the subharmonics generated in the ear? It takes a nonlinearity to generate subharmonic (mixing) products, and I am curious if the ear (or ear/brain system) does this. I do not know.

Curious - Don
 

I'm sure that you will like this one:
http://lab.rockefeller.edu/hudspeth/research/Hopf_bifurcation

The ear is a non-linear system, with the hair cells operating near dynamic instability. The threshold of hearing (4 dB SPL at 1 kHz is commonly accepted in scientific circles) corresponds to air vibrations on the order of a tenth of an atomic diameter. This means that without the non-linear system of the ear, just the Brownian motion of the air molecules adjacent to the eardrum would drive all of us batty.
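
As a rough sanity check on that "tenth of an atomic diameter" figure, a back-of-envelope calculation assuming a plane wave in air (my own arithmetic, not a claim from the linked paper):

```python
import math

# Back-of-envelope: air particle displacement at the threshold of hearing.
# Plane-wave particle displacement: xi = p / (2*pi*f * rho*c)
p_ref = 20e-6                 # 0 dB SPL reference pressure, Pa (rms)
spl = 4.0                     # threshold at 1 kHz, dB SPL (as quoted above)
f = 1_000.0                   # Hz
rho_c = 413.0                 # characteristic impedance of air at 20 C, Pa*s/m

p = p_ref * 10 ** (spl / 20)
xi = p / (2 * math.pi * f * rho_c)
print(f"displacement ~ {xi*1e12:.1f} pm")   # ~12 pm, vs ~100-500 pm atomic diameter
```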

The ear also produces sound non-linearly (otoacoustic emissions), creating interference that helps improve sensitivity to micro-dynamic information well above the threshold of hearing.
http://emedicine.medscape.com/article/835943-overview

So to answer your question, yes and yes.
 
There is also a bit more information here: http://en.wikipedia.org/wiki/Ultrasonic_hearing

We've had a tiny bit of discussion about the claims made by Oohashi et al., and there has been a ton of discussion about those claims at AVS and hydrogenaudio. Suffice it to say, the purported results of their testing, and thus their claim of audibility, have never been successfully repeated. To the contrary, there are more credible explanations of their results. If JJ were participating in this thread, he'd tear Oohashi and the true believers a new one.
 
I did not put that link up there as proof of anything, but rather to answer Don's question about some sources on the topic. Short Wiki articles are always a red flag to me, and that one was no exception. If JJ has something to say about the topic, he would do well to go and edit that article.

Personally, I like to go 20% or so above 20 kHz for two reasons:

1. There is some evidence of people hearing above 20 kHz and the modulation effects thereof. See below.

2. It gets rid of a lot of arguments that way :).

Going beyond 24-25 kHz doesn't get my vote, as I worry about who can verify what the equipment is doing in that region.

The best read along these lines is from Bob Stuart of Meridian fame. His writing is rather pragmatic and nicely middle-of-the-road, like mine. :D http://www.meridian-audio.com/w_paper/Coding2.PDF

"PSYCHOACOUSTIC DATA ON HIGH-FREQUENCY HEARING

There is very little hard evidence to suggest that it is important to reproduce sounds above 25kHz.
Instead there tends to be a general impression that a wider bandwidth can give rise to fewer in-band
problems. However, there are a few points to raise before dismissing audible content above 20kHz as
unimportant.

The frequency response of the outer and middle ear has a fast cut-off rate due to combined roll-off in the
acoustics of the meatus and in mechanical transmission. There also appears to be an auditory filter cutoff
in the cochlea itself.

The cochlea operates ‘top-down’, so the first auditory filter is the highest in frequency. This filter
centres on approximately 15kHz, and extrapolation from known data suggests that it should have a
noise bandwidth of approximately 3kHz. Middle-ear transmission loss seems to prevent the cochlea
from being excited efficiently above 20kHz.

Bone-conduction tests using ultrasonics have shown that supersonic excitation ends up in this first ‘bin’.
Any supersonic information arriving at above 15kHz therefore ends up here, and its energy will
accumulate towards detection. It is possible that in some ears a stimulus of moderate intensity but of
wide bandwidth may modify perception or detection in this band, so that the effective noise bandwidth
could be wider than 3kHz.

The late Michael Gerzon surmised that any in-air content above 20–25kHz derived its significance from
non-linearity in the hearing transmission, and that combinations of otherwise inaudible components
could be detected through any resulting in-band intermodulation products.

There is a powerful caution against this. As far as the author knows, music spectra that have measured
content above 20kHz always exhibit that content at such a low spl that it is unlikely that the (presumed)
lower spl difference distortion products would be detectable and not masked by the main content."
 
Personally, I like to go 20% or so above 20 kHz for two reasons:

1. There is some evidence of people hearing above 20 kHz and the modulation effects thereof. See below.

2. It gets rid of a lot of arguments that way :).
I'm 100 [ or should I say 120? ;-) ] percent in agreement with you on number 1. Actually, I like your number 2 as well.
 
There's still no definitive answer to whether frequencies above 20 kHz are needed. However, this study shows that with a crash cymbal, 40% of the total sound energy is above 20 kHz.
http://www.its.caltech.edu/~boyk/spectra/spectra.htm

I've tried some experiments with recording jangling keys (68% of energy above 20 kHz), sampling at 96 kHz, and I can always tell if I put in a filter at 30 kHz. However, that may be because of my badly implemented filter, or my lousy recording technique.
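
For anyone wanting to repeat that kind of experiment, here is a minimal sketch of a linear-phase 30 kHz low-pass for 96 kHz material, using scipy and the soundfile package; the filename and tap count are placeholders, not from the experiment above:

```python
import numpy as np
import soundfile as sf                      # assumed available for file I/O
from scipy.signal import firwin

# Sketch: a steep linear-phase FIR low-pass at 30 kHz for 96 kHz material.
x, fs = sf.read("keys96k.wav")              # placeholder filename; fs assumed 96000
if x.ndim > 1:
    x = x.mean(axis=1)                      # fold to mono for simplicity

taps = firwin(1023, cutoff=30_000, fs=fs, window="blackmanharris")
y = np.convolve(x, taps, mode="same")       # linear-phase filtering

# Fraction of signal energy above 20 kHz, before and after filtering
def hf_fraction(sig, fs, f0=20_000):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[freqs > f0].sum() / spec.sum()

print(f"HF energy before: {hf_fraction(x, fs):.1%}, after: {hf_fraction(y, fs):.1%}")
```

A long linear-phase FIR like this avoids the passband ripple and phase wobble near the cutoff that a hastily designed filter can introduce.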
 
I skimmed the articles and bookmarked them for more thorough reading later, thanks very much!

The Wiki stub seems to be the only one that actually delves into ultrasonics, unless I missed it? I'm from Missouri; do these show me that we can actually hear (i.e. something in the ear responds) at 40 kHz? If e.g. the hair cells move enough, I could understand that they (or the brain) might mix down, but if their sensitivity is nil at that high a frequency then I am not sure it matters...

I am leaving aside the probability of one's speakers generating that high-frequency energy as irrelevant to my fundamental question. Nor is the question of the high frequency content of keys jangling, cymbals, instrumental overtones, etc. in doubt; just my ability to hear them!

The highest my hearing was ever tested was 22 kHz whilst I was in college, but judging by my more recent loudspeaker tests while setting up my system, it's in the 10-12 kHz region now (at age 51).

BTW, how did we get to this from jitter?
 
There's still no definitive answer to whether frequencies above 20 kHz are needed. However, this study shows that with a crash cymbal, 40% of the total sound energy is above 20 kHz.
http://www.its.caltech.edu/~boyk/spectra/spectra.htm

I've tried some experiments with recording jangling keys (68% of energy above 20 kHz), sampling at 96 kHz, and I can always tell if I put in a filter at 30 kHz. However, that may be because of my badly implemented filter, or my lousy recording technique.

LOL, I was about to mention Boyk, but more as a suggestion that someone should make an exceptional recording of, say, several instruments each (individually) playing a held note, and then do AB testing with some people, where one version is cut off for CD while the other is 24/96 or 24/192. If that showed statistical differences, then go AB with A having a higher filter but still below 30 kHz, and B still the 24/96 or 24/192.

The other aspect of the Boyk study that I find really interesting (more so for me anyway) is that it shows how incredibly complex the fundamental/harmonics are even for a single note (B flat in this case).
In the first chart for trumpet, the harmonics up to 8 kHz could be said to peak around 48 dB; at 30-32 kHz they peak from just above 26 dB to around 30 dB, while around 20 kHz they peak just above 30 dB.
In a way seeing how complex the waveform is for a single note makes me wonder just how comparable single tones are for testing audio hardware.

I do appreciate those are required for specific tests/troubleshooting/making sense of data/etc., but as I touched on in other threads, I am not sure they are truly representative enough (that for me is the key) for us to correlate the hardware's reproduction performance with actual musical instruments, i.e. audibly different or not.
A quick, easy example was comparing simple IMD-related test tones to a complex test signal, which showed a very different performance picture.

Cheers
Orb
 
The other aspect of the Boyk study that I find really interesting (more so for me anyway) is that it shows how incredibly complex the fundamental/harmonics are even for a single note (B flat in this case).
In the first chart for trumpet, the harmonics up to 8 kHz could be said to peak around 48 dB; at 30-32 kHz they peak from just above 26 dB to around 30 dB, while around 20 kHz they peak just above 30 dB.
In a way seeing how complex the waveform is for a single note makes me wonder just how comparable single tones are for testing audio hardware.

True - music is incredibly complex, and single tones, and even the maximum length sequence (MLS) signals commonly used by loudspeaker designers, are not really sufficient. Square waves are also not good enough - the Fourier series of a square wave is the fundamental plus every odd harmonic. The leading-edge sawtooth is a better signal to test equipment with, because the sawtooth is the fundamental frequency plus EVERY harmonic.
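
A quick way to see that difference is to FFT both test signals; this numpy/scipy sketch (my own illustration, with arbitrary frequencies) prints the levels of harmonics 1-5:

```python
import numpy as np
from scipy.signal import square, sawtooth

# Sketch: harmonic content of square vs sawtooth test signals.
# A square wave has only odd harmonics (~1/n amplitude); a sawtooth has
# every harmonic (also ~1/n), which is why it exercises more of the band.
fs, f0, n = 48_000, 1_000, 48_000
t = np.arange(n) / fs

for name, sig in [("square", square(2 * np.pi * f0 * t)),
                  ("sawtooth", sawtooth(2 * np.pi * f0 * t))]:
    spec = np.abs(np.fft.rfft(sig)) / (n / 2)
    # harmonics land exactly on FFT bins because f0 divides fs and n == fs
    levels = [20 * np.log10(spec[k * f0 * n // fs] + 1e-12) for k in range(1, 6)]
    print(name, [f"{v:.1f} dB" for v in levels])
```

The square wave's even harmonics print down at the numerical noise floor, while the sawtooth shows every harmonic falling off smoothly.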

Thanks to Steve McCormack, I have a recording of every key of a grand piano. He made a recording where he tried to hit every single key with the same amount of strength. Playing this through any system is usually a revelation - how some notes boom or are recessed shows up the system/room interaction incredibly well.

Changing out speaker cables can change the interaction - which initially totally surprised me.

Sorry that this is getting far out of topic!!
 
Amirm, would you agree re: jitter --

1. certainly it CAN be audible (which I hope nobody doubted anyway),
2. more likely to be audible if correlated to signal ('deterministic' -- this too was already known, and the spectrum matters too)
3. more likely to be heard over headphones, using a revealing probe signal (also a no-brainer)

BUT

4. QUITE UNLIKELY to be the culprit, with *music* played over *loudspeakers*, i.e., the sort of situation typical when jitter is blamed for bad sound by 'high end/audiophile' listeners
 
Amirm, would you agree re: jitter --
I smell a bait coming. :D

1. certainly it CAN be audible (which I hope nobody doubted anyway),
I think some in the thread doubt it. But I take a win when I can :D.
2. more likely to be audible if correlated to signal ('deterministic' -- this too was already known, and the spectrum matters too)
Yes, although it is better to say that anything but random jitter is more audible (see the sketch below).
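
That distinction is easy to picture: with equal RMS timing error, sinusoidal (deterministic) jitter piles its energy into two discrete sidebands, while random jitter spreads the same energy into a low noise floor that is far easier to mask. A rough numpy sketch, again with assumed values (1 ns RMS jitter on a 10 kHz tone at 192 kHz):

```python
import numpy as np

# Sketch: equal-RMS sinusoidal vs random jitter on the same 10 kHz tone.
fs, f0, n = 192_000, 10_000, 1 << 18
rms_j = 1e-9                                   # 1 ns RMS jitter in both cases
t = np.arange(n) / fs
rng = np.random.default_rng(0)

jit_sine = rms_j * np.sqrt(2) * np.sin(2 * np.pi * 1_000 * t)
jit_rand = rng.normal(0, rms_j, n)

freqs = np.fft.rfftfreq(n, 1 / fs)
mask = np.abs(freqs - f0) > 200                # exclude the carrier's skirt

for name, j in [("sinusoidal", jit_sine), ("random", jit_rand)]:
    x = np.sin(2 * np.pi * f0 * (t + j))       # jittered sample instants
    spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
    print(f"{name}: worst spur {spec[mask].max() - spec.max():.1f} dBc")
```

With these numbers the sinusoidal case shows discrete sidebands near -87 dBc, while the random case's worst spectral line sits tens of dB lower, buried in a broadband floor.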

3. more likely to be heard over headphones, using a revealing probe signal (also a no-brainer)

BUT

4. QUITE UNLIKELY to be the culprit, with *music* played over *loudspeakers*, i.e., the sort of situation typical when jitter is blamed for bad sound by 'high end/audiophile' listeners
I have never attempted to test it with speakers. So I have no data to share there.

It is true that headphones provide some advantages:

1. The other speaker can't mask the distortion from this channel.

2. You eliminate room effects, allowing you to hear low level detail more clearly.

3. You can turn up the volume more than you would with speakers.

All of these do make speaker testing harder. Does it make it impossible? Again, I don't know for sure. What I do know is that when I listen to people telling the difference between DACs, half the time they don't make sense to me, but the other half kind of does. For example, saying you can hear "air" around music can be translated to low-order bits being reproduced or not. So I would say there is some chance that some people hear the difference with speakers.
 
...Thanks to Steve McCormack, I have a recording of every key of a grand piano. He made a recording where he tried to hit every single key with the same amount of strength and recorded it. Playing this through any system is usually a revelation - how some notes boom or are recessed show up incredibly well the system/room interaction...

I am very interested in this. Is it possible to get a copy from somewhere?
 
I'll ask him if I can release it to this forum - and I should probably be able to attach it as a WAV?

Not sure if a thread on jitter is an appropriate place to put it though.... Mods?

Or, I am willing to pay a reasonable amount since it seems like a piece of intellectual property that has value. In what circumstances was it recorded? (Talking about the piano file.)
 
