Do we hear differently?

How do the analogue inputs of your speakers get processed? Do they go through an A/D stage & then through the DAC in the speakers?

They do not. They stay analogue, going directly to the preamp.

Tim
 
A few comments:

1. Knowing if a DBT is done right can be super challenging. The level of knowledge required can be immense. Even trivial things are often missed. Take the Meyer and Moran test of high-resolution audio: they did not think to run a spectrum analysis of their titles to confirm the discs did indeed have high-res spectra. They trusted the titles as bought to be such. Or take the often-cited paper on jitter that used random jitter for the audibility test. This is the least interesting test case, as random jitter just adds noise, not distortion. Who knows much about jitter past the word itself?

2. More data is always good. Even if you don't swear by double blind tests, I hope you can be convinced to look at the evidence. I read all such data religiously. There is often some learning in it even when there are flaws.

3. People on both sides fail to read the fine print. I was having an argument with Arny on jitter (what else is new :D). He kept pointing to a Dolby test on jitter. I go and look up the test. Guess what? It was a sighted test! :eek: Arny had just assumed it was double blind. Yet the user sat there with a jitter knob and adjusted it until he couldn't hear it and then the level was recorded.

4. I think hearing data-dependent and some non-linear distortion requires a person to know what they are searching for, distortion-wise. In these cases, general listening tests may not suffice to determine whether a quality difference exists. A great example is audio compression: expert listeners can hear distortions that the vast majority of users do not. It would be incorrect to conclude those distortions are inaudible because 1,000 people couldn't hear them.

Where I net out is that I take such data seriously and try to see if there is enough validity in there for me to change my audio position. If there is, I do that. The latest example is speaker testing. The thought of scoring different speakers was odd to me. But once I did it, and saw the results of others doing it, I became a believer in its value.

So give a little to the other guy's position :). I say this as someone who has spent two decades professionally in audio and having participated and created more double blind and subjective tests than I can count. :) :).
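On the random-versus-deterministic jitter point in 1: random jitter spreads energy into a raised noise floor, while periodic jitter concentrates it into discrete sidebands around the tone, which is classic FM/PM distortion rather than noise. A small numpy sketch can make the distinction visible; every number in it (5 ns timing error, 1 kHz tone, 3 kHz jitter frequency) is purely illustrative, not taken from any of the tests discussed:

```python
import numpy as np

fs = 48_000           # sample rate, Hz
f0 = 1_000            # test tone, Hz
n = 1 << 16           # FFT length
t = np.arange(n) / fs
jit = 5e-9            # 5 ns peak timing error (illustrative)
rng = np.random.default_rng(0)

# Random jitter: Gaussian timing error just raises the noise floor
x_rand = np.sin(2 * np.pi * f0 * (t + rng.normal(0, jit, n)))

# Deterministic jitter: a 3 kHz periodic timing error puts discrete
# sidebands at f0 +/- 3 kHz -- distortion products, not broadband noise
x_det = np.sin(2 * np.pi * f0 * (t + jit * np.sin(2 * np.pi * 3_000 * t)))

def spectrum_db(x):
    """Windowed magnitude spectrum, in dB relative to the peak bin."""
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(s / s.max() + 1e-12)
```

With these made-up numbers, the sinusoidal jitter produces sidebands at roughly -96 dB relative to the tone, whereas the same amount of random jitter only nudges the noise floor well below that, which is why an audibility test built around random jitter is the least revealing case.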
 
All good points, Amir, all valid. But I know myself, and I will judge the verbal communication skills of that voluptuous redhead much more objectively if I can't smell her perfume. :)

By the way, do you know how many of the samples in Meyer & Moran were not legitimate hi-res?

Tim
 
Excellent & enlightening post, amir, as usual :)

I agree with all you say. My posts were intended to do two things:
- firstly, to stop people from always citing DBT as the gold standard for evaluation: without some rigour behind them, they are no more useful than anecdote. I also gather as much info as I can when looking into a product, and anecdotal evidence is just as valuable, particularly when large numbers of individuals in different countries with different systems come to the same general conclusions (once I have evaluated the types of anecdotal evidence). For instance, if the evidence gives detail about what the person heard rather than platitudes, I pay more attention. I, in fact, believe that this sort of evidence is of more value than DBTs.
- I also wanted people to realise that they are only eliminating one set of biases; there are others, a lot of which they will not be aware of. Is sighted knowledge the major bias? That's the question I posed. There are possibly only qualified answers to this, but it does bear thinking about rather than assuming that DBT is "IT".

As you pointed out, the devil is in the details, both in carrying out these listening tests and in carrying out measurements: what is done, how it is done, what the underlying premises are, etc. are all of importance. Just citing DBTs or measurements as the final arbiter is as blinkered as ignoring them in your evaluation.
 
I also wanted people to realise that they are only eliminating one set of biases - there are others - a lot of which they will not be aware.

We'll have to agree to disagree on this one without any data, John. "Unsighted" listening, even informally, eliminates several biases, which I have named. The others I'm not aware of? I'd like to know what you think some of these might be.
 
We'll have to agree to disagree on this one without any data, John. "Unsighted" listening, even informally, eliminates several biases, which I have named. The others I'm not aware of? I'd like to know what you think some of these might be.

Tim, rather than get into specifics, ask yourself: if there weren't a number of other possible biases, why would people spend so much money to conduct scientifically rigorous DBTs?
 
By the way, do you know how many of the samples in Meyer & Moran were not legitimate hi-res?

Tim

Source
The music

While this list is not complete, most of the tests were done using these discs.

Patricia Barber – Nightclub (Mobile Fidelity UDSACD 2004)
Chesky: Various -- An Introduction to SACD (SACD204)
Chesky: Various -- Super Audio Collection & Professional Test Disc (CHDVD 171)
Stephen Hartke: Tituli/Cathedral in the Thrashing Rain; Hilliard Ensemble/Crockett (ECM New Series 1861, cat. no. 476 1155, SACD)
Bach Concertos: Perahia et al; Sony SACD
Mozart Piano Concertos: Perahia, Sony SACD
Kimber Kable: Purity, an Inspirational Collection SACD T Minus 5 Vocal Band, no cat. #
Tony Overwater: Op SACD (Turtle Records TRSA 0008)
McCoy Tyner Illuminati SACD (Telarc 63599)
Pink Floyd, Dark Side of the Moon SACD (Capitol/EMI 82136)
Steely Dan, Gaucho, Geffen SACD
Alan Parsons, I, Robot DVD-A (Chesky CHDD 2003)
BSO, Saint-Saens, Organ Symphony SACD (RCA 82876-61387-2 RE1)
Carlos Heredia, Gypsy Flamenco SACD (Chesky SACD266)
Shakespeare in Song, Phoenix Bach Choir, Bruffy, SACD (Chandos CHSA 5031)
Livingston Taylor, Ink SACD (Chesky SACD253)
The Persuasions, The Persuasions Sing the Beatles, SACD (Chesky SACD244)
Steely Dan, Two Against Nature, DVD-A (24,96) Giant Records 9 24719-9
McCoy Tyner with Stanley Clark and Al Foster, Telarc SACD 3488
 
We'll have to agree to disagree on this one without any data, John. "Unsighted" listening, even informally, eliminates several biases, which I have named. The others I'm not aware of? I'd like to know what you think some of these might be.
I will name one: experimenter bias. Say I want to make sure you can't tell the difference between a sports car and a quiet mid-sized car. I want to show that you are wasting your money buying the expensive sports car. So I set up a test where we don't measure acceleration, cornering, or braking. Instead, I have you slowly speed up and slow down and ask you which one you like better. You may then pick the quiet mid-sized car. I accomplish my goal even though I abide by all of the rules of proper testing.

If I had to pick one major beef with DBT tests of audio, that is it. Someone sets out to prove XYZ has no value. Without understanding what it takes to find said difference, they jump to the execution stage and, wouldn't you know, they find no difference. They then stop, because that is the result they wanted.

Such is the issue with counting on negative outcomes. Going by your analogy of a woman on the phone, what if the assertion was that all women are just as beautiful and yet, you conducted the test that way?
 
Going by your analogy of a woman on the phone, what if the assertion was that all women are just as beautiful and yet, you conducted the test that way?
What if I was biased towards men :)
 
Tim, rather than get into specifics, ask yourself: if there weren't a number of other possible biases, why would people spend so much money to conduct scientifically rigorous DBTs?

I'm pretty big on specifics.

Tim
 
I will name one: experimenter bias.

Indeed, that is one that is more powerful than whether there is any visibility of the devices!
 
What if I was biased towards men :)
You caught me with my experimenter bias, or should I say assumption. :) Reminds me of a story. It was 30 years ago, and we were working on this piece of software that was developed by a university. A few months later we find that one of the two people who developed it had finished his studies and was looking for a job. So we, and 10 other companies, went chasing him. My boss at the time was pretty clever and was not above getting an unfair advantage. We had this guy in the group who was a party animal and into the dating scene with girls like you would not believe. So my boss asks him to take the researcher out for a good time after work. The researcher comes and I do my thing technically during the day, which went really well. The day ends and the guy takes him out to dinner/party around town. We are thinking this is a done deal now.

The next day we are all anxious to know if the man had sealed the deal. Our guy explains all the places he took him, including the top night clubs and such, yet, he could not get him excited at all. He then says at the end of the report, "he turned to me and asked, 'you do know I am gay, right?'". Oh well. :D Needless to say, he chose to go work somewhere else. Talk about not testing your assumptions!
 
Indeed, that is one that is more powerful than whether there is any visibility of the devices!

It is if someone is deliberately trying to throw the test. If what you're doing, however, is comparing things unsighted, at home, for your own education, not trying to prove anything to anyone, you have no reason to deliberately throw the test. And in that case, it doesn't hold a candle to the brand, prestige, looks and price biases that are engaged the moment you open your eyes. I still think you're really overreaching, John.

Tim
 
Not throwing straw, just... ummm, OK.

Hardly anyone tries to fly to the moon, yet we do know it has been done. I mean, really..

[Pardon me while I duck the straw! (Smile)]
It is important to distinguish between DBTs such as the Meyer and Moran study and one which you might (hopefully :)) one day undertake for yourself. Even if we did have that perfect test, the fact that 80, 90 or even 100 percent of a control group fails to satisfy the confidence level does not, in and of itself, mean you also will not [the exception proves the rule]. Like you like to say, one cannot (necessarily) generalize to the specific. So test the specific. Take the test for yourself.

Yet another test challenge. As far as me taking the test, I refer you to the "lucky" coin (or "unlucky," depending on how you score). Yes, the degree of confidence is a function of the number of participants and trials. Indeed, in order not to be just a straw poll, there would have to be a true demographic sample.

I see it's not the process of sighted tests you don't like, it's the result. No need to make an argument: it's the "lame-stream media," Obamacare. People deal with biases all the time. It's often enough to know they exist.
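The point that confidence is a function of the number of trials has a standard quantitative form: score the listener's answers against the null hypothesis of pure guessing with a one-sided binomial test, as is commonly done for ABX trials. A minimal sketch (the trial counts below are illustrative, not from any test in this thread):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: the chance of scoring >= `correct` out of
    `trials` in an ABX test by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(9, 10), 4))   # ~0.0107: unlikely to be luck
print(round(abx_p_value(12, 20), 4))  # ~0.2517: consistent with guessing
```

Note the asymmetry this exposes: a high score in few trials can be significant, but a null result says little unless there were enough trials (and sensitive enough listeners) to have detected a real difference in the first place.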
 
It is if someone is deliberately trying to throw the test. If what you're doing, however, is comparing things unsighted, at home, for your own education, not trying to prove anything to anyone, you have no reason to deliberately throw the test. And in that case, it doesn't hold a candle to the brand, prestige, looks and price biases that are engaged the moment you open your eyes. I still think you're really overreaching, John.

Tim

Tim, my comment was not in reference to you. Asking for specific biases which might apply in your case is like trying to get a diagnosis over the phone. Why don't you tell us as much detail as you can about your test(s), and I'm sure I can have a stab at pointing out possible biases. The first one, without even knowing your test setup, is that you are aware of the devices under test; this in itself brings bias (price, reputation, country of origin, etc. - too many to name, and some that might be more specific to you than to others). The second one is expectation bias. You are probably going to say you have none, but what if I said we were going to test solid gold mains plugs on your system? Everybody has bias. It really is just psychology; nobody is immune!!
 
You caught me with my experimenter bias, or should I say assumption. :) Reminds me of a story. It was 30 years ago, and we were working on this piece of software that was developed by a university. A few months later we find that one of the two people who developed it had finished his studies and was looking for a job. So we, and 10 other companies, went chasing him. My boss at the time was pretty clever and was not above getting an unfair advantage. We had this guy in the group who was a party animal and into the dating scene with girls like you would not believe. So my boss asks him to take the researcher out for a good time after work. The researcher comes and I do my thing technically during the day, which went really well. The day ends and the guy takes him out to dinner/party around town. We are thinking this is a done deal now.

The next day we are all anxious to know if the man had sealed the deal. Our guy explains all the places he took him, including the top night clubs and such, yet, he could not get him excited at all. He then says at the end of the report, "he turned to me and asked, 'you do know I am gay, right?'". Oh well. :D Needless to say, he chose to go work somewhere else. Talk about not testing your assumptions!

No Gay Clubs?
 
A comparison between a $700 system (Sony DVD and Behringer amp) and a $12,000 system (Wadia, Classe, and fancy cables) playing through ATC SCM 12s. 14 of 38 test subjects chose the cheap system, another 14 were undecided, and the remaining 10 subjects selected the expensive one.


Sighted tests are broken because people believe they hear higher quality when they know what the product is than when they don't.

[Image: BlindVsSightedMeanLoudspeakerRatings.png - blind vs. sighted mean loudspeaker ratings]
 
And half didn't know where they were.

The bogus part of the argument is when listeners/reviewers don't like a product from a respected company, e.g. see HP's reviews of Audio Research in the early TAS. This has been repeated many times. So much for that argument. You know, I don't give a rat's ass what the product is as long as it sounds like music. And that means fidelity to early-generation 15 ips tapes.
 
So, jasonL, the MatrixhiFi site's stated objective is to debunk esoteric gear! Can you show me a "blind" test done by them where they did find a difference between components? This would demonstrate a number of things!
 

About us

  • What’s Best Forum is THE forum for high end audio, product reviews, advice and sharing experiences on the best of everything else. This is THE place where audiophiles and audio companies discuss vintage, contemporary and new audio products, music servers, music streamers, computer audio, digital-to-analog converters, turntables, phono stages, cartridges, reel-to-reel tape machines, speakers, headphones and tube and solid-state amplification. Founded in 2010 What’s Best Forum invites intelligent and courteous people of all interests and backgrounds to describe and discuss the best of everything. From beginners to life-long hobbyists to industry professionals, we enjoy learning about new things and meeting new people, and participating in spirited debates.
