I am sure I am the rare exception. But yes, I did select just about every component I care about using blind testing. I went as far as dragging my current gear to the dealer and having him switch things back and forth for me while I listened! So I throw the question back. Given there is no perfect component out there, why did any of you choose what you have now? Did any of us blindfold ourselves in the dealer's shop? More importantly, should we? How important is actively participating in a DBT shootout to you when you select a component for your own use?
It would be if, as a result of these discussions, someone, someplace, decides to try the other guy's scheme. That is the only way we can truly learn the other guy's argument. I'll give my answer now: it means squat to me.
Betting or wagering of any part of your anatomy is strictly prohibited.
@Ron
Of course not. Nor does Dr. Olive dismiss the backgrounds of his test subjects. In fact, they ARE extremely relevant when he mines his data.
Now suppose for a minute that you are a manufacturer. If you ARE targeting folks of a particular aesthetic bias, then that bias is a plus, not a minus. We forget these are businesses, not academic institutions. Their reason for being is to make us happy, not enlighten us; leave that to the universities. It would be a mistake to expect an artisanal company trying to find a niche to keep its doors open to stop trying to zero in on its client base's hot buttons.
Thanks for the thoughtful reply. If you're interested, let's keep the conversation going, and maybe Dr. Olive will join in, seeing as his integrity is being called into question.
Perhaps I'm mistaken, but as I read your post, you were not scrutinizing the bias of his test subjects; instead, you were, at a minimum, implying that Dr. Olive himself is biased and/or has a conflict of interest because he is employed by a manufacturer of certain audio products.
I thought I'd toweled off but into the pool I go again!
DBTs are useful but I really believe they are rough tests. Very rough tests. They point out GROSS differences.
That is certainly not the case if the listeners are carefully selected and trained based on their hearing and ability to discriminate sound quality differences in a consistent fashion. We've developed listener training software to teach listeners how to identify and rate different types of distortions in audio, which gives us performance metrics on how discriminating and consistent they are. Each listener has an overall Listening IQ score that tells us who the best listeners are. We also statistically analyze and monitor their performance in actual product evaluations, and if they start to slip, we check their hearing and send them back for more training.
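(For the curious, here is a rough idea of what a "consistency" metric for a listener could look like. This is just a minimal sketch of my own in Python, not Harman's actual training software or the real Listening IQ formula; the function name, the 0-10 rating scale, and the test-retest approach are my assumptions.)

```python
import numpy as np

def listener_consistency(ratings):
    """Estimate how consistently a listener rates the same products.

    ratings: 2-D array of shape (n_products, n_repeats) holding one
    listener's preference ratings for each product across repeated,
    blind presentations.

    Returns the mean correlation between repeated rating passes --
    higher values mean the listener reproduces their own judgments.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_repeats = ratings.shape[1]
    corrs = []
    for i in range(n_repeats):
        for j in range(i + 1, n_repeats):
            corrs.append(np.corrcoef(ratings[:, i], ratings[:, j])[0, 1])
    return float(np.mean(corrs))

# Hypothetical example: a listener rates 5 loudspeakers on a 0-10 scale,
# three times each, without knowing which speaker is which.
ratings = [[8.0, 7.5, 8.2],
           [3.0, 3.5, 2.8],
           [6.0, 5.5, 6.4],
           [9.0, 8.8, 9.1],
           [4.0, 4.6, 3.9]]
print(f"test-retest consistency: {listener_consistency(ratings):.2f}")
```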
There is room for both, but in my opinion the importance of any ABX/DBT in the finalization of a product pales in comparison to that of evaluations done by a trained panel.
I agree, and that is exactly what we do. We use expert listeners to evaluate the prototypes. My experience so far indicates that if the experts approve of the product in terms of its sound quality, the untrained listeners will love it (check out the graph comparing loudspeaker preferences of trained versus untrained listeners).
Take Harman, for example. What good would it do for, say, Kevin to do a no-holds-barred assault on loudspeaker design meant to cater to the most discerning Harman clients, then change the design based on the results of a DBT using folks off the street? It would be fine for an entry-level JBL, I'm sure, but a halo product? I think not. I'll bet my left nut that in such a situation a panel had been assembled and employed, and that DBT respondents would be carefully selected.
You're exactly correct!
Does that introduce bias by selecting the type of panel assembled?
That was our experience at Microsoft also. Here is what we found: expert listeners managed to outdo most listeners, including the majority of audiophiles. They could more consistently pick the control (i.e. say there was no difference when there wasn't) and better hear differences when there were any. Only a small percentage of the general testers (which, again, included audiophiles) could match them. I applauded the few people who were "gifted" enough to have the same level of hearing ability without any of the training of the experts.

So I've brought in different groups of listeners from outside Harman who aren't trained and run them through the same tests. Lo and behold, these untrained, naive listeners pick the same loudspeakers as those preferred by the trained panel. The difference is that the ratings from the trained listeners tend to be higher overall on the preference scale, and there is more noise/inconsistency in the ratings.
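(As an aside on "picking the control": whether a listener is doing better than guessing in a forced-choice ABX run is just a binomial question. The sketch below is my own illustration, not anyone's actual test software; the trial counts and scores are made up.)

```python
from math import comb

def abx_p_value(correct, trials, p_chance=0.5):
    """Probability of getting at least `correct` right answers out of
    `trials` ABX presentations by guessing alone (one-sided binomial)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical scores: a trained listener at 14/16 versus a casual listener at 10/16.
print(f"14/16 correct: p = {abx_p_value(14, 16):.4f}")  # ~0.002, very unlikely by chance
print(f"10/16 correct: p = {abx_p_value(10, 16):.4f}")  # ~0.23, consistent with guessing
```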
To achieve the same statistical confidence, you need about 100-300 consumers compared to 12 trained listeners. A panel of 300 consumers can cost up to $100k. If you can extrapolate the ratings of the trained listeners to the market segments, you save a lot of time and money.
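(To make that 12-versus-300 arithmetic concrete: required panel size grows with the square of the rating noise. The sketch below uses a textbook power rule, not Harman's actual calculation, and the noise figures are hypothetical, chosen only to show how a few times more variance in consumer ratings takes you from roughly a dozen trained listeners to a few hundred consumers.)

```python
import math

def required_n(sigma, delta, z_alpha=1.96, z_power=0.84):
    """Approximate listeners needed to resolve a mean preference gap
    `delta` when individual ratings have standard deviation `sigma`,
    using the standard z-test power rule: n ~ ((z_a + z_b) * sigma / delta)^2.
    Defaults correspond to alpha = 0.05 (two-sided) and 80% power."""
    return math.ceil(((z_alpha + z_power) * sigma / delta) ** 2)

# Hypothetical numbers: both groups must resolve a 1-point preference gap;
# trained listeners rate with ~1.2 points of noise, consumers with ~4-6.
print(required_n(sigma=1.2, delta=1.0))  # ~12 trained listeners
print(required_n(sigma=4.0, delta=1.0))  # ~126 consumers
print(required_n(sigma=6.0, delta=1.0))  # ~283 consumers
```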
I would add other mid-level Japanese companies to that list when it comes to building low-end products. They would, for example, test a boombox in their target market (college kids in a dorm) at large scale, and they even had targeted sound curves for different markets. Not all companies believe that trained listeners have the same preferences as untrained listeners, Bang & Olufsen being one of them...
Agreed, and the only counterexample I have is for video. With VC-1, we all had a preference for sharper images over soft and blurry ones. Unfortunately, at lower bit rates, where compression artifacts become extreme, a good number of people would take soft and blurry over a sharper picture with slightly more artifacts. So we lost a fair number of benchmarks against MPEG-4 AVC at Internet rates. Fortunately, that very factor helped us win a decisive victory in codecs for high-definition video (HD DVD and BD), where quality did matter. In the end, we made the scheme adaptive but still favored sharpness, which cost us some business on the general Internet.

I don't believe QDA is necessary for evaluation of audio components if your philosophy as an audio manufacturer is to accurately reproduce the art (the recording or musical event) as closely as possible...
Jack
In the absolute, we aren't sure of anything... We do, however, have a frame of reference. We use Logic, and if we were not to agree on it... end of discussion, end of a lot of things. So it is our Frame of Reference.
Nowhere but in High End Audio have I seen this tendency toward questioning Science in, of all places, an endeavor which relies on science and technology as much as music reproduction does. Science is at the very basis of our audio systems, yet we would like to refute it the second it contradicts some of our observations.
I am often reminded of an anecdote about Einstein boiling his eggs; he most likely thought of it as the rest of us do... 10 minutes hard, 2 minutes soft... Same for us... I, for one, didn't choose any of my components in blind tests; I chose based on how they sounded to me... But when a cable costs the same as an apartment in NYC (not yet, but we are getting there) and the manufacturer claims "Quantum Tunneling," our perceptions and the claims must be tested, and thoroughly. We should open ourselves to the real notion that our senses are not objective, that they can be fooled, but also that they can be trained toward objectivity; a separate discussion, by the way...
So I construe your argument, no offense intended, as nihilist in its essence... We certainly don't know it all, but we are able to know more and to move forward through the judicious use of science and, where it is lacking, through the use of our properly trained senses, to arrive at better audio systems...
Frantz
Myles
Again we go around. The point is as simple as this: why, when some knowledge is removed, do our "perceptions" change? This bias needs to be removed, and that is scientific... Science doesn't profess to know everything, but it aspires to know continuously, and it will be wrong sometimes... Adipose cells.
To answer the first part: I simply stated that the background of his test subjects is very important. He did select students from a music school in an example from another thread. He also groups respondents according to different criteria: age, sex, etc. This is normal, and if he did NOT, then perhaps one could say he was not doing his job. What YOU seemed to be leading to was that a trained or screened panel was useless anyway, since THEY have biases.
Thus we come to the crux of the matter and why it means squat to me as a consumer:
If I were to pass the selection of my equipment to a bunch of strangers in a blind test who have absolutely no idea of what I like and what I don't like, what do you think the likelihood is that they would end up choosing something I like? I just don't get it. We thumb our noses at reviewer apostles because they buy on the strength of someone else's recommendations rather than their own trust in themselves, yet you expect us to put these very personal decisions in the hands of others under the cloak of a scientific procedure? I am a consumer. It's my choice and no one else's. If I were a builder looking for a range of acceptable performance for as wide or narrow a base as possible, then yes, I would use DBTs, especially in the product prototype assessment phase. But I am not trying to please the folks in any given range; I'm pleasing myself, because it is I who will be using this, by myself, 80% of the time.
Myles, reading your post, it seems that you believe in some amount of science in your field of work when it comes to measuring results. Do you hold the same view of audio testing, namely that blind testing can generate useful results at least some of the time?