There isn't anything to prove with this example. And again, as I previously stated, don't forget the basics. Blind testing doesn't *prove* anything.
It is not me who has to remember that; I have it tattooed on my forehead. It is folks who think that one or two blind tests generate data for all people, all content, and all equipment. Sometimes this is true, but as you say, we lack the data to claim it with conviction.
IOW, since we can't always blind test, it is OK to rely on sighted evaluations. In my book that is a cop-out. In your recent discussion with Arny and A.J., amongst others, you stated you were in their camp. I don't think so. Instead, I see you playing it extremely close to the vest.
You can't chop off half of my statement and then run with it, Ron. It is like me saying you are a great lawyer but wear cheap clothes. Take out the first half and where would that leave you?
I said we use three things:
1. Measurements. We always measure. Measurement may give us too much data, or data that is more precise than we need, but it is wonderful in how fast and objective it is.
2. We use trained/expert listeners. These are people who are paid to be right; they don't have a job otherwise. It is like a test engineer in a company: he is not paid to say everything works right, he is paid to be critical. Yes, trained listeners can be wrong, and catastrophically so at times. But overall, they are far more right than wrong.
3. Blind testing. This is to guard against the misses of the trained listeners above, and also to gauge the general public. To give a video example, when we developed our video codec, we opted for higher resolution with slightly more artifacts rather than the other way around. General surveys showed that everyday viewers, unlike experts, preferred that trade-off.
The above is what everyone uses. Let me say it again: that is how the real world works. You can wish that the world runs on double-blind tests, but it doesn't. Those tests are too slow to set up and run, and in the end have their limitations due to test fixtures, budget, time, etc.
If a trained speaker evaluator came and told me XYZ speaker has distorted high frequencies, I would put high value on that. I would not dismiss it out of hand because he didn't run his test blind.
Now, if random Joe showed up here and said the same, I would take it for what it is worth: I would pencil it in, then go test it to see if he was right or wrong.
Let's take cables as an example. The audiophile world would love to see reliable, repeatable blind testing by Nordost, Transparent, etc., showing purported proof of a difference. As my friend dizzman repeatedly states, if any of these manufacturers could point to even one such reliable, repeatable positive finding, that would be a fantastic data point, would lead to even more sales, and would quiet this ongoing debate.
Isn't that the topic of this thread? The differences don't get any smaller than cables. So we need to settle the current debate before we go and ask them to run it.
BTW, have you run a cable test? It is not too hard. Find a source with dual outputs, run them to a preamp, and then compare. Level matching is not necessary, because if there is a level difference, the game is already over. If you get positive results, swap the cables, randomize, and try again. If you are worried about a sample of one, invite others to the test.
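For anyone wanting to keep the randomization honest, the bookkeeping is trivial to script. Here is a minimal sketch (the function names and the 10-trial count are my own illustration, not anything from this thread): a helper secretly picks which cable plays in each trial, and a scorer tallies the listener's written-down guesses afterward.

```python
import random

def make_schedule(n_trials=10, seed=None):
    """Secretly choose which cable ("A" or "B") plays in each trial.
    The schedule must stay hidden from the listener until scoring."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def score(schedule, guesses):
    """Count how many of the listener's guesses match the hidden schedule."""
    return sum(g == s for g, s in zip(guesses, schedule))

schedule = make_schedule(10, seed=42)
# A listener with perfect discrimination matches every trial:
print(score(schedule, schedule))  # 10
```

A pure guesser should land near 5 out of 10; it is scores well above that, repeated across runs, that make a result interesting.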
Absent this data, however, the audiophile world is left with "trust your ears," which implicitly means "don't trust your ears," and since there is purportedly nothing absolute in audio, for some, faith trumps science.
Why are you arguing this point of view with me, Ron? Does it look like I don't examine the design of a product? Its measurements? Who built it and what evaluations they did? At my company we carry only two lines of speakers: from the Harman group and Paradigm. Both believe in and conduct blind testing, together with the other two tools I mentioned.
What I am hearing from you, then, is: don't tell them subjective evaluation is good for anything. Well, I can't do that. International standards such as the AAC audio codec were developed using tons of subjective and sighted evaluations. Name a company that designs its products 100% using blind testing and nothing else, and I will change my position. Until then, I can't push the data further than it can be pushed.
Second, there is significant room for the end user to run blind testing. Its value definitely is not limited to manufacturers. While the end user may not be able to blind test speakers, he/she certainly can do this for other products.
I would love to see a thread where objectivists show up and outline all the blind tests they have run. Let's see how well those stand up to scrutiny. When I outlined a blind test to see if turning off the video circuits on an AVR or source can make a difference, these are the comments I received from the objectivist camp in another forum:
Me: "Here is a way to experiment with this idea. Does your AVR have a button to turn off the video circuits and front panel display? If so, hook up a CD/DVD player using S/PDIF coax cable. Play something quiet with lots of ambiance. Now turn up the volume good and loud (or else use headphones). Play it with the video and front panel circuits on (on both the source and AVR) and then turn them all off. Do you hear a difference?"
Poster: "This hardly looks like a proper listening test to me. If our correspondent is such an advocate of DBTs, why is he advocating such a crude sighted evaluation?"
Me: "I always think of cheap and simple exercises people can run at home to learn more about their equipment and the limits of their hearing. "
Poster: "Your cheap and simple exercise is a sales pitch in disguise. By giving you the benefit of the doubt and presuming that you simply don't know any better, I'll spare the other readers the ugliness of questions about your character."
Me: "Sales pitch for what? I asked them to test their own gear. There is nothing to buy. It is very likely that most of them don't hear any difference which is fine in my book."
Poster: "Most front panel on/off switches have discernible on and off positions. Thus the proposed experiment is likely to not be the least bit blind in actuality."
Me: "The front panel switches are momentary toggle switches: you push one, it does something; you push it again, it does the reverse. There is no mechanical feel to it, as the press is sampled by the microprocessor and acted upon. Are you thinking of 70s hi-fi gear?

But yes, if you can cheat by feeling the button, don't do that. You don't want to cheat yourself into believing something."
Poster: "No, it's a highly flawed evaluation and one that has a predictable outcome: people are falsely convinced of your mystical claims."
Me: "I suggested someone do a blind test and you are unhappy with me?

Anyway, you should not be concerned. Most people won't hear a difference, which will actually support your view. A select few will have an interesting realization."
Poster: "If family and friends are involved but visible, then you have a single blind test, which is simply a flawed double blind test."
Me: "Which is miles ahead of no test, which is what you are offering. And let me break the news: ALL audio tests are flawed. The good ones are simply less flawed than others. Period. Do we throw out everything they find as a result? I hope not; that would leave us with nothing. What we do instead is examine the test, decide how valid it is for us, and determine what from it can be used."
Another poster: "Sorry, I don't see anything that makes data gathered in such a way more meaningful than the random chatter seen all over the net. They are both subjective observations."
Me: "Let me understand this. You run your own randomized blind set of trials at home. You use your own ears to see if you can identify whether turning off some of the circuits in your equipment makes a difference. Let's say you do, and in all three instances you find that you prefer the circuits off. You can't possibly tell me you are not smarter about such things on that day than you were the day before.
Let me add that if you had run such a test and posted it here, my respect for anything you say would go up incredibly, even if it disputed the position I was taking. Indeed, my top goal in these discussions is to get people to go learn the science and experiment. It is the sharing of that data which moves our collective knowledge forward. Otherwise, it is a waste of time and words."
Poster: "You don't push toggle switches. You push push-button switches. You're not telling a consistent story....I see the bigger picture where you advise against doing proper listening tests, and have been making false claims all along about your own alleged blind tests. "
Me: "I have asked three times and I will ask again: in what way is a user harmed by blind testing his own equipment?"
Another poster: "This is not a proper test. To correct this problem, the choice presented to the test subject must be randomized, so they are comparing X with Y at each trial. But they don't know whether X is "processing on" or "processing off", and this relationship must be randomized for each trial. Then the number of trials can be chosen so the probability of picking "processing on" or "processing off" by guessing is negligibly small, not 0.5.
The problem this causes though, is the creation of a false impression that a given conclusion was reached by a properly designed experiment. I've seen people's posts where they claim to be able to distinguish two subtly different configurations in a blind test, but upon further examination one finds the probability of reaching this conclusion by guessing is 0.5, not negligibly small. So the potential harm of this approach is the spreading of misinformation under a guise of "scientific inquiry" - a common technique used by high-end audio vendors."
Me: "I am proposing textbook blind AB testing, nothing more, nothing less. The listener is unaware whether they are listening to the system as is, or with the modification (video turned off).
What is the #1 reason we use blind testing? To remove experimenter bias. "I know video circuits do extra stuff, so by definition they must worsen the sound." So I convince myself that when I hit the button, all of a sudden the sound gets better: the highs clearer, the mids more engaging, the soundstage wider, and the bass tighter.
Blind AB tests reduce the impact of such bias substantially. You don't know which state you are listening to, so it is hard to cheat. Don't believe me? Set up an AB test and tell me how you managed to cook up the results so that they come out the way you intended.
The choices are randomized in blind AB tests just the same. There is no hidden reference so we lose value in that, but randomization is there. I suggested multiple tries just the same.
There is no misinformation spread. The testing is done by a person for their own use. I have said this repeatedly and it keeps getting ignored. I was abundantly clear that you cannot take your results and expect to get published in AES or change the direction of these debates.
Once more, this was not put forth as the preferred scientific method by which a product should be evaluated. It was put forth as a method for members here to decide for themselves whether it is at all probable that turning off these circuits can make a difference. No more."
Me: "You do the blind test and time after time, no matter how many times you repeat it, the one that you like is with the circuit off. Question: which way would you leave your equipment? With the circuit off or on?
Second scenario: your car manual says to put premium gas in it. One day you decide to put regular in it. You drive it and it feels exactly the same. You do that 10 times in a row. Assuming I guarantee there is no damage to your car (and would buy it from you if there were), will you keep paying extra for premium fuel or go regular? Fingers crossed that the proverbial car analogy doesn't come back to haunt me."
Another Poster: "Well, this is a different question. The earlier discussion centered around whether the test was "proper" (my words, maybe "valid" is a better choice), and this question relates to whether one would use the results of an invalid experiment anyway, knowing full well it was invalid. I plan on using an HT pre/pro for an audio-only system (just for bass management and possibly room correction), and I will surely turn video processing off if I have that option, just in case it might make a difference. But that is my own personal choice, and I would not try to make claims about its effect in a forum." [Car analogy not answered]
Me: "I am not asking whether you would guess your way there. I asked: if you ran the experiment as suggested, would you be more inclined to follow its result? You are taking the test away and then saying you might do it anyway. I didn't ask you about that.
But it is interesting that, based on even less information than having run the test, you would follow that technique.
It seems to me there is such a fear that people would go run the test and post their outcome here. Why? How is that any different from random assertions that all modern gear sounds the same? How valid was it for Arny to say that? "
Another poster: "The test is bogus, so I would ignore the results. You can't have less than no information."
Me: "You can certainly do that. Here is how I view it. All else being equal, I like to have my front panel and video circuits on. The former lets me see what the device is doing, and the latter lets me feel good that when watching, say, a music video, my audio is not degraded by the existence of video.
So I performed the above test. I ran it on a number of SACD and CD players and found it to make a difference. So when using them as a pure audio source, I turn off the video and front panel. Some are considerate in that they turn their display back on when you use the remote. Others don't, and I put up with it.
I also tested two high-end DACs and couldn't tell the difference between the front panel on and off. So I leave them on and can tell what they are doing as a result. This is useful, for example, to confirm the sampling rate of the music you are playing through the PC.
So what you call less than no information, I find very useful in my hobby. I realize I can't convince you, and that is cool. I write for everyone, even though I am just answering you."
I will stop here. See how much fighting the objectivist camp put up to avoid bothering with a simple blind test at home? You want to say I don't belong in that camp? Nothing would please me more than not being associated with folks who say the above.
There can be no doubt that an end user can perform blind testing of cables, unless one's faith requires dismissing science.
And what if they find a difference as I did above? What was the reaction?
I asked you to show me a method that is more reliable, and you responded with only occasional blind testing. As I stated before, bias does not go on holiday. Neither does the self-delusion of thinking one is immune.
As I noted, measurements do take a holiday from bias. And trained listeners do so most of the time.