Well, Tim, the first step is knowing what the accepted standard set of controls is for reliable tests - something that was not demonstrated on this thread & is not typically known among the audiophiles who run blind tests. So their ignorance often results in an arrogance about the veracity of their results.
Indeed, it appeared to me that you yourself were not aware of BS1116 - so answer me honestly, Tim: were you aware of it & the standards within it?
Never heard of them before. But I've never run a blind listening test for the purpose of proving anything to anyone but myself, so I've never needed to look into telecommunications ABX testing methodologies.
Sure, Tim, the argument can continue over the weighting of each of the biases (& hence controls)
No need. I think we're about done here.
but that was the point I was making before - how do you know what influence the remaining biases have on the result? Without knowing this, you are left with an unreliable result. It seemed to me that you & others are fixated on the bias of knowing & consider it (& maybe a few others) to be the primary control for a reliable result.
Or maybe not. You still seem to be trying to invalidate anything that doesn't include everything, even though there is no agreement on what constitutes everything. You're having a lot of trouble letting this one go, John.
I beg to differ & ask: what is your evidence/reason for stopping at this small number of bias controls? The phrase I hear most often about blind tests is that blind tests are about bias removal - a patently untrue statement, as it stops at pretty much a few biases & doesn't consider what new biasing factors the test itself has introduced
I've never defined a stopping point, John, or identified a small number of controls. Nor have I said blind tests are all about bias removal. I don't think anyone has said that, actually. The blind part is, of course, about avoiding bias, but it indeed takes more than lack of knowledge to make a test.
I suggested a simple inclusion of positive & negative controls in blind tests.
Perhaps you did, but that's not what you and I have been debating. What we've been debating is so simple, and I've stated it so many times at this point, that it's amazing you're still arguing sidebars. You stated, unapologetically, unambiguously...somewhere in the mid hundreds by post count here...that without all the controls (and I believe at that time you were referring to JJ's summary of BS1116), an unsighted test was no better than no controls. You seemed to have backed off of that hard line of unreasoning, thankfully...or maybe not.
This has been suggested, not just by me, for a long time. Why do we never see such internal controls in any blind tests? They're relatively easy to implement.
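For what it's worth, the "easy to implement" claim can be illustrated with a minimal sketch: interleave hidden positive controls (a known-audible difference, e.g. a level offset) and negative controls (identical stimuli) among the real trials, then check the listener's control scores before trusting the real-trial result. Everything below (function names, trial counts, pass thresholds) is hypothetical, not taken from BS1116 or from anyone on this thread.

```python
import random

def build_trial_list(n_real=16, n_pos=4, n_neg=4, seed=None):
    """Mix real ABX trials with hidden controls in random order.

    - 'real':     the actual comparison under test
    - 'positive': a control with a known-audible difference
                  (e.g. a small level offset) the listener should catch
    - 'negative': both stimuli identical; scores should sit at chance
    """
    rng = random.Random(seed)
    trials = ['real'] * n_real + ['positive'] * n_pos + ['negative'] * n_neg
    rng.shuffle(trials)
    return trials

def validate_controls(results):
    """results: list of (kind, correct) tuples from a finished session.

    The real-trial score is only trusted if the controls behave:
    positives mostly detected, negatives near 50% (chance).
    Thresholds here are illustrative, not standardised.
    """
    def score(kind):
        hits = [ok for k, ok in results if k == kind]
        return sum(hits) / len(hits) if hits else None

    pos, neg = score('positive'), score('negative')
    return {
        'positive_rate': pos,   # want close to 1.0
        'negative_rate': neg,   # want close to 0.5
        'controls_ok': (pos is not None and pos >= 0.9
                        and neg is not None and abs(neg - 0.5) <= 0.25),
    }
```

The point of the sketch: if the positives go undetected, the setup can't resolve even an obvious difference; if the negatives score well above chance, something other than the stimuli is cueing the listener. Either way, the real-trial result is discarded rather than reported.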
Product development, marketing, pharma, and many other fields make extensive use of controls in blind studies. I'm sure Olive and Toole used controls in their studies at Canada's national research center and at Harman. Are you talking about hobbyists self-testing and reporting back on Internet forums? We can agree that those "tests," including the ones that started this thread, are interesting, but anecdotal.
Tim