Conclusive "Proof" that higher resolution audio sounds different

John

I would think you are aware this works both ways, re: your insistence on trying to find flaws in ABX while conveniently sidestepping the more serious, numerous and known flaws of sighted testing.

For the record, I don't believe that all DACs sound the same. Having compared a BADA DAC to a multitude of others, the results are clear enough for me: the BADA is a consistent winner to my ears.
Agreed, Frantz (OMG :))

I never said I was immune to this cognitive bias & I still hold to the view that ABX testing is good for revealing some sorts of differences, but long-term listening has strengths in revealing other differences not amenable to ABX tests. That has always been my point - it may have seemed that I was "trying to find flaws in ABX" simply to dismiss it, but I think if you look back over my posts on this, I'm trying to suggest that long-term listening has strengths in revealing things which I value more highly than ABX's strengths.

I was also mostly talking about forum-organised blind tests, not the ABX style of tests shown here, which are computer-proctored (at least this approach understands that it's a statistical test - blind-test get-togethers usually don't). I still think that certain positive & negative controls should be used for these ABX tests, as we don't know the capabilities of the equipment (including the ears) of the person doing the test solo, the environmental noise, etc.

I have always said that the lack of controls in blind tests is their biggest flaw, as it fails to address the "testing of the test" - it's the equivalent of doing measurements with unknown &/or uncalibrated equipment.
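
As a rough illustration of what such controls could look like in practice - a minimal sketch of my own, not anything used in the tests discussed here; the file names and the 0.5 dB offset are assumptions - one could slip a negative control (a bit-identical copy, which should ABX at chance) and a positive control (a copy with a small, known-audible level offset) in amongst the real comparisons:

import shutil
import soundfile as sf

src = "dac_A.wav"  # hypothetical reference recording

# Negative control: a bit-identical copy - any "difference" heard here flags a
# problem with the listener, the rig or the environment.
shutil.copyfile(src, "control_negative.wav")

# Positive control: the same audio with a deliberate +0.5 dB gain applied
# (watch for clipping if the source is already near full scale).
audio, rate = sf.read(src)
sf.write("control_positive.wav", audio * 10 ** (0.5 / 20), rate)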

The results from blind tests I see organised in forums are simply anecdotes - just the same as sighted subjectivist results - & I see no reason to value one set of results more highly than the other.

When positive ABX results show up, as in Amir's results, they need to be paid attention to (& investigated further), as the skew in all the blind tests I see in forums is towards returning a null result; finding a positive result means this skew has been overcome (so the positive results are even more noteworthy).
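
To put a rough number on that skew - a small worked example of my own, assuming a 16-trial run and a listener who genuinely hears the difference on 70% of trials (both figures are assumptions for illustration):

from scipy.stats import binom

n = 16          # trials in the ABX run
alpha = 0.05    # significance threshold
p_true = 0.7    # assumed probability the listener answers correctly

# Smallest score that reaches p <= alpha under pure guessing (p = 0.5)
k_crit = next(k for k in range(n + 1) if binom.sf(k - 1, n, 0.5) <= alpha)

# Probability that the 70% listener actually reaches that score
power = binom.sf(k_crit - 1, n, p_true)
print(f"Need at least {k_crit}/{n} correct; chance of getting there: {power:.0%}")

Under those assumptions the required score is 12/16 and the listener only clears it roughly 45% of the time - in other words, someone who really does hear a difference still fails such a run more often than not, which is the skew towards null results described above.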

Edit: Oh, I did find something further on which I disagree with you, phew :) Apart from the above, I don't agree that "the more serious, numerous and known flaws of sighted testing" is an accurate statement. In terms of audibility, I believe negative bias (& probably other biases) is just as strong an influence as sightedness. I haven't seen any evidence that compares how strongly the various biases affect our perception of audibility.
 
Agreed, Frantz (OMG :))

I never said I was immune to this cognitive bias & I still hold to the view that ABX testing is good for revealing some sorts of differences, but long-term listening has strengths in revealing other differences not amenable to ABX tests. That has always been my point - it may have seemed that I was "trying to find flaws in ABX" simply to dismiss it, but I think if you look back over my posts on this, I'm trying to suggest that long-term listening has strengths in revealing things which I value more highly than ABX's strengths.

snippage........................

I believe you also commented that you hold your opinion of the merits of long-term listening without evidence. See, that is the difference: evidence that can be obtained in something approaching an objective manner. Throw out evidence and everything becomes possible.

I agree; you seem to be going the long way around to nitpick blind testing without owning up to its advantages. Blind testing isn't nice, fun or easy. It reminds me of a Churchill quote, if I may paraphrase it: blind testing is the worst form of audio listening test... other than all the other forms.

You are taking the same approach with reasons one can't test DACs effectively. You don't have to test all possible variations or answer every conceivable concern. Start with showing that any two DACs with flat or identical frequency response, low noise and low distortion can be ABX'd as audibly different. You can go on from there. We know, within reasonably well-defined limits, at what point frequency response deviations, distortion and noise become audible. So show that something still sounds perceptibly different when those are below the known guidelines.
 
I believe you also commented that you hold your opinion of the merits of long-term listening without evidence. See, that is the difference: evidence that can be obtained in something approaching an objective manner. Throw out evidence and everything becomes possible.
You see, that is where I believe the mistake is made in this quote of yours: "evidence that can be obtained in something approaching an objective manner". My position has always been that blind tests without suitable controls & without dealing with the many influencing factors are no more nor less "objective" than sighted tests.

I agree; you seem to be going the long way around to nitpick blind testing without owning up to its advantages. Blind testing isn't nice, fun or easy. It reminds me of a Churchill quote, if I may paraphrase it: blind testing is the worst form of audio listening test... other than all the other forms.
Properly conducted, blind tests have HUGE advantages - you & others seem to be missing this point. But show me one forum-organised blind test which you would consider "properly conducted"?

You are taking the same approach with reasons one can't test DACs effectively. You don't have to test all possible variations or answer every conceivable concern. Start with showing that any two DACs with flat or identical frequency response, low noise and low distortion can be ABX'd as audibly different. You can go on from there. We know, within reasonably well-defined limits, at what point frequency response deviations, distortion and noise become audible. So show that something still sounds perceptibly different when those are below the known guidelines.
I just linked to a Head-Fi thread that showed positive ABX results for a number of recordings made from the outputs of 6 DACs playing back the same track. Not the best test in the world, I know, as it uses an E-MU card to record each DAC's output, but if it gives positive ABX results which show that (statistically) there are differences between the recorded files, then I guess it is working? Unless someone spots a flaw in the test or can explain the results in some way other than audible differences in the outputs of the DACs?

So for those not willing to look into that link, let me copy the ABX results here:
foo_abx 1.3.4 report
foobar2000 v1.2.2
2013/02/10 19:13:28

File A: F:\abx\D.flac
File B: F:\abx\E.flac

19:13:28 : Test started.
19:15:31 : 00/01 100.0%
19:15:44 : 00/02 100.0%
19:17:04 : 01/03 87.5%
19:17:24 : 02/04 68.8%
19:18:08 : 03/05 50.0%
19:18:45 : 04/06 34.4%
19:19:46 : 05/07 22.7%
19:27:40 : 06/08 14.5%
19:30:38 : 06/09 25.4%
19:30:59 : 07/10 17.2%
19:32:23 : 08/11 11.3%
19:34:06 : 09/12 7.3%
19:35:38 : 10/13 4.6%
19:41:25 : 11/14 2.9%
19:42:09 : 12/15 1.8%
19:42:51 : Test finished.

----------
Total: 12/15 (1.8%)

And another positive result:
foo_abx 1.3.4 report
foobar2000 v1.1.18
2013/02/04 01:44:35

File A: C:\Users\Michael\Desktop\C.flac
File B: C:\Users\Michael\Desktop\G.flac

01:44:35 : Test started.
01:47:23 : 00/01 100.0%
01:47:33 : 01/02 75.0%
01:47:40 : 02/03 50.0%
01:48:11 : 03/04 31.3%
01:48:27 : 04/05 18.8%
01:49:05 : 05/06 10.9%
01:49:19 : 05/07 22.7%
01:49:56 : 05/08 36.3%
01:50:24 : 06/09 25.4%
01:50:46 : 07/10 17.2%
01:51:03 : 08/11 11.3%
01:51:11 : 09/12 7.3%
01:51:39 : 10/13 4.6%
01:51:54 : 11/14 2.9%
01:52:11 : 12/15 1.8%
01:52:40 : 13/16 1.1%
01:52:47 : Test finished.
Total: 13/16 (1.1%)

Do these test results not show that there is a very good statistical likelihood that the recorded output of the DAC that produced D.flac is audibly different from that of the DAC that produced E.flac? Similarly for the other DACs that produced the pair of recorded output files G.flac & C.flac.
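
For what it's worth, the percentages foo_abx reports are one-sided binomial p-values - the probability of scoring at least that well by guessing - and the two totals above can be reproduced in a few lines (a quick check, nothing more):

from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` right out of `trials` when guessing at 50/50)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"12/15: p = {abx_p_value(12, 15):.1%}")  # ~1.8%, matching the first log
print(f"13/16: p = {abx_p_value(13, 16):.1%}")  # ~1.1%, matching the second log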
 
Agreed, Frantz (OMG :))

I never said I was immune to this cognitive bias & I still hold to the view that ABX testing is good for revealing some sorts of differences, but long-term listening has strengths in revealing other differences not amenable to ABX tests. That has always been my point - it may have seemed that I was "trying to find flaws in ABX" simply to dismiss it, but I think if you look back over my posts on this, I'm trying to suggest that long-term listening has strengths in revealing things which I value more highly than ABX's strengths.

I was also mostly talking about forum-organised blind tests, not the ABX style of tests shown here, which are computer-proctored (at least this approach understands that it's a statistical test - blind-test get-togethers usually don't). I still think that certain positive & negative controls should be used for these ABX tests, as we don't know the capabilities of the equipment (including the ears) of the person doing the test solo, the environmental noise, etc.

I have always said that the lack of controls in blind tests is their biggest flaw, as it fails to address the "testing of the test" - it's the equivalent of doing measurements with unknown &/or uncalibrated equipment.

The results from blind tests I see organised in forums are simply anecdotes - just the same as sighted subjectivist results - & I see no reason to value one set of results more highly than the other.

When positive ABX results show up, as in Amir's results, they need to be paid attention to (& investigated further), as the skew in all the blind tests I see in forums is towards returning a null result; finding a positive result means this skew has been overcome (so the positive results are even more noteworthy).

Edit: Oh, I did find something further on which I disagree with you, phew :) Apart from the above, I don't agree that "the more serious, numerous and known flaws of sighted testing" is an accurate statement. In terms of audibility, I believe negative bias (& probably other biases) is just as strong an influence as sightedness. I haven't seen any evidence that compares how strongly the various biases affect our perception of audibility.

Very good summary, but the main and very true points you have posted will be quickly forgotten.

We must remember that 99.99% of the people in WBF have strong opinions about sound quality and setting up systems, but curiously these 99.99% were never able to describe a proper blind test in which they participated. BTW, I am sure that someone will ask me to show real data to prove it is 99.99% ... ;)
 
45 pages, JK still posting away, and everyone insisting that DACs sound different - unless he's an objectivist, in which case they all sound the same.

As I posted about fifteen pages back, JK wouldn't be able to hear the difference between an iPhone and something "audiophile" if he were ABX'd, and that'd be the end of the discussion. Some chance! Everyone is dug in too deep to their religion.

However, the point that's being missed is that the differences, should they exist, are tiny and only audible to those who know what to listen for and who still have good HF sensitivity. This is because, when they are this subtle, they're only audible as slight changes to the top end and very low-level detail.

In our experience, once they measure to about 16-bit linearity - and even the cheapest do - people struggle to hear any difference, so unless you're the bloke who has to remove the paint from his Ferrari because it'll go a bit faster, it's much easier to switch, as most of the market has, to headphones or to a decent pair of speakers.

We make actives, very good ones, and they have around 300 times better damping than passives, because the amps don't disconnect from the drivers and lose control progressively, and because we can use much steeper active filters, so the crossovers aren't audible. You don't get horrid driver overlap through the critical mid band and the bass doesn't boom. (No boom and no tizz.)

We have a single active speaker in the lab; its crossover is a PC-controlled DSP evaluation package, which means we can change filter slopes from 2nd to 4th to 8th order while listening to pink noise. We can move the crossover up and down and switch either driver off until we find the point at which they both sound the same, and also hear how far you have to move away from the intersection before the effect of crossing over is inaudible. It's about two octaves with 2nd order - heaven knows how bad 1st is - but we're using 4th and need to use 8th for it to be completely inaudible. Driver overlap and lack of control or damping in passive speakers is horrendous. If you lot heard it, you'd never listen to another; you'd switch to headphones or better actives in a heartbeat. But you're audiophiles and must worry about DACs sounding different. IMO it's like rearranging deck chairs on the Titanic. ;)
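
For a rough feel for those slopes - a sketch of my own, assuming ideal Butterworth low-pass sections, a 2 kHz crossover and an arbitrary -40 dB "out of the way" threshold; real drivers and acoustic summation are far messier:

import numpy as np
from scipy import signal

fc = 2_000.0        # assumed crossover frequency, Hz
target_db = -40.0   # assumed threshold for the driver being "out of the way"

for order in (1, 2, 4, 8):
    # Ideal analogue Butterworth low-pass centred on the crossover frequency
    b, a = signal.butter(order, 2 * np.pi * fc, btype="low", analog=True)
    freqs = fc * 2 ** np.linspace(0, 8, 4000)   # up to 8 octaves above fc
    _, h = signal.freqs(b, a, worN=2 * np.pi * freqs)
    db = 20 * np.log10(np.abs(h))
    octaves = np.log2(freqs[np.argmax(db <= target_db)] / fc)
    print(f"order {order}: reaches {target_db:.0f} dB about {octaves:.1f} octaves past fc")

The asymptotic slope is roughly 6 dB per octave per order, which is why an 8th-order section is out of the way within an octave while a 2nd-order one is still contributing several octaves past the crossover point.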
 
Very good summary, but the main and very true points you have posted will be quickly forgotten.
Perhaps they will be forgotten, but I believe one of Amir's objectives for this thread is that it acts as a placemarker to point to when accosted by the usual post that goes along the lines of "it's not possible, because in all these years not a single blind test has shown that X is audibly different to Y" (substitute X & Y with amplifiers, high-res, DACs or whatever).

We must remember that 99.99% of the people in WBF have strong opinions about sound quality and setting up systems, but curiously these 99.99% were never able to describe a proper blind test in which they participated. BTW, I am sure that someone will ask me to show real data to prove it is 99.99% ... ;)
Indeed, & this point, it seems, continues to be rejected!!
 
45 pages, JK still posting away, and everyone insisting that DACs sound different - unless he's an objectivist, in which case they all sound the same.

As I posted about fifteen pages back, JK wouldn't be able to hear the difference between an iPhone and something "audiophile" if he were ABX'd, and that'd be the end of the discussion. Some chance! Everyone is dug in too deep to their religion.
I won't get into your ad hom again, Ashley

However, the point that's being missed is that the differences, should they exist, are tiny and only audible to those who know what to listen for and who still have good HF sensitivity. This is because, when they are this subtle, they're only audible as slight changes to the top end and very low-level detail.
You were already told by Amir that this is not the case, as his results prove - no particular HF acuity in his hearing, according to the tests done.

As to the differences being small, which you now seem to be admitting - we already covered that ground. As was also pointed out by Amir, the name of this forum should give a clue to its objectives: what's best doesn't mean what is acceptable to most.
 
(...)

For the record, I don't believe that all DACs sound the same. Having compared a BADA DAC to a multitude of others, the results are clear enough for me: the BADA is a consistent winner to my ears.

I have found that my (sighted) preferred DAC depends a lot on the system it will be inserted into, even with the top expensive ones. Was the BADA a consistent winner every time to your ears, or to your ears and eyes? ;)
 
You see, that is where I believe the mistake is made in this quote of yours: "evidence that can be obtained in something approaching an objective manner". My position has always been that blind tests without suitable controls & without dealing with the many influencing factors are no more nor less "objective" than sighted tests.


Is it possible to do a sloppy blind test? Sure, but a blind test done with the same lack of care as a sighted one is still likely to be better, if for no other reason than that it removes one influencing factor (and a very large one at that): sightedness. Further, as you improve the blind-testing parameters, blind tests continue to show differences well past the point where sighted tests are useless, as far as we have evidence. Your bottom-line position is still one of nitpicking blind testing, favoring long-term listening for some things, and doing so without evidence. Go collect some evidence.
 
Why one would choose to judge a movie by its trailer versus watching the entire movie is beyond me.
 
I just linked to a Head-Fi thread that showed positive ABX results for a number of recordings made from the outputs of 6 DACs playing back the same track. Not the best test in the world, I know, as it uses an E-MU card to record each DAC's output, but if it gives positive ABX results which show that (statistically) there are differences between the recorded files, then I guess it is working? Unless someone spots a flaw in the test or can explain the results in some way other than audible differences in the outputs of the DACs?

Do these test results not show that there is a very good statistical likelihood that the recorded output of the DAC that produced D.flac is audibly different from that of the DAC that produced E.flac? Similarly for the other DACs that produced the pair of recorded output files G.flac & C.flac.

Yes, the DACs' recorded outputs sounded different. It was determined that in one case there was a level difference of 0.23 dB, which is known to be audible. The other I missed when looking through the thread specifically, but I believe it was from one device in the test that had audible levels of non-flat frequency response.
 
I believe you also commented that you hold your opinion of the merits of long-term listening without evidence. See, that is the difference: evidence that can be obtained in something approaching an objective manner. Throw out evidence and everything becomes possible.

I agree; you seem to be going the long way around to nitpick blind testing without owning up to its advantages. Blind testing isn't nice, fun or easy. It reminds me of a Churchill quote, if I may paraphrase it: blind testing is the worst form of audio listening test... other than all the other forms.

You are taking the same approach with reasons one can't test DACs effectively. You don't have to test all possible variations or answer every conceivable concern. Start with showing that any two DACs with flat or identical frequency response, low noise and low distortion can be ABX'd as audibly different. You can go on from there. We know, within reasonably well-defined limits, at what point frequency response deviations, distortion and noise become audible. So show that something still sounds perceptibly different when those are below the known guidelines.
Going with flat response removes all minimum-phase/slow-rolloff DACs (or those filter options), meaning you're forcing a decision to focus on the frequency domain, and potentially stopband rejection, over DACs with better time-domain behaviour.
That is just one example of why it really is not possible to compare DACs unless you cripple the selection of architectures on what may be audible differences beyond FR; as I mentioned, tbh there is no ideal real-world filter for audio DACs.
Of course this can be overcome by using true native hi-res files (at 96 kHz), which means a minimum-phase/slow-rolloff filter is technically flat up to around 30 kHz in many instances (but not all), but then this raises my point about validating the hi-res files.

This type of DAC test is not easy to do, tbh, because as I mentioned you're either going to be accused of skewing the results or of crippling them with the wrong scope/context.
Edit:
Actually, still looking at some of the minimum-phase-type filters, there are some implementations that would not be flat up to 20 kHz even with 96 kHz sampled data.
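
To illustrate the trade-off - a sketch of my own with made-up tap counts and cutoffs chosen only to show the shape of the argument, not any real DAC's reconstruction filter:

import numpy as np
from scipy import signal

def gain_db(taps, freq_hz, fs):
    """FIR magnitude response at a single frequency, in dB."""
    _, h = signal.freqz(taps, worN=[freq_hz], fs=fs)
    return 20 * np.log10(np.abs(h[0]))

for fs in (44_100, 96_000):
    sharp = signal.firwin(511, cutoff=0.48 * fs, fs=fs)  # steep, close to Nyquist
    slow = signal.firwin(15, cutoff=0.45 * fs, fs=fs)    # gentle "slow roll-off"
    print(f"fs = {fs} Hz: sharp = {gain_db(sharp, 20_000, fs):+.1f} dB at 20 kHz, "
          f"slow = {gain_db(slow, 20_000, fs):+.1f} dB at 20 kHz")

With these illustrative numbers the gentle filter is already several dB down at 20 kHz when running at 44.1 kHz but essentially flat there at 96 kHz, whereas the steep filter is flat in both cases - which is the point about slow-rolloff filters and hi-res sample rates made above.
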
Cheers
Orb
 
Is it possible to do a sloppy blind test?
Hold on, you should rephrase this to "is it possible to do a proper blind test?" & then show one such properly conducted blind test organised on this or any other audio forum (barring these ABX tests).
Sure, but a blind test done with the same lack of care as a sighted one is still likely to be better, if for no other reason than that it removes one influencing factor (and a very large one at that): sightedness.
But you keep telling us that sightedness is such a strong bias, yet you present no evidence for this. I would contend that sightedness is just a form of expectation bias, & this is not exclusively limited to sightedness. Negative expectation bias has just as strong an effect on our perceptions, if not stronger. This & other biases/variables are some of the reasons why internal positive & negative controls are needed in blind tests. The whole point is that we are not rational thought machines. To be able to measure our perceptions somewhat accurately, we have to exercise control over all the many factors that are known to influence our auditory perception. Just dealing with one of these factors seems to me to be self delusion - I don't believe that there is a linear relationship between influencing factors & perception. AFAIK, you can't say that removing influence X improves our auditory perception by amount Y unless you have some evidence that shows this. You know, the list of how to do correct blind tests is not a sliding scale of correctness - meet 3 of the 12 requirements & your test is 25% more accurate/correct, meet 6 of the 12 & it's 50% - NO, it's all or nothing, pass or fail: a correct test from which some conclusions can be drawn, or an incorrect one from which NO conclusions can be drawn.
Further, as you improve the blind-testing parameters, blind tests continue to show differences well past the point where sighted tests are useless, as far as we have evidence.
Yes well, if you show us this evidence, I'm sure it will go a long way towards making your case?
Your bottom-line position is still one of nitpicking blind testing, favoring long-term listening for some things, and doing so without evidence. Go collect some evidence.
It's my opinion, based on my experience. What I have is anecdotal evidence from that experience, but I haven't seen any evidence for your contrary opinion.
 
Yes, the DACs' recorded outputs sounded different. It was determined that in one case there was a level difference of 0.23 dB, which is known to be audible.
Yes, it seems that "iPod is louder than the original by 0.23 dB on the right channel, and by 0.07 dB on the left channel." But this difference seems to come from the iPod's output & not to be a fault of the recording, so QED there are measurable & audible differences between these DACs. Please don't selectively quote short sections of a post & then use them to try to support your point (incorrectly, as it happens) - there were other differences measured between the DACs' outputs, some reckoned to be audible & some probably not. These differences are:
"Clip+ has a more rolled-off top octave, and a very slight FR ripple."
"Galaxy Nexus - relatively large and obvious FR errors, perhaps the easiest file to identify"
"ODAC - there is a slow high frequency roll-off that begins quite early. It also does not have as bad pitch accuracy as the various portable players."​
And his conclusions on these measurements
"Some interesting observations:
- not all files seem to be accurately level matched (some DAPs in particular are slightly louder than the source file)
- portable players can have a pitch error as high as 500 ppm, however, at less than 1 cent, this is still not audible
- the E-Mu 0204 used for the recording apparently has a slight low frequency roll-off on the left channel, but not on the right channel; this is odd, even if not necessarily audible
- one of the devices inverts the phase of the output; I do not tell which one, because it would spoil (partly) the test"​

The other I missed when looking through the thread specifically, but I believe it was from one device in the test that had audible levels of non-flat frequency response.
But again, is your point that it is the DAC doing this, or something else? If it's the DAC, then yes, they are different - QED, as I already said. If it's not the DAC, what is your explanation of these differences between DACs?
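
For what it's worth, a level offset like the 0.23 dB mentioned above is easy to pull out of the recorded files themselves - a minimal sketch, using the D/E pair from the log above purely as placeholder file names:

import numpy as np
import soundfile as sf

a, rate_a = sf.read("D.flac")
b, rate_b = sf.read("E.flac")
assert rate_a == rate_b, "recordings must share a sample rate"

def rms_db(x):
    """Per-channel RMS level in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x), axis=0)))

diff = np.atleast_1d(rms_db(b) - rms_db(a))
for channel, d in zip(("left", "right"), diff):  # channel order assumed
    print(f"{channel}: E.flac is {d:+.2f} dB relative to D.flac")

Blind-test protocols typically ask for levels to be matched to within about 0.1 dB precisely so that offsets like this, rather than the converters themselves, don't decide the outcome.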
 
Yes, it seems that "iPod is louder than the original by 0.23 dB on the right channel, and by 0.07 dB on the left channel." But this difference seems to come from the iPod's output & not to be a fault of the recording, so QED there are measurable & audible differences between these DACs. Please don't selectively quote short sections of a post & then use them to try to support your point (incorrectly, as it happens) - there were other differences measured between the DACs' outputs, some reckoned to be audible & some probably not. These differences are:
"Clip+ has a more rolled-off top octave, and a very slight FR ripple."
"Galaxy Nexus - relatively large and obvious FR errors, perhaps the easiest file to identify"
"ODAC - there is a slow high frequency roll-off that begins quite early. It also does not have as bad pitch accuracy as the various portable players."​
And his conclusions on these measurements
"Some interesting observations:
- not all files seem to be accurately level matched (some DAPs in particular are slightly louder than the source file)
- portable players can have a pitch error as high as 500 ppm, however, at less than 1 cent, this is still not audible
- the E-Mu 0204 used for the recording apparently has a slight low frequency roll-off on the left channel, but not on the right channel; this is odd, even if not necessarily audible
- one of the devices inverts the phase of the output; I do not tell which one, because it would spoil (partly) the test"​

But again, is your point that it is the DAC doing this, or something else? If it's the DAC, then yes, they are different - QED, as I already said. If it's not the DAC, what is your explanation of these differences between DACs?

It is becoming somewhat entertaining to see your frenzied reaction. Sorry for the stress, as that was not my intention.

I don't think level differences being audible is big news, nor are large enough frequency-response differences. If that is it, then there is not much to see here.
 
It is becoming somewhat entertaining to see your frenzied reaction. Sorry for the stress, as that was not my intention.

I don't think level differences being audible is big news, nor are large enough frequency-response differences. If that is it, then there is not much to see here.

No, of course not - level, frequency or timing differences between DACs are not big news to many here. Most of us already knew this from our long-term listening experiences too, so it's doubly unsurprising to us. You may know that not all DACs sound the same, so if that's the case I wouldn't expect it to be of any surprise to you either!

There are some, however, who maintain that all DACs sound the same, & to these it may be the only acceptable evidence. I posted these ABX results here simply in reply to Maxflinn's question in post 435: "So has anyone ever successfully differentiated two (or more) DACs in a blind test before?"
 
But you keep telling us that sightedness is such a strong bias, yet you present no evidence for this. I would contend that sightedness is just a form of expectation bias, & this is not exclusively limited to sightedness. Negative expectation bias has just as strong an effect on our perceptions, if not stronger.

You do understand, I hope, that "sighted" doesn't mean the participants have the ability to see? Removing "sight" - the knowledge of what resolution, cable, component, etc. is playing when - removes a whole collection of possible biases, both negative and positive. When you don't know which DAC you're listening to when, you don't know who designed it, what brand is on it, what kind of reputation it has in the audiophile community, how many boxes of what girth or weight or beauty it is contained in, or whether it has blue lights, green lights or no lights at all and whether they are bouncing and changing and indicating something.

Do you need evidence that removing all these potential biases will yield more objective results? Really? You don't think human opinion is influenced by look, feel, reputation and peer opinion? This is exactly the kind of thing I'm referring to, John, when I say just a bit of logic will do.

To be able to measure our perceptions somewhat accurately, we have to exercise control over all the many factors that are known to influence our auditory perception.

I just named seven powerful sources of bias that are removed when knowledge (sight) is removed, without any complex controls and protocols required. Can you name seven potential sources of bias that are removed by adding knowledge (sight)?

Just dealing with one of these factors seems to me to be self delusion

Well, given that removing sight removes a whole collection of biases, even without controls, you can stop worrying about that one.

Tim
 
You are taking the same approach with reasons one can't test DACs effectively. You don't have to test all possible variations or answer every conceivable concern. Start with showing that any two DACs with flat or identical frequency response, low noise and low distortion can be ABX'd as audibly different. You can go on from there. We know, within reasonably well-defined limits, at what point frequency response deviations, distortion and noise become audible. So show that something still sounds perceptibly different when those are below the known guidelines.
I know this argument - after all the further stipulations necessary to make your statement work, it boils down to the shortened version "all competently designed DACs sound the same".
 
I know this argument - after all the further stipulations necessary to make your statement work, it boils down to the shortened version "all competently designed DACs sound the same".

BINGO!


If flat, low-distortion response and adequately low noise are maintained, they should. If they differ in those, then we know at least one reason they sound different that has nothing to do with any esoteric method of operation.
 
You do understand, I hope, that "sighted" doesn't mean the participants have the ability to see? Removing "sight" - the knowledge of what resolution, cable, component, etc. is playing when - removes a whole collection of possible biases, both negative and positive. When you don't know which DAC you're listening to when, you don't know who designed it, what brand is on it, what kind of reputation it has in the audiophile community, how many boxes of what girth or weight or beauty it is contained in, or whether it has blue lights, green lights or no lights at all and whether they are bouncing and changing and indicating something.
Yes, of course, but why stop there? When you don't know what you are testing - transport, PC, DACs, amps, cables, speakers - then you remove more possible biases, right?

Do you need evidence that removing all these potential biases will yield more objective results? Really?
You are simplifying & incorrectly stating what I posted & then objecting to it. I said that it's all or nothing - remove all the biases before you even dream of calling it an objective result. There is no such thing as a "more objective" result. A result is either objective or it isn't.
You don't think human opinion is influenced by look, feel, reputation and peer opinion? This is exactly the kind of thing I'm referring to, John, when I say just a bit of logic will do.
Yes, but you didn't notice the bias that still remained - the one that I just pointed out above. So when running your blind test you would have left this bias in place. Without internal controls this wouldn't have been picked up, so the guy who thinks all DACs sound the same will not hear any differences if he knows it's DACs that are being tested. However, tell him it's speakers that are being tested & he may well hear differences? Which is the correct result? So, if I know what I'm testing blind, I still bring knowledge & preconceived notions about the test to the table.
I don't know why there seems to be such a great reluctance to accept J_J's listing of some of the parameters necessary to conduct valid blind tests. It seems people don't understand that this is NOT a menu from which to choose which parameters to implement - it is the checklist needed to validate the test.


I just named seven powerful sources of bias that are removed when knowledge (sight) is removed, without any complex controls and protocols required. Can you name seven potential sources of bias that are removed by adding knowledge (sight)?

Well, given that removing sight removes a whole collection of biases, even without controls, you can stop worrying about that one.

Tim
Tim, I really don't know what you mean here, but I think I've already covered it above - it's not a menu to choose from, it's a checklist for validity.
 
