Objectivists, Harman Testing, Reviewers, and Reality

MylesBAstor

Well-Known Member
Apr 20, 2010
New York City
The reality I am referring to is nothing more than the LISTENING reality, and again, no panel I have ever heard or owned sounds like that. I also use (and post) my own measurements, but at the end of the day, reality to me is NOT graphs and measurements, and that is the basis of our differences; measurements help me figure out balance, they serve as the BASIS, and nothing more. The fact that you rated the ML a '4' simply reveals your preferences, and that is reality for you - nothing wrong with that.

I never denigrated anything or anyone - and certainly "screwed up" are your words, not mine. Are we at polar opposites? Absolutely. Does anything you have written on this subject mean anything to me? Very little. You espouse "research" that comes across as so thin that no room for convergence exists between us, and we should leave it at that. But no, I don't think you are "screwed up"; I am simply unimpressed, and I do feel we hear completely differently - feel free to say the same of me. I chose to use "intense" language because of the firmness of your positions, and I make no apologies.

I have spent 13 years now with my current MLs (and 6 more before them with other MLs, plus 20+ years with Magnepans in my second, lesser system), and many an amp has come and gone - apart from the 400RS, no other has been able to drive the MLs correctly, though the same amps have had no issues driving much lighter loads. I could go into details about those amps' sound when driving difficult loads... I have also posted in the past that all these bygone amps shut down when driving even harsher loads like the ML Summit and Summit X (impedance dropping to ~0.7 ohms) at moderately high levels. So do I think Proceed amps are unworthy of listening comparisons at Harman or elsewhere, if we are to call this "research" in which all parameters are supposed to be optimized? Absolutely, and feel free to disagree; but I remain unimpressed with Harman and anything written about their work. I take their "research" as yet another data point, and I throw it out the window. I expect you to do the same with what I just said.

@esldude: Just click my system link on the bottom to see the current graphs (post #1); 1/3 octave, and I intend to get higher "resolution" at some point

Problem is YOU need God's chosen speaker.
 

Phelonious Ponk

New Member
Jun 30, 2010
You guys keep coming back to the weakness and/or incompleteness of measurements, your same old argument. You're trying to look the other way, ignore what's different here -- measurements repeatedly correlated with preference, in dramatic numbers, over many years of well-controlled listening tests. It blows a huge hole in the subjectivists' last line of defense (trust your ears, measurements don't matter) and probably many more sacred cows than we currently imagine. I'm not sure what else you could do, though. It's either denial or change your minds. And we know that's not going to happen.

Tim
 

Phelonious Ponk

New Member
Jun 30, 2010
Seriously. Can I read your reviews online somewhere?
 

microstrip

VIP/Donor
May 30, 2010
Portugal
You guys keep coming back to the weakness and/or incompleteness of measurements, your same old argument. You're trying to look the other way, ignore what's different here -- measurements repeatedly correlated with preference, in dramatic numbers, over many years of well-controlled listening tests. It blows a huge hole in the subjectivists' last line of defense (trust your ears, measurements don't matter) and probably many more sacred cows than we currently imagine. I'm not sure what else you could do, though. It's either denial or change your minds. And we know that's not going to happen.

Tim

Tim,

It seems you misunderstand the main point. The measurements were correlated with some preferences - preferences shaped by the listening conditions, the recordings, the relevance of the subjects chosen for analysis, even by the organization of the scorecards people had to fill in. The question is not the science involved in the correlation or the measurements - the Harman experts know their business! The big problem is that the areas of preference of many audiophiles are not given enough weight in this brilliant work. That is why those areas are treated by some objectivists as poetry, and in other very unkind, even insulting, terms. IMHO they wisely want to devalue these aspects because they cannot include them in their work.

Another issue is that we are mainly shown selected summaries and marketing pictures from the tip of the iceberg. F. Toole is much more than Harman, and there are many other scholars and knowledgeable people in the audio business - B&W were showing pictures of the giant anechoic chamber they used in their advertisements back in the '70s.

BTW, did you guess who keeps the copyright about my recent quotes on audiophile cables and active speakers?
 

Alan Sircom

[Industry Expert]/Member Sponsor
Aug 11, 2010
You guys keep coming back to the weakness and/or incompleteness of measurements, your same old argument. You're trying to look the other way, ignore what's different here -- measurements repeatedly correlated with preference, in dramatic numbers, over many years of well-controlled listening tests. It blows a huge hole in the subjectivists' last line of defense (trust your ears, measurements don't matter) and probably many more sacred cows than we currently imagine. I'm not sure what else you could do, though. It's either denial or change your minds. And we know that's not going to happen.

Tim


Actually, I go a lot deeper. Whether measurement gives us the complete picture or not is immaterial. Either side can read a review in Stereophile and apply their own filters to avoid what they consider 'filler'.

I question why forcing measurement as the primary determinant in the purchase of audio products on end users who are uninterested in measurement is a 'good thing'. We already scare people off because the subject is 'too technical', so it strikes me that including measurements that no lay person can understand without doing a lot of background research would only serve to drive yet more people away from the hobby.
 

amirm

Banned
Apr 2, 2010
Seattle, WA
Tim,

It seems you misunderstand the main point. The measurements were correlated with some preferences - preferences shaped by the listening conditions, the recordings, the relevance of the subjects chosen for analysis, even by the organization of the scorecards people had to fill in.
We have parsed each one of these points, and the only argument left is disbelief. It is not as if any counter-evidence has been put forward to justify your collective hypothesis that these results are invalid. Here is one of them again:



These are different groups of people taking the test, including the very groups at issue here: trained listeners and reviewers. Tell me why their ranking of loudspeakers matched that of everyone else. Please give me data. Don't just keep saying it can't be. That is not an argument.

On music, this is the selection of content they use: http://www.whatsbestforum.com/showt...-Music-Tracks-for-Speaker-and-Room-EQ-Testing. When I took the test, these are two tracks that I remember:

TC - Tracy Chapman, “Fast Car”
JW - James Taylor, “That’s Why I’m Here”

Was I incorrect in judging loudspeakers on such content and why?

This selection stemmed from work done at NRC in 1992 to find out which type of content is most revealing:



Female pop is the most revealing after pink noise. And guess what? That is what is in the test tracks they use and that I listened to. Where is your list of test tracks, and the research that demonstrates them to be revealing?

On the room, this is again the venue that was used for testing:



There is an entire paper on its design. Here is one of its attributes:

"The key features of the room are its extremely low
background noise (NR-10), and a computer-controlled
speaker shuffler that places each loudspeaker under test
into the exact same position [17]. In this way, any
audible differences among the loudspeaker tests are
directly attributable to the loudspeakers and not from
positional differences."


Tell me what is wrong with this room, and in what room you have evaluated loudspeakers side by side.

On the organization of the scorecard, I have shown the picture. Loudspeakers are selected randomly for each row/piece of content. You give a score from 1 to 10. That is it. Tell me what you would do differently and why.
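That per-row randomization can be sketched in a few lines of Python. This is only a minimal illustration of the protocol as described above - not Harman's actual software - and the speaker names and tracks are placeholders:

```python
import random

def make_scorecard(speakers, tracks, seed=None):
    """Build a blind-test scorecard: for each piece of content (one row),
    the speakers are presented in a fresh random order and labelled only
    A, B, C...  The key mapping labels back to real models is kept apart
    from the card the listener sees, so the 1-10 scores are given blind."""
    rng = random.Random(seed)
    card, key = [], []
    for track in tracks:
        order = speakers[:]          # copy so the caller's list is untouched
        rng.shuffle(order)           # fresh random presentation order per row
        labels = [chr(ord("A") + i) for i in range(len(order))]
        card.append({"track": track, "scores": {lab: None for lab in labels}})
        key.append(dict(zip(labels, order)))
    return card, key

# Hypothetical example: three unnamed speakers, two of the test tracks
card, key = make_scorecard(["Speaker 1", "Speaker 2", "Speaker 3"],
                           ["Fast Car", "That's Why I'm Here"], seed=42)
```

The point of keeping `key` separate is that the listener's card carries only anonymous labels; identities are revealed only after all scores are in.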

The question is not the science involved in the correlation or the measurements - the Harman experts know their business! The big problem is that the areas of preference of many audiophiles are not given enough weight in this brilliant work. That is why those areas are treated by some objectivists as poetry, and in other very unkind, even insulting, terms. IMHO they wisely want to devalue these aspects because they cannot include them in their work.
What area is that, exactly? Which one of you has ever done a side-by-side comparison of loudspeakers without being able to look at them, so as to know your outcome would be different? Why is it that reviewers, when tested, did not come up with a different outcome, if they hear and value different things?

Another issue is that we are mainly shown selected summaries and marketing pictures pictures from the tip of the iceberg. F. Toole is much more than Harman, and there are many other scholars and knowledge people in the audio business - B&W had been showing pictures of the giant anechoic chamber they used in their advertisements of the 70's.
All the papers are available for the world to read. I have hidden nothing. Ask me questions and I will quote anything you want.

And sure, I would love nothing more than to see some evidence from you all other than your opinion that all of this must be wrong. No, saying that such-and-such loudspeaker company has an anechoic chamber is not it. Show listening test results comparing loudspeakers and demonstrating the efficacy of one design over another. That is what we are discussing. Do you have that, and can you share it?
 


Phelonious Ponk

New Member
Jun 30, 2010
Tim,

It seems you misunderstand the main point. The measurements were correlated with some preferences - preferences shaped by the listening conditions, the recordings, the relevance of the subjects chosen for analysis, even by the organization of the scorecards people had to fill in. The question is not the science involved in the correlation or the measurements - the Harman experts know their business! The big problem is that the areas of preference of many audiophiles are not given enough weight in this brilliant work. That is why those areas are treated by some objectivists as poetry, and in other very unkind, even insulting, terms. IMHO they wisely want to devalue these aspects because they cannot include them in their work.

Another issue is that we are mainly shown selected summaries and marketing pictures from the tip of the iceberg. F. Toole is much more than Harman, and there are many other scholars and knowledgeable people in the audio business - B&W were showing pictures of the giant anechoic chamber they used in their advertisements back in the '70s.

BTW, did you guess who keeps the copyright about my recent quotes on audiophile cables and active speakers?

I'm afraid you're the one who is missing the point. These studies were done with a broad cross-section of trained and untrained listeners. And they were done repeatedly, for years, with the same results. Unless you're objecting that the study wasn't done exclusively with audiophiles, your objections are statistically moot.

Tim
 


amirm

Banned
Apr 2, 2010
Seattle, WA
The reality I am referring to is nothing more than the LISTENING reality, and again, no panel I have ever heard or owned sounds like that. I also use (and post) my own measurements, but at the end of the day, reality to me is NOT graphs and measurements, and that is the basis of our differences; measurements help me figure out balance, they serve as the BASIS, and nothing more than that.
That is not what I showed you. I showed measurements, taken in a very specific way, correlating with listening test results. It was not pure measurements; it was measurements that agreed with listening test results. Your reaction? "so STUFF THE MEASUREMENTS, for being unrepresentative of reality." And you say you didn't denigrate? What is this, then? This is incredibly hard and expensive research that has been shared with all of us. Again, I am OK with expressions of disbelief. I know it is hard to believe the ML did this poorly. Until such time as you sit your butt in the chair, compare its sound immediately to that of another loudspeaker, and wonder what broken loudspeaker that was. The curtains open, you see the ML, and reality sets in. You listen to them sighted afterward and all of those flaws are now apparent to you. Your eyes have been opened; you can't undo that. At least that is what happened to me, and it is how I hear them now.

This is a reality that, per my last post, none of us had been exposed to: hearing loudspeakers one after the other, switched automatically, with no ability to know their identity in advance. This is not a scenario you have experienced, so you can't say it must by definition be wrong. You have listened to these speakers sighted, without quick switching, and on critical material. You may also have non-critical ears and hence be more forgiving of loudspeaker flaws. Most of this was true of me before taking the test. So between us, I have been in your shoes, but you have not been in mine - nor in those of the literally hundreds of people, from every group involved in audio, who have taken these tests.

The fact you rated the ML a '4' is simply telling of your preferences, and that is reality for you - nothing wrong with that.
No, my ranking of loudspeakers matched countless others', from reviewers to dealers to marketing people and engineers in audio. Here is one of the sample points again:



When I took the test, I was in two different groups. The first was high-end dealers. Just as the research indicates, when the curtains went up, the vast majority of us had the same ranking, by show of hands.

The second time was among a very select group of people who were there to be certified by Harman to calibrate JBL Synthesis systems. These were hand-selected people who build world-class listening rooms (think Keith Yates). It also included a few Harman employees who were attending to learn the science. We all agreed, other than one new Harman employee who was in a support function in Asia. I don't recall whether he picked the ML or the B&W as the best. The rest of us picked the same one as the best (the JBL).

So no, it was not just me and my preferences. Whether in the ad-hoc sampling I just shared or in formal published research, we are not born differently. We think we are. Even the researchers thought that way. But the outcome could not be more fortunate: the vast majority of us have the same sense of what is good sound - not some random jungle of fidelity opinion that would render your assessments as fleeting and useless as those of reviewers.

This is to be celebrated. Trained or untrained, reviewers or not, engineers and marketing people, dealers and buyers all have similar preferences when it comes to what is good sound, in a sea of variations in loudspeaker tonality and performance. This allows us to better engineer loudspeakers that we are confident will perform well for the vast majority of people.

Are you the exception? You may be, like that Harman employee. If so, why should we care about your preferences when they have little chance of agreeing with everyone else's? Fortunately, I think the odds are against you being the exception. You just think you are because you have not been tested in a controlled manner.


I have spent 13 years now with my current MLs (and 6 more before them with other MLs, plus 20+ years with Magnepans in my second, lesser system), and many an amp have come and gone - and apart from the 400RS no other has been able to drive the MLs correctly, though the same amps have had no issues driving much lighter loads. I could go into details about those amps' sound driving difficult loads... I have also posted in the past that all these by-gone amps have also shut down when driving even more harsher loads like the ML Summit and Summit X (impedance drop to ~0.7 ohms), at moderately high levels. So do I think Proceed amps are unworthy of listening comparisons at Harman or elsewhere, if we are to call this "research", where all parameters are supposed to be optimized? Absolutely, and feel free to disagree; but I remain unimpressed with Harman and anything written about their work. I take their "research" as yet another data point, and I throw it out the window. I expect you to do the same of what I just said.
None of this matters one bit in a discussion of audio science unless you can present some formal data showing that when you swap amplifiers, loudspeaker ratings - with their massive differences in tonality - change. You have a theory that you have to test yourself, and then bring us the data. Such data needs to be free of your audio beliefs and of sighted evaluation. This is easy enough to test. Replace one of your loudspeakers with another brand and test them, with your current amplifier, behind a curtain. Have a few people take that test. Then replace the amp with another that you think puts the ML in a bad light. Run the test again. Then report to us how loudspeaker preferences swapped in favor of the other loudspeaker.
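The experiment proposed here boils down to comparing blind preference rankings under two amplifier conditions. A minimal sketch of how such results might be tabulated - every name and every score below is fabricated purely for illustration, not real test data:

```python
from statistics import mean

# Hypothetical blind scores (1-10) from four listeners: two speakers,
# each judged under two amplifiers. All values are invented placeholders.
scores = {
    ("Speaker X", "Amp 1"): [4, 5, 4, 5],
    ("Speaker Y", "Amp 1"): [7, 8, 7, 7],
    ("Speaker X", "Amp 2"): [4, 4, 5, 4],
    ("Speaker Y", "Amp 2"): [8, 7, 7, 8],
}

def ranking(amp):
    """Rank speakers by mean preference score under one amplifier."""
    spks = sorted({s for s, a in scores if a == amp})
    return sorted(spks, key=lambda s: mean(scores[(s, amp)]), reverse=True)

# The claim under test: does swapping amplifiers reorder the speakers?
# With this invented data, the ranking is the same under both amps.
print(ranking("Amp 1"))
print(ranking("Amp 2"))
```

If the amplifier mattered the way the theory claims, the two printed rankings would differ; identical rankings across amp conditions would argue against it.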

Until then you are again sharing disbelief with us, not anything concrete that we can use to counter formal, published, peer-reviewed research spanning 30 years, whose authors are luminaries in the industry. And it is a theory that no one in the scientific community or the industry has raised as an issue with the test - or, I assure you, it would have been tested and dismissed as an influence on loudspeaker preference.
 

BlueFox

Member Sponsor
Nov 8, 2013
However, reviewers claim they measure with their ears - and so do I and everyone else, in the end - and I have a different set of measuring gear than they do, but I have seen no evidence of the superiority of their measurements over mine; in fact some of their statements, for example about fuses, reveal serious problems with their measurement systems IMO.

Interesting sentence.
 

Phelonious Ponk

New Member
Jun 30, 2010
Amir said -

Replace one of your loudspeakers with another brand and test them with your current amplifier behind a curtain.

I'd love to see Harman do this in their listening room. Start with the best-testing speakers. Swap the Levinson amp for a Crown. Then maybe try a Krell and other high-current SS amps with low output impedance. See what listeners hear. I suspect it would start a firestorm here that might burn even harder than this one.

Tim
 

microstrip

VIP/Donor
May 30, 2010
Portugal
We have parsed each one of these points, and the only argument left is disbelief. It is not as if any counter-evidence has been put forward to justify your collective hypothesis that these results are invalid. Here is one of them again:



These are different groups of people taking the test, including the very groups at issue here: trained listeners and reviewers. Tell me why their ranking of loudspeakers matched that of everyone else. Please give me data. Don't just keep saying it can't be. That is not an argument.

On music, this is the selection of content they use: http://www.whatsbestforum.com/showt...-Music-Tracks-for-Speaker-and-Room-EQ-Testing. When I took the test, these are two tracks that I remember:

TC - Tracy Chapman, “Fast Car”
JW - James Taylor, “That’s Why I’m Here”

Was I incorrect in judging loudspeakers on such content and why?

This selection stemmed from work done at NRC in 1992 to find out which type of content is most revealing:

(...)

Female pop is the most revealing after pink noise. And guess what? That is what is in the test tracks they use and that I listened to. Where is your list of test tracks, and the research that demonstrates them to be revealing?

On the room, this is again the venue that was used for testing:

There is an entire paper on its design. Here is one of its attributes:

"The key features of the room are its extremely low
background noise (NR-10), and a computer-controlled
speaker shuffler that places each loudspeaker under test
into the exact same position [17]. In this way, any
audible differences among the loudspeaker tests are
directly attributable to the loudspeakers and not from
positional differences."


Tell me what is wrong with this room, and in what room you have evaluated loudspeakers side by side.

On the organization of the scorecard, I have shown the picture. Loudspeakers are selected randomly for each row/piece of content. You give a score from 1 to 10. That is it. Tell me what you would do differently and why.


What area is that, exactly? Which one of you has ever done a side-by-side comparison of loudspeakers without being able to look at them, so as to know your outcome would be different? Why is it that reviewers, when tested, did not come up with a different outcome, if they hear and value different things?


All the papers are available for the world to read. I have hidden nothing. Ask me questions and I will quote anything you want.

And sure, I would love nothing more than to see some evidence from you all other than your opinion that all of this must be wrong. No, saying that such-and-such loudspeaker company has an anechoic chamber is not it. Show listening test results comparing loudspeakers and demonstrating the efficacy of one design over another. That is what we are discussing. Do you have that, and can you share it?

Again, big pictures and more repetition about accessory, secondary aspects, ignoring the main one - how preference is established, and the broadening of preference introduced by the limitations of stereo.

You fail to understand that people are not fighting you; we are just exposing alternatives. I do not say all of this must be wrong - those are your words, not mine. Others, much more knowledgeable than I am, have also explained why this does not fit the high end. After weighing it all I must say I agree with them - in large part after reading F. Toole's opinions on why he prefers multichannel, how and why stereo fools me, what several WBF members have said about stereo limitations, and how good high-end stereo can be after we tweak it.

I do not endorse people or manufacturers, but I respect and have learned from the high-end manufacturers and distributors who create great systems, particularly those I often read at WBF, who prudently avoid these debates... ;)

Keep on listening to Tracy Chapman and James Taylor - we agree it is enjoyable!

BTW, for alternative schemes of evaluation our readers can look here: http://www.linkwitzlab.com/accurate%20stereo%20performance.htm

German broadcast authorities and the BBC had different scorecards for speaker evaluation - I read an article about them long ago. Wireless World magazine ran a series on how to select speakers. The ITU recommendations on listening material suggest a more comprehensive range than your suggestions.

Interested WBF readers must be prepared to read elsewhere and think for themselves - and then answer the question: is my system assembled according to the rules and principles I am defending or preferring? If not, WHY?
 

esldude

New Member
@esldude: Just click my system link on the bottom to see the current graphs (post #1); 1/3 octave, and I intend to get higher "resolution" at some point

So from what I am seeing in that one post, you used an RS meter and some test tones. Those tones were either centered at 1/3 octave, or perhaps 1/3-octave bands of pink noise, or maybe warble tones or such - I don't know, just wondering. I have done those with pink noise and warble tones. You end up with something more smoothed than even 1/3-octave smoothing. With warbles, for instance, you can sometimes see the meter needle staying steady or jumping around a bit, yet you have to pick a number, an average of some sort, to write down. The results are useful and far better than nothing, but in my experience you get a very smoothed result.
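The "everything looks smoother at 1/3 octave" point can be shown directly. Below is a minimal fractional-octave smoother of my own - a simple unweighted average in log frequency, not REW's or Tact's actual algorithm - applied to a synthetic response where a narrow +6 dB spike nearly vanishes once averaged over a 1/3-octave window:

```python
import numpy as np

def fractional_octave_smooth(freqs, mags_db, fraction=3):
    """Average the magnitude response within a 1/fraction-octave window
    centred on each frequency point. Real analyzers use weighted windows;
    this unweighted version is enough to show the smoothing effect."""
    freqs = np.asarray(freqs, dtype=float)
    mags_db = np.asarray(mags_db, dtype=float)
    half = 2.0 ** (1.0 / (2 * fraction))   # half-window as a frequency ratio
    smoothed = np.empty_like(mags_db)
    for i, f in enumerate(freqs):
        band = (freqs >= f / half) & (freqs <= f * half)
        smoothed[i] = mags_db[band].mean()
    return smoothed

# A ruler-flat response with one narrow +6 dB peak, 20 Hz - 20 kHz
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
mags = np.zeros_like(freqs)
mags[100] += 6.0

sm = fractional_octave_smooth(freqs, mags)
# After 1/3-octave smoothing the 6 dB spike shrinks to under 1 dB,
# while the flat regions stay flat.
```

A warble tone or 1/3-octave band of pink noise averages over raggedness in much the same way, which is why a sweep or impulse measurement of the same speaker looks so much more jagged.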

I have also used Tact correction for some years; you get unsmoothed measurements with those. I have also used impulse and frequency sweeps. All of those are much more detailed, and nearly all speakers look more ragged that way. REW can also give results with various degrees of smoothing, along with waterfall plots. All the ESLs seem to have rather hashy, irregular decay on waterfall plots in the upper frequencies. They don't sound that way, but the result is there. Maybe it is one of those cases where ears don't hear like microphones and software.

I would like a little more detail on Harman's measurement of panels. Their results don't look unbelievable to me. However, having seen pics of their rig with the 9 microphones covering an arc, the mics may be rather too close to capture what happens at listening distance in a room with panels. Maybe they do something to compensate for that, or maybe not.

None of which makes me doubt their results very much. I do believe the overall trend they show for the ML is correct and is why it scores lower. Even if that is not the reason, they have lots of data in which it scores lower. Having made a fool of myself going from sighted to blind listening, it isn't hard for me to believe. Many people, even then, will simply be unable to change their gut feeling when confronted with that. It is not easy, and I understand it. It is even harder if you have only listened and compared sighted. I haven't had the chance to do blind listening of speakers - not an easily done thing - but I can believe that what they say happens.

You can see they are getting excellent reviews for their Revel line. They are doing something right. The measurements by outlets like Stereophile seem to confirm the speakers are made to hit the target you see developed in this research, whether it is the Ultima or the Performa series. There was a mid-level JBL floorstander a few years back reviewed favorably by Stereophile, and its measurements also appear to be based on the same design goal. It all fits, and it is actually good news. I don't quite get the level of hostility shown toward it - a strong sense of not wanting to believe them.
 

esldude

New Member
Amir said -



I'd love to see Harman do this in their listening room. Start with the best-testing speakers. Swap the Levinson amp with a Crown. Than maybe try a Krell and other high current, low impedance output SS amps. See what listeners hear. I suspect it would start a firestorm here that might even burn harder than this one.

Tim

Nah, it would probably just get ignored. I have been amazed that the old tricks of Bob Carver never generated nearly the heat I expected. You can make your little few-hundred-dollar amp sound like a Levinson. Later you can make it sound like a C-J Premier tube amp - indistinguishable from them. So why have we had 25 years of all these other amp designs, then?
 

still-one

VIP/Donor
Aug 6, 2012
Milford, Michigan
"The key features of the room are its extremely low
background noise (NR-10), and a computer-controlled
speaker shuffler that places each loudspeaker under test
into the exact same position [17]. In this way, any
audible differences among the loudspeaker tests are
directly attributable to the loudspeakers and not from
positional differences."


Tell me what is wrong with this room...

I guess nothing is wrong with the test, except that you ignore any set-up instructions from the various manufacturers and any potential amplifier/speaker interaction problems. Never have I read that I can just plop them down anywhere and they will perform optimally.

This statement: "In this way, any audible differences among the loudspeaker tests are directly attributable to the loudspeakers and not from positional differences." is absurd.
 
