It’s All a Preference

DBT doesn't do anything for me.

Would you have ANY curiosity to sit through a Harman demonstration?

Man, I'd be in like a rat up a drainpipe.

Amir, any idea how much a day's sit-through would cost if it were organised as a field trip? Imagine all those audiophile meetings comparing this speaker with that... heck, what a meeting THAT would be. Get on a bus, up you go, make a weekend out of it (visit any high-end stores in town), that sort of thing.

What I'd love to see is a team from Stereophile (substitute any other mag), all the reviewers, going along to a demo like this... make a sighted shortlist and rankings thereof... then see how well it stacks up blind!

Nope, I can't imagine it happening either.

A question to those who see no worth in DBTs: what (if anything) would be enough to 'change your mind'? Seriously asking, not stirring the pot.

Personal experience? Is that enough? Say you could not hear the difference you thought you could: would that trump your earlier thoughts, or would you trust your earlier conclusions so much that you would find fault with the test?

Just trying to get a handle on what is required.
 
And I mean neutral in the sense that no electronics sound exactly like live music and no circuit has zero colorations. We just aren't there yet, people.

How could it? In my mind we probably already have what it takes for the electronics to be neutral. The issue is that you can't replicate the sound field generated at a live event with any accuracy; it is always a moving target that changes from seat to seat and, of course, from venue to venue.

The acoustics of the real event are missing. Without them you have no chance of fooling your brain. It doesn't sound right because it's just the shell of the live performance, with the wrong spatial cues from your room.

Rob:)
 
Amir, the Harman testing procedure doesn't allow for valid evaluation of dipole speakers.

Hello Miles

Not so. Right from the man himself

http://www.whatsbestforum.com/showthread.php?1327-Testing-of-Dipoles


What I'd love to see is a team from Stereophile

Hello Terry

If I remember correctly, John Atkinson from Stereophile has been there; I'm not sure about him participating in the testing. There is an article about his experience somewhere on the Stereophile site.

Rob:)
 
Hello, jap. He asked. I responded. That's where my thoughts lie. If one wants the truth, lay it out on the table. :)
 
The information about Harman's work is really interesting for establishing some parameters for speaker development in an efficient and rational way, but it does not in any way indicate that these parameters are unique and necessary for quality.
Quoting myself in bold


How is that?

Amir,

You present data on the FR of just four speakers. There are many known factors that can define the sound quality of a speaker: distortion, dynamic compression, resonances of the box, delayed resonance. Some manufacturers consider these parameters more important than FR. What can prove to a non-expert in speaker design that just the FR and dispersion are the two critical factors for my audiophile happiness?

Did you hear about the Frog and the Scientist? ;) Unless we have all the details, we can never be sure of the conclusions.

As you participated in the tests and have access to privileged Harman information, perhaps things are clearer to you. For example, I still have doubts about the interpretation of your first bar graph: given that N people participated in the tests, does it imply that X/N preferred type x? This seems like scientific market research to me, not in-depth audio science research.

BTW, I have the same type of doubts on most scientific data presented by many other manufacturers - Harman are more exposed to my criticism because they use it openly in their marketing.

Things are not easy also because there is a large difference between science and technology, and most of the time marketing mixes them.
 
Bruce-Really, don't you think what you said is true? If we don't like the sound of a particular mastering engineer, chances are good that after several tries with the same result, we will not buy recordings we know were engineered by someone whose sound we don't like.

While I am with mep on the basics of the discussion here, I believe it is not that safe to tag a specific studio engineer based on his work on a particular album. There are so many variables to consider, like the studio infrastructure, ambience, console (some studios had tubed and SS consoles depending on availability) and even producer/artist specifications on how the master will end up....
 
You present data on the FR of just four speakers. There are many known factors that can define the sound quality of a speaker: distortion, dynamic compression, resonances of the box, delayed resonance. Some manufacturers consider these parameters more important than FR. What can prove to a non-expert in speaker design that just the FR and dispersion are the two critical factors for my audiophile happiness?

Those parameters have the most weight and are the best predictors of how a speaker will be ranked. They take competitors' speakers and test them blind against their own speakers through the development stage. When theirs "WIN", they go to market.

One of the parameters they use to design the speakers is the predicted in-room response curve. This is derived from the 70-point spinorama measurement system they use, and one of their targets is a flat response in room. Take a look at this measurement: it is an averaged in-room response from a reviewer's room, and at least half the rise in the measured bass response is from the measurement technique. Above 200 Hz, that is a very good in-room measurement for any speaker. We may take issue with the exact curve, but the point is that this is a real-life example that their design methods and measuring techniques achieve their stated targets. They know what they are doing.
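For the curious, a predicted in-room response of this kind is essentially a weighted combination of spinorama curves. Here is a minimal Python sketch; the curve names and the 0.12/0.44/0.44 weighting follow commonly published spinorama practice and should be treated as illustrative assumptions, not Harman's exact internal recipe.

```python
# Sketch of a "predicted in-room response" as a weighted power average of
# spinorama-derived curves. Weights are illustrative, not Harman's exact ones.
import numpy as np

def predicted_in_room(listening_window, early_reflections, sound_power,
                      weights=(0.12, 0.44, 0.44)):
    """Combine three response curves (all in dB) into one predicted curve."""
    curves = np.array([listening_window, early_reflections, sound_power],
                      dtype=float)
    w = np.array(weights, dtype=float).reshape(-1, 1)
    # Average in the power domain, then convert back to dB.
    return 10 * np.log10(np.sum(w * 10 ** (curves / 10), axis=0))

# A speaker that is flat on every curve predicts flat in room:
flat = np.zeros(6)
print(np.allclose(predicted_in_room(flat, flat, flat), 0.0))  # True
```

Because the weights are normalized to 1, a uniformly flat set of input curves stays flat; any measured tilt in the output comes from the curves themselves.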

Rob:)
 

Attachments

  • 1400 Measurement.jpg (40.6 KB)
Hello Miles

Not so. Right from the man himself

http://www.whatsbestforum.com/showthread.php?1327-Testing-of-Dipoles

That's not what the pictures told me when Harman presented their testing methodology at RMAF (?). The speakers are far away from any back wall. In fact, so far that light takes a year to reach you.
 
Those parameters have the most weight and are the best predictors of how a speaker will be ranked. They take competitors' speakers and test them blind against their own speakers through the development stage. When theirs "WIN", they go to market.
Rob:)

If you just accept it as a dogma, no problem. Dogmas should not be disputed or questioned. :)
 
While I am with mep on the basics of the discussion here, I believe it is not that safe to tag a specific studio engineer based on his work on a particular album. There are so many variables to consider, like the studio infrastructure, ambience, console (some studios had tubed and SS consoles depending on availability) and even producer/artist specifications on how the master will end up....


Fernando-I didn’t mean to imply that you decide whether you like a mastering engineer based on the results of one recording. Most of the greats have a long and distinguished list of recording credits, and over time you come to either like their methods and results or you don’t. RVG is a case in point: many people love his recordings and some people don’t. But one thing is for sure, you pretty much know what you are going to get with an RVG recording. If I buy a recording and I see “mastered by Doug Sax” or “mastered by Bob Ludwig,” I feel pretty confident that I’m going to hear a decent recording.
 
As far as Bob Ludwig goes, that may have been true in the past, but lately I have been burned by screaming-loud, clipped, zero-dynamics albums that he has mastered.

Coldplay's latest comes to mind, as does a stack of others he did last year. The biggest stinkers are the Queen box sets. Loud, loud, loud, with midrange blare you can stand for about 15 minutes.

Now truth be told, he may have received a master that was already beyond help, mixed at a ridiculously loud level, and you can't polish a turd.

I personally would not put my name on something like that; I would walk away. He is booked a year in advance and can easily afford to turn down jobs.

Fernando-I didn’t mean to imply that you decide whether you like a mastering engineer based on the results of one recording. Most of the greats have a long and distinguished list of recording credits, and over time you come to either like their methods and results or you don’t. RVG is a case in point: many people love his recordings and some people don’t. But one thing is for sure, you pretty much know what you are going to get with an RVG recording. If I buy a recording and I see “mastered by Doug Sax” or “mastered by Bob Ludwig,” I feel pretty confident that I’m going to hear a decent recording.
 
Now truth be told, he may have received a master that was already beyond help, mixed at a ridiculously loud level, and you can't polish a turd.

I personally would not put my name on something like that; I would walk away. He is booked a year in advance and can easily afford to turn down jobs.

Remember the loudness wars. He may have been directed to master the recording for loudness. Good thing my Queen LPs don’t sound like what you described.
 
Hey Mep:

But this is counterintuitive to what you said. You say that when you buy an RVG- or Ludwig-mastered project you pretty much know what you are getting.

Apparently not.

Bob talks a good game about being a soldier in the loudness wars, but he has done nothing to stop it in my opinion.

Why pay Ludwig his large fee to put out a turd with zero dynamics when you can get any hack out of engineering school to do the same thing?

As far as being "directed" to crush a master, he has enough influence in the business that he can walk away from such a project. He chooses not to.

Remember the loudness wars. He may have been directed to master the recording for loudness. Good thing my Queen LPs don’t sound like what you described.
 
And there are engineers (such as Vic Anesini) who consistently manage to avoid over-compression. BTW, I am very encouraged that the new Jack White CD has very good dynamics (even vinylistas say it sounds better than the LP); regardless of your opinion of the music, it is critically acclaimed and selling well, so hopefully might have some impact on the loudness wars. Another recent release of note, the HDTracks of Dream Theater's newest is a different mastering than the CD, with notably more dynamic range.
 
If I remember correctly, John Atkinson from Stereophile has been there; I'm not sure about him participating in the testing. There is an article about his experience somewhere on the Stereophile site.
Many years back, a group of us did visit Harman and had a relatively brief time in their listening/test room. Among the exercises, we participated in a blind test of 3-4 small speakers and there was a near unanimity in preference for a particular JBL speaker over the other name-brand choices. I cannot recall the other specifics but most of us were surprised.

If I may, I would like to put in a good word for DBT. It is an artificial and somewhat stressful experience but I always learn an awful lot from it as the necessity to make a choice forces me to identify why I am making that choice. Those insights (sic) can be transferred, usefully, to non-blind listening.
 
Amir,

You present data on the FR of just four speakers.
I post that because that data is public and easily referenced. The actual research has a far larger base. From Dr. Olive's AES paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model

"A new model is presented that accurately predicts listener preference ratings of loudspeakers based on anechoic measurements. The model was tested using 70 different loudspeakers evaluated in 19 different listening tests. Its performance was compared to 2 models based on in-room measurements with 1/3-octave and 1/20-octave resolution, and 2 models based on sound power measurements, including the Consumers Union (CU) model, tested in Part One. The correlations between predicted and measured preference ratings were: 1.0 (our model), 0.91 (inroom, 1/20th-octave), 0.87 (sound power model), 0.75 (in-room, 1/3-octave), and ?0.22 (CU model). Models based on sound power are less accurate because they ignore the qualities of the perceptually important direct and early reflected sounds. The premise of the CU model is that the sound power response of the loudspeaker should be flat, which we show is negatively correlated with preference rating. It is also based on 1/3-octave measurements that are shown to produce less accurate predictions of sound quality."

There are many known factors that can define sound quality of a speaker - distortion, dynamic compression, resonances of box, delayed resonance.
Those are certainly factors, and I mentioned some of them earlier (e.g. dynamic range, bass extension). Can we agree that the 70 speakers tested had widely varying metrics in this regard? Yet, with some small exceptions, those metrics could not trump deficiencies in the frequency response of the speaker.

To be sure, Harman as a company pays huge attention to these other factors too. It is just that they start with the right fundamentals: get the total response that reaches a listener (the sum of the direct and some of the reflected energy) to be smooth (not necessarily flat!). Get that right, and then focus on the other issues you mention.

Some manufacturers consider that these parameters are more important than FR.
They do, but then we are at the mercy of a "gray-haired" designer thinking he is right. Why not put forward some data showing that if you took ten of us together, we would prefer that design priority over Harman's findings? Why not publish an AES or ASA paper that says Harman is wrong?

What can prove to a non-expert in speaker design that just the FR and dispersion are the two critical factors for my audiophile happiness?
If large-scale listening test data, combined with the measurements that correlate with it, doesn't do that, then I think we are saying that we would rather flip a coin to decide who is right. Who would buy an amp that has a 3 dB dip from 2 kHz to 3 kHz versus one that doesn't? Who couldn't hear that dip if we took the flat-response speaker and subjected it to it?

To be sure, there is absolutely room for additional factors that impact speaker performance. There is a reason Harman builds a range of products using completely different technologies, but they all share a principle of goodness. Of note, this change did not come easily to Harman: with different divisions all thinking they had the answer (think JBL and Revel), getting everyone to agree was hard. But they eventually came around.

It is also true that Dr. Toole takes this idea to a level of importance that is perhaps a bit too far. I think that is necessary to get the point across, because there is so much disbelief out there. I have had arguments over the validity of their double-blind tests with some of the biggest champions of double-blind testing, such as Ethan and Arny. Both seemed to think you don't need double-blind tests of speakers because the differences are too large! Once we are there, though, and we let go of our assumptions, it is OK to deviate somewhat from this thinking. Surely the person at Harman sweating over the new tweeter thinks beyond just accomplishing this goal with respect to frequency response.

Did you hear about the Frog and the Scientist? ;) Unless we have all the details we can never be sure of the conclusions.
The question becomes what I put to Myles: do we then give up because we are not sure? At work, we get tons of speakers that come through our shop for evaluation. Is it by accident that they can't outperform the Revels when we put them side by side? My team is always ecstatic about these new brands, but as soon as we A/B, it becomes clear that if you use proper research, you get a better product. Again, there are ways to beat Revel speakers: if cost is no object and neither is size, you can do things that you can't do with a diminutive Revel speaker. But imagine how much better that other speaker would be if its makers also followed the other factors that matter.

As you participated in the tests and have access to privileged Harman information, perhaps things are clearer to you. For example, I still have doubts about the interpretation of your first bar graph: given that N people participated in the tests, does it imply that X/N preferred type x? This seems like scientific market research to me, not in-depth audio science research.
If you are asking if 100% of the people agreed with one speaker being the best, no. As the data above shows, there is high correlation but not absolute conclusions. Whether that is due to people being poor judges of quality at times, or some other factors in play, it is hard to say. What is not hard to say is that those factors do not in any way trump the research results presented. If you deviate from them, you better have darn good reason and research to back your counter approach. A glossy brochure and impressive looking speakers don't do it.
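One way to see why an X-of-N preference count can still be meaningful even without 100% agreement is a simple sign test. This is my own illustration, not Harman's published analysis; the function name and the 24-of-30 figures are made up.

```python
# Two-sided binomial sign test: could "X of N listeners preferred
# speaker A" plausibly be chance, if each listener were a fair coin?
from math import comb

def binomial_p_value(x, n):
    """Two-sided p-value for x-of-n preferences under the chance hypothesis."""
    tail = sum(comb(n, k) for k in range(max(x, n - x), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 24 of 30 listeners agreeing is very unlikely to be chance:
print(binomial_p_value(24, 30) < 0.01)  # True
```

The same arithmetic can be applied to any X/N bar in a preference chart: unanimity is not required for a result to be statistically solid.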

BTW, I have the same type of doubts on most scientific data presented by many other manufacturers - Harman are more exposed to my criticism because they use it openly in their marketing.
There is no marketing here; if anything, Harman's research can badly hurt their business. Dr. Toole in his classes and books talks about $500 speakers that follow this scheme, yet the company makes $25K speakers in the Revel line and up to $65K in JBL Synthesis. What separates these are some of the factors you mention: the JBL has dynamics that will literally bring your ceiling down if you are not careful :), and the lack of distortion in the Salon 2 is remarkable. Sure, there is some implicit marketing here; I won't deny that. But I think one would be ignoring very good data to hang one's hat on that notion. :)

Things are not easy also because there is a large difference between science and technology, and most of the time marketing mixes them.
And how is that mix with the high-end companies that Myles listed? I would say the percentage of marketing is off the charts there. They use technical buzzwords, to be sure, but that is where it ends. Ultimately they show no objective data that proves the efficacy of the design. Surely if the claim is that we achieve better results with their speakers, they would show some comparative listening tests that prove that point. But they show none.

Let me finish by saying that I am not trying to sell you on Harman as a company or the products they manufacture, just the notion that we are not lost in the woods, forced to interpret every manufacturer's claims independently. We do have a measure of goodness here; let's use it to evaluate products and see if, in our minds, they correlate. I have done this independently of Harman's research. I have gone with speakers that had the right buzzwords, that sounded convincing as a better approach to speaker design. Then I had my nose bashed in when I could not convince customers that they were better. In our case, we had them side by side in our main theater behind the curtain, and with the flip of a switch we could go from JBL to the other brand. And the other brand lost, despite being more expensive.
 
Hey Mep:

But this is counterintuitive to what you said. You say that when you buy an RVG- or Ludwig-mastered project you pretty much know what you are getting.

Apparently not.

Bob talks a good game about being a soldier in the loudness wars, but he has done nothing to stop it in my opinion.

Why pay Ludwig his large fee to put out a turd with zero dynamics when you can get any hack out of engineering school to do the same thing?

As far as being "directed" to crush a master, he has enough influence in the business that he can walk away from such a project. He chooses not to.

Well, if Ludwig is turning out trash now, I guess I should have said you used to know what you were getting. I wasn't aware of the remasterings of his that you complained about since I don't own any of them.
 
So, it's all preference. Fidelity to the recording isn't important because the recording isn't the point, and even if it were, components can measure the same and sound different, so measurements tell us nothing. AB/X tests don't tell us anything because... we have so many reasons to deny them I can't even keep track... there's still a difference between those components that measure the same, even if all those people couldn't hear it. And none of that matters anyway, because it's all preference. Bose is better than Wilson if you like it better. The simplicity is liberating. What do we talk about now?

Tim
 
Nirvana

Yes, he has turned out some real stinkers lately. Another one that is truly horrid is the new Nirvana remaster, both the CD and the 96/24 download.

Even our own Bruce Brown has said on other forums it just sucks.

That is not to say all the things he has put out are bad. A bunch from last year sound great, with plenty of DR. The fact that it varies so greatly means he is not establishing control of the process.

I believe I have read that our own Barry Diament decided to walk away from major label mastering due to the fact he was being asked to turn out dog ****. Certainly I don't want to put words in his mouth but I am almost sure he said this was the impetus behind him starting Soundkeeper.

The one mastering/archiving engineer who I know has never crushed a recording is Peter Mew at Abbey Road. He has even detailed the evils of over compressing and limiting in some of the liner notes of releases he was involved in.

Well, if Ludwig is turning out trash now, I guess I should have said you used to know what you were getting. I wasn't aware of the remasterings of his that you complained about since I don't own any of them.
 
