The Absolute Sound’s Review Methodology: First Principles

Yes, advertising is an essential part of these magazines. As for the guaranteed good review, I won't go that far. But is there some influence on, and preference for, what gets reviewed and who does those reviews? I think that for sure exists.
I've never read a bad review in 30 years of reading The Absolute Sound and Stereophile. There is a mutual ass-kissing which goes on between the manufacturers/advertisers and the so-called reviewers. These reviewers are also often paid in long-term loans of equipment, or extreme discounts if they want to buy it. The whole industry is tainted. There were some reviewers I had more faith in, like Art Dudley, but he's no longer with us. I absolutely do not trust Robert Harley or anyone over at Stereophile. Besides, a review is just one person's opinion, and it's done in a different room with different associated equipment and different cables, so it is of limited value anyway. You have to go hear the product for yourself.
 
They are easy to miss.
Yes, because they almost never occur. If a so-called reviewer publishes a bad review, the company will never advertise with the magazine again. Since the goal is to sell advertising, they do not want to upset the manufacturer. So they might say that a product has a certain sound signature, or could be improved in one area or another, but they never actually say that it's a bad product or that someone shouldn't buy it, unless it's some kind of little snake-oil accessory from China that they're not going to be getting any advertising revenue from anyway.
 
. . . bad review,

they never actually say that it’s a bad product

I'll give you the benefit of the doubt and ask you a genuine question: when was the last time you auditioned an amplifier or a line stage pre-amplifier or a DAC in your home system and concluded the product was actually "bad"?
 
Question: Why do you do subjective reviews?

Answer: We don’t. Or for the most part we try not to make that the core of our reviewing. We aim to do observational, objective reviews. Now, there is some confusion about terminology in which “quantification” is “objective” whereas human “observation” is “subjective”. But this is wrong. That notion incorrectly glosses over a critical distinction. “Subjective” in the dictionary means human reactions that primarily involve feelings. But humans are also capable of observing objectively.
I think that this highlighted part is where the notion that what TAS are fundamentally doing is an objective assessment technique goes wrong for me… the claim that subjective by definition means mostly just responses around feelings. I'd suggest this is a skewed definition: subjective human perception isn't just emotional response to things at all. Human perception can be a highly analytical and rational process, and can utilise benchmarks and standards to be more objective in its application and interpretation, and less biased in its nature. But as long as personal perception is the primary tool, it's still a subjective assessment technique… just applied more objectively than if it were less structured.

From my years in training and assessment, subjective assessments in qualification frameworks are simply assessment techniques based on interpretation through a personal human perspective. Training and assessment qualification is not exactly the same thing as audio reviewing, but in principle the assessment theory and context are much the same, I'd suggest.

So if a better method is applied, it can be subjective assessment done in a more objective manner. This is generally a good thing: still subjective assessment, but more objective and rigorous, and ideally a fairer framework.

It's ultimately just a TAS interpretation here (a subjective one), so ironically a subjective viewpoint has led them to a different definition, but I'd suggest it's possibly the less common take on it. I guess grounding it in some recognised assessment theory would help; there's plenty of assessment theory available on the internet. If they searched more specifically on subjective versus objective assessment techniques, they'd probably see their interpretation is not the usual one.

But definitions of technique aside, I find nothing compellingly unusual about how TAS then actually approach a review… it's all pretty standard stuff, and if they choose to define their approach in a different way that's not a real issue, as it's definitely not a matter of compliance.

Are there any journalism standards required for assessing audio gear? And what percentage of audio reviews are done by trained or qualified journalists, or at any rate by people trained in assessment practice? Not sure.

Perhaps TAS playing the objective assessment card could be about trying to establish credibility, because there is a lack of standards in what reviewers actually do in terms of assessment… it's self-regulated and highly variable. I do think there is too much beating up on reviewers by hobbyists in general. There seem to be good and bad reviewers and quite a deal of variation in standards in general. Perhaps it's not a great system. I'm not being an apologist exactly, but I guess we do have the system we paid for.
 
Perhaps TAS playing the objective assessment card could be about trying to establish credibility, because there is a lack of standards in what reviewers actually do in terms of assessment… it's self-regulated and highly variable. I do think there is too much beating up on reviewers by hobbyists in general. There seem to be good and bad reviewers and quite a deal of variation in standards in general.

I agree there are good and bad reviews and a great deal of variation among them in cogency, clarity and completeness.

For one thing, it is a hobby for most of us. Looking at the list of reviewers, almost all of them have day jobs. There are but a handful of people who make a living from audio reviews and columns. Audio hobby publishers compensate reviewers in the only way they can while remaining viable on ad revenue. How many charge for a subscription?

If you want credentials and assessment training, this hobby may not be the place to find them. Imo, that's not going to happen, and I'd be sceptical of those claiming expertise beyond what is out there today. Those who feel such credentials are needed should be prepared to pay a much higher price for the work than they are paying now.

Several 'commentators' (being kind) on this thread who think it's all payola don't have a clue about audio reviewing. I encourage them to avoid reading reviews and to rely on all the free, unvetted forum advice and opinion.

I'd suggest this is a skewed definition: subjective human perception isn't just emotional response to things at all. Human perception can be a highly analytical and rational process, and can utilise benchmarks and standards to be more objective in its application and interpretation, and less biased in its nature. But as long as personal perception is the primary tool, it's still a subjective assessment technique… just applied more objectively than if it were less structured.

What is impersonal perception? If I want a description of how a system or component sounds, I want it from a person who has spent a couple of months with the item under review and who knows how to describe what he hears.

Reading through this thread, I conclude that the use of 'objective' and 'subjective' brings little insight. It is easy to say "it's just his opinion" or, as at least one regular puts it, "it's just his subjective opinion". I wish someone would lay out the meaning of the phrase "objective audio review" as it pertains to sonic description.

Reviewers make explicit and implicit judgements about what they value. Imo, too many reviewers conflate the perceptions they describe with what is valuable in the sound they hear. Pay attention to what a review mentions and what it does not mention. It is possible to keep value and perception separate, and to be clear when passing judgement.
 
that subjective by definition means mostly just responses around feelings. I'd suggest this is a skewed definition: subjective human perception isn't just emotional response to things at all. Human perception can be a highly analytical and rational process, and can utilise benchmarks and standards to be more objective in its application and interpretation, and less biased in its nature. But as long as personal perception is the primary tool, it's still a subjective assessment technique… just applied more objectively than if it were less structured.

. . .

So if a better method is applied, it can be subjective assessment done in a more objective manner. This is generally a good thing: still subjective assessment, but more objective and rigorous, and ideally a fairer framework.

It's ultimately just a TAS interpretation here (a subjective one), so ironically a subjective viewpoint has led them to a different definition, but I'd suggest it's possibly the less common take on it.

If they searched more specifically on subjective versus objective assessment techniques, they'd probably see their interpretation is not the usual one.

if they choose to define their approach in a different way that's not a real issue, as it's definitely not a matter of compliance.
Perhaps TAS playing the objective assessment card could be about trying to establish credibility, because there is a lack of standards in what reviewers actually do in terms of assessment… it's self-regulated and highly variable.

I think you have explicated this topic in the way I have been looking for; my own thinking on it has thus far been nascent. I agree that TAS' definition of subjective review as nothing more than feelings simply is not correct, and it is what derails me in the first place from agreeing with TAS' objective assessment characterization.

And I don't think the example of observing the height of the house versus the height of the car illustrates the objective assessment definition which TAS is trying to validate.
 
Here's a bad review, IMHO, from the measurement side; read JA's summary.


Rob :)
 
Here's a bad review, IMHO, from the measurement side; read JA's summary.


Rob :)

I am very confident Tascam was talking about subjective "bad" reviews, not reviews based on objective measurements.

I believe that objective measurement reports are inherently positive (descriptive), not normative.

Of course if an audio component shorts out and sparks up upon being plugged in and blows the house circuit breaker I would characterize that as an objectively bad component.
 
I'll give you the benefit of the doubt and ask you a genuine question: when was the last time you auditioned an amplifier or a line stage pre-amplifier or a DAC in your home system and concluded the product was actually "bad"?
2003: Rowland Concentra 2, boring and grainy sounding on vocals. 1994: Mark Levinson No. 38 preamp, boring and rolled-off sounding; No. 35 processor, absolutely non-involving and dark sounding. I ended up with a Theta Gen V, which was great.
 
I am very confident Tascam was talking about subjective "bad" reviews, not reviews based on objective measurements.

I believe that objective measurement reports are inherently positive (descriptive), not normative.

Of course if an audio component shorts out and sparks up upon being plugged in and blows the house circuit breaker I would characterize that as an objectively bad component.

Hello Ron

Simple question: what's the difference if one, subjective or objective, is "bad"? Doesn't that slant the review?

The notion that you get a false equivalence by using a reference as a comparison to get an objective view vs measurements is nonsense.

Can listening to a speaker reveal impedance issues? By listening to a speaker can you form a graphical image in your mind of what you are hearing? Can you see holes in the response caused by poor crossover design?

The answer is no to all of the above.

For a balanced and meaningful review you need both measurements and a listening evaluation.

Just comparing something to a reference doesn't tell you enough. I'm not even going to get into what the reference may be, whether it should even be used as a reference, and how it was chosen.

Rob :)
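Rob's point about what measurements add is worth making concrete. Below is a minimal, hypothetical sketch in Python, not anything from an actual review protocol: the response data is simulated and the 3 dB threshold is an illustrative assumption. It shows how a measured magnitude response makes a crossover "hole" explicit, with a location and a depth, in a way a listening impression alone cannot.

```python
# Hypothetical sketch: scanning a (simulated) measured magnitude response
# for a localized "hole" such as a crossover misalignment can produce.
# In practice the data would come from a calibrated microphone sweep.
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20_000), 500)  # 20 Hz to 20 kHz
spl = np.full_like(freqs, 87.0)                           # flat 87 dB baseline
# Simulate a 6 dB suckout centred near a 2.4 kHz crossover point.
spl -= 6.0 * np.exp(-((np.log10(freqs) - np.log10(2400.0)) ** 2) / 0.005)

# Flag any band dipping more than 3 dB below the median level (assumed threshold).
median_spl = np.median(spl)
dips = freqs[spl < median_spl - 3.0]
if dips.size:
    print(f"Hole detected: roughly {dips.min():.0f}-{dips.max():.0f} Hz, "
          f"deepest at {freqs[np.argmin(spl)]:.0f} Hz "
          f"({spl.min() - median_spl:.1f} dB below median)")
```

Nothing here replaces the listening evaluation; it simply shows why the two are complementary: the measurement pins down where the problem sits and how big it is.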
 
I think the topic of bad reviews has been discussed many times. If a reviewer gets a "bad" product then they simply tell the manufacturer that this is not going to work. They give them some feedback and move on. Of course this doesn't help the consumer. I also find that when reading reviews one needs to read between the lines. Often there are some subtle comments made that indicate what the reviewer doesn't like about a component.
 
2003: Rowland Concentra 2, boring and grainy sounding on vocals. 1994: Mark Levinson No. 38 preamp, boring and rolled-off sounding; No. 35 processor, absolutely non-involving and dark sounding. I ended up with a Theta Gen V, which was great.
Thank you. Your negative impressions of the components you found to be bad strike me as subjective preferences.

We don't see abjectly "bad" subjective reviews because major components do what they are supposed to do, even if they are not to everybody's personal, subjective cup of tea. This is not an indictment of the subjective reviewing process or of the business of our industry.
 
Hello Ron

Simple question: what's the difference if one, subjective or objective, is "bad"? Doesn't that slant the review?
I'm really sorry but I'm afraid I don't understand the question here.

The notion that you get a false equivalence by using a reference as a comparison to get an objective view vs measurements is nonsense
Are you saying that you do not agree with the TAS view that using live music as a reference means you can make an objective observational assessment?

Can listening to a speaker reveal impedance issues? By listening to a speaker can you form a graphical image in your mind of what you are hearing? Can you see holes in the response caused by poor crossover design?

The answer is no to all of the above.
I agree.
For a balanced and meaningful review you need both measurements and a listening evaluation.
I personally don't care about measurements, but I often am curious to see them.
 
Hello Ron

Simple question: what's the difference if one, subjective or objective, is "bad"? Doesn't that slant the review?

The notion that you get a false equivalence by using a reference as a comparison to get an objective view vs measurements is nonsense.

Can listening to a speaker reveal impedance issues? By listening to a speaker can you form a graphical image in your mind of what you are hearing? Can you see holes in the response caused by poor crossover design?

The answer is no to all of the above.

For a balanced and meaningful review you need both measurements and a listening evaluation.

Just comparing something to a reference doesn't tell you enough. I'm not even going to get into what the reference may be, whether it should even be used as a reference, and how it was chosen.

Rob :)

I am not sure it is that clear cut.
People can hear a wonky frequency response, and some can pick out edge diffraction issues.
Of course those also comport with the reality of the sound.
Cabinet resonances can also be heard, as well as seen in an impedance graph.

The reason to have objective measurements is to be able to talk about things in an engineering or theoretical sense. But they also affect the sound in the subjective sense.
They are both correlated as well as causal.
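Holmz's point that a cabinet resonance is both audible and visible in an impedance graph can be sketched in the same spirit. The following is a minimal, simulated example; the Lorentzian bump model and all frequencies and magnitudes are illustrative assumptions, not real measurements of any product.

```python
# Hypothetical sketch: a cabinet resonance appearing as a small secondary
# wrinkle in a (simulated) impedance sweep, away from the driver's main peak.
import numpy as np

freqs = np.logspace(np.log10(10), np.log10(1000), 2000)  # 10 Hz to 1 kHz

def bump(f, f0, height, width):
    """Simple Lorentzian resonance bump centred at f0."""
    return height / (1.0 + ((f - f0) / width) ** 2)

z = (6.0                                  # voice-coil DC resistance, ohms
     + bump(freqs, 35.0, 40.0, 8.0)       # main driver resonance near 35 Hz
     + bump(freqs, 240.0, 1.5, 6.0))      # small cabinet-resonance wrinkle

# Find local maxima: points higher than both neighbours.
interior = (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])
peaks = freqs[1:-1][interior]
print("Impedance peaks near (Hz):", np.round(peaks).astype(int))
```

The second, smaller peak near 240 Hz is the kind of wrinkle a listener might hear as boxiness but could not place or quantify by ear alone, which is the correlated-and-causal relationship described above.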
 
Thank you. Your negative impressions of the components you found to be bad strike me as subjective preferences.

We don't see abjectly "bad" subjective reviews because major components do what they are supposed to do, even if they are not to everybody's personal, subjective cup of tea. This is not an indictment of the subjective reviewing process or of the business of our industry.
You forgot about the corrupt reviewers who are paid off with free long-term loans, extreme discounts, and other perks. The industry is, to a large degree, review driven. I learned the hard way not to trust anything in these magazines. I was royally screwed 30 years ago when I bought all this Jadis equipment after reading a Stereophile review. I had the same result with the Mark Levinson No. 35, which received a rave review. It was nicely built but sounded dark and terribly sterile.
 
The glossy mags routinely offer an 80% discount off newsstand prices for those who purchase a one-year subscription.

Seems to me that the glossy mags are well aware that they've got a serious credibility issue just based upon that alone, not to mention other issues...
 
You forgot about the corrupt reviewers who are paid off with free long-term loans, extreme discounts, and other perks. The industry is, to a large degree, review driven. I learned the hard way not to trust anything in these magazines. I was royally screwed 30 years ago when I bought all this Jadis equipment after reading a Stereophile review. I had the same result with the Mark Levinson No. 35, which received a rave review. It was nicely built but sounded dark and terribly sterile.
I realize you are new to the forum, but there was a whole 41-page thread dedicated to the topic of long-term equipment loans to reviewers.

 
You forgot about the corrupt reviewers who are paid off with free long-term loans, extreme discounts, and other perks. The industry is, to a large degree, review driven. I learned the hard way not to trust anything in these magazines. I was royally screwed 30 years ago when I bought all this Jadis equipment after reading a Stereophile review. I had the same result with the Mark Levinson No. 35, which received a rave review. It was nicely built but sounded dark and terribly sterile.
Can I ask what D/A converter you replaced when you purchased the 35? What transport were you using? 31?

Thanks.
 
Since you mentioned the 35, I went back and re-read that review online. As far as review standards go, I would consider it a well-written review. He compared the 35 with two different top transports (the CEC and the No. 31), compared different digital cable types, and compared it against other DACs (the No. 30 and the Theta Gen III). By today's standards he went the extra mile or two. He states that the Gen III is more detailed, cooler, leaner and excels at transient attack. Since you went with the Gen V, I would suppose that you simply preferred the sound of Theta Digital over Mark Levinson. Based on this, my guess would be that you never liked the sound of Wadia. I owned all of these brands at one time or another (along with several others).

Going back to the review, I also noticed his system consisted of Wilson WATT/Puppies and Krell amplification. I would guess that he needed something to dial back the etched detail that this combination produced, so I don't think he was lying when he said he really liked it.
 
