Reviewers pointing out sonic flaws in audio equipment

I have some questions exactly on topic. The HFN review under discussion awards an 89% score. Here are my questions:
  • Does anyone know what a score of 100% means?
  • Does anyone know if a score higher than 89% (say, 91% or 93%) has ever been awarded to an amplifier?
  • If so, can anyone tell (by reading the relevant reviews) precisely why a 93% amp is better than an 89% amp?

From memory, I think 90% is the highest score given out so far, and only to perhaps three to five products in total; 89% is pretty exceptional as well. From what I can tell as a subscriber, the % is a combination of the subjective review, the engineering/design/build quality, the measured performance, and possibly the price.

You would need to write in to HiFi News for an explanation of what would be needed to go above this, but as mentioned, not many products hit 89%, and fewer still 90%.
Cheers
Orb
 
Although many magazines have similar systems - the Martin Colloms points system, the German Audio and Stereoplay scores, the Stereophile classes - I have to say I disagree with such systems.

Unless you read the full review and understand what was behind the points, such a system can lead to false conclusions and judgements. Also, as general equipment performance has improved over time, and our criteria for assessing performance have also changed, some bizarre things can happen. One example of this is the current Martin Colloms lists, freely available at the HifiCritic site.
 
I tend to agree with you, Microstrip,
but the HiFi News % works very well if you are a subscriber.
I have found it very consistent, and it never creeps upwards as other points systems do.
It probably works because the factors I outlined have no explicit values as they would in a true points system; they are considerations feeding into the overall score, which gives more flexibility and leeway.

Cheers
Orb
 
Guys, please think through whether anything constructive is being said here. Pure arguments are of little value. Please don't go after each other. Other members don't learn anything from that.

Well, you do get an insight into some people's personalities.
 

Ah, I just thought of one area where this usually breaks down and which seems to be handled differently by HiFi News.
I may be totally wrong, but it seems to me that most other points-based marks are adjusted according to price and to the perception of a product being high-end or mid-fi (a poor term, I agree, since technically a product either is high-end or it isn't, regardless of price).
In HiFi News it is just as possible for a moderately priced product to score above 85% - this has happened a few times - which makes it directly comparable to the expensive high end.
Meanwhile, because of the high standards expected for the upper marks, many products reviewed come in below 75%, whether cheap or expensive.
As far as HiFi News is concerned, sound quality and design should be comparable across the board; my earlier mention of price should have been better explained - it is at most a marginal weighting, possibly applied positively to cheaper products because they are closer to value for money.
Of course, this is pure speculation on my part, and maybe price is not a positive weighting at all for some reviewed products.

Cheers
Orb
 

These scoring systems are possibly the best we can have for the money (if any) we are prepared to pay for our hifi magazines - but they are flawed in so many areas.

In the UK, we see at least three types of scoring system in the magazines: HiFi World "Globes" - running from none to five; HiFi News percentages - running from 0% to 100%; and HiFi Critic's Colloms' points - open ended but currently running from zero to about 175.

One fundamental flaw is that it is not always clear, to the casual reader, in the HiFi World and HiFi News cases, whether the scoring is price adjusted - in other words, do two identically rated products score "X" regardless of price, or does the cheaper one score more than "X" and the more expensive one less than "X"? It should be the former, of course, because how can a reviewer judge a possible customer's propensity to spend disposable cash?

Another flaw - with the HiFi World and HiFi News ratings - is that the scale is absolute: in numeric terms everything has to fit within a 0-5 or a 0-100 scale. Probably for many reasons, some of which may be commercial, I would guess that, in HiFi World, 60% of kit gets a four-globe rating, with 20% getting three globes and 20% getting five globes. Hardly discriminating! And HiFi News is much the same, with around 80% of reviews falling in the 75% to 85% band, or something similar.

The HiFi Critic system is better because it seems to be a relative rating - which I guess is the reason that Martin Colloms made it open-ended. I'm not sure how Martin operates this system but I suspect that, if he is reviewing a product which he rates as the best he has ever heard, his rating method will be something along the lines of "just a bit better than my previous best in one or two areas", with steps up to "materially better than my previous best in many areas" - with this correlated to a numeric system running from, say, 10% better than previous to, say, 50% better than previous.

As an example, the rating for the MSB Platinum 200 power amplifier, a recent top scorer, is 145 points. Merely to make the point, that rating might have been derived from a judgement of "materially better than my previous best in many areas" against an earlier "best in class" power amplifier which scored 98, so MC then scores the MSB at 145. Equivalently, the 98 scorer might have superseded something scoring 66, and so on with scores of 44, 29, 19, 13 and 8.5 - that latter score corresponding to the Quad 606, a product which emerged nearly 25 years ago and is still available in updated form.

Is the MSB Platinum 200 power amplifier (scoring 145) 17 times as good as the Quad 606 (scoring 8.5)? Few would claim that - but I guess Martin would claim that over that period of almost 25 years, he has been able to identify "best in class" equipment that has made seven steps, each "materially better than my previous best in many areas".

Of course, if we wanted to bring some common sense to the MC ratings, which we then might not like, we could adjust the maximum perceived "50% better" down to "5% better" - which would then mean that, whilst the Quad 606 might stay at 8.5 points, the MSB Platinum 200 would rate at only 12 points. Which would hardly reinforce our spending £12,000 or so on an MSB when we can pick up the Quad 606 successor (the 909) for £900.
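
Purely to illustrate the arithmetic being assumed here (this is not HiFi Critic's published method, just the compounding guess sketched above), a minimal Python snippet shows how seven "X% better than the previous best" steps turn the Quad 606's 8.5 points into either roughly 145 or roughly 12, depending on the step size chosen:

```python
# Hypothetical illustration only: a fixed "X% better than the previous best"
# step applied repeatedly, starting from the Quad 606's 8.5 points.
# This is an assumption about how the open-ended scale might compound,
# not HiFi Critic's actual scoring method.

def compound_rating(base: float, step_pct: float, steps: int) -> float:
    """Apply 'step_pct % better than the previous best' the given number of times."""
    return base * (1 + step_pct / 100) ** steps

quad_606 = 8.5
steps = 7  # 8.5 -> 13 -> 19 -> 29 -> 44 -> 66 -> 98 -> 145 is seven steps

print(round(compound_rating(quad_606, 50, steps)))  # ~145 points with 50% steps
print(round(compound_rating(quad_606, 5, steps)))   # ~12 points with 5% steps
```

Run as-is, it prints roughly 145 and 12, matching the figures quoted above.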

I think another strength of the Colloms ratings, if I understand them correctly, is that there isn't claimed to be a natural "read-across" between different component categories. So a CD player rated "50" should be compared only with a CD player rated 40 (or whatever) and not with a pre-amp rated 50 or 40. Furthermore, there are no numeric ratings for loudspeakers which I think is a wise move.

So I think this is the best scoring system in relative terms, but it cannot be claimed to be the best in absolute terms, because I doubt such a thing exists.

Firstly, I think it is naive to suggest that one can capture a product's quality in a single numeric rating. Even amongst ourselves, however, we would argue over defining just two or three different criteria - say (1) physical and mechanical attributes, (2) power, pace, rhythm and timing, and (3) transparency and imaging, as one example.

In addition, reviews should be undertaken by two or three people, independently, with their views published blind. And they should be carried out over a long period so that short-term personal feelings do not bias the result. On top of that, how should we discipline the selection of ancillary equipment?

One more point: how often do magazines go back and re-review existing equipment two or three years after the original review? Very rarely - usually never. So how do we really get a picture of how a Quad 606 compares with an MSB Platinum 200?

Of course, even if the majority agreed with these ideas, would we pay the increased magazine price which would be one consequence?

IanG
 
LOL - and for me, HiFi News is the most consistent and the best, which goes to show how difficult even this is for magazine publishers, let alone how the actual review is worded :)
One thing I like about HiFi News is that its % scale seems highly consistent: 85% means excellent whether the product is cheap or expensive. Maybe this consistency is more of a recent thing (say, the last 3-5 years).
HiFi World I find the most difficult: it is hard to equate and differentiate between average, good and excellent once price is also taken into account.

I have subscribed to these and a few others for quite a few years now, but I must admit I do not subscribe to HiFi Critic, so I agree I could be missing something there.
Cheers
Orb
 
There is a recent review online that has the following sentence in it: "Everyone else uses off-the-shelf crossover programs that are designed to optimize a single parameter, typically phase or frequency response."

So the company whose product is under review is the only one that has developed its own crossover-modeling software? I don't even believe the company under review claims this. But even so, the reviewer said it. I guess the PhDs at Paradigm's Advanced Research Facility or the folks at Harman just download the latest software from DIYAudio.com and go with that.

Really, this is where some editorial oversight has to come in.
 

Orb

HiFi Critic is worth a try - but it appears only quarterly and at £16 per issue it is not cheap. No advertising!

You do get established reviewers - Colloms, Messenger and Bryant - and some interesting peripheral articles, on starting out with computer audio, for instance.

I'd say it has a trace (but only a trace) of bias towards reviewing Naim and Absolute Sounds stuff - probably because Martin Colloms has good connections with them or their dealers. The methodology seems thorough (a bit like Paul Miller) and drivel is minimised, unlike the situation with some reviewers in some monthly magazines.

IanG
 
Yeah, at best the reviewer should have said 'most' or 'common'. It is unfortunate that discussing such things in a sentence or two does not work; it really needs the separate box that some publications use to describe the technology, design and other factors.
It is like criticising manufacturers for using SPICE in its generic form without expanding on what benefits the bespoke program brings in the context of the product design under review.

Cheers
Orb
 
Having worked for years under a points-score system, I can point out the difficulties such a system creates for the end user. All the following questions are pretty much identical to the ones you get when working to a points system:

  • Is an 85% product at $1000 better or worse than a 90% one at $2000?
  • I selected all my products on the basis of their five-star sound quality and value ratings. I now have a $300 Blu-ray player, a $5000 30W tube amp into $500 82dB sensitive bookshelf loudspeakers in a room 24x36x11. What went wrong?
  • Is last year's 80% equivalent to next year's 82%?
  • Why are there no 100% products? I don't want to settle for 98% good!

My biggest dislike with a star rating is that it creates a star system. A lot of the best products would never make a five-star grade by virtue of price, availability, compatibility, etc. The HRT Streamer I mentioned earlier is a remarkable product, but would never achieve that desired 5/5 score because it doesn't include S/PDIF inputs. And, no matter how many times you try to explain the concept, people still view these things as 'five star - good, anything less - worthy of the scrap-heap'. The fact that a random group of five-star products does not automatically create a good sounding end result is forgotten as often as it's stated in print.

That being said, I don't think the no-rating alternative works especially well either, as it leaves people 'in the wind'. It works especially poorly at the lower to middle ends of the market, where people want guidance as well as reviews. No-rating reviews appeal to high-enders, however, because they want their buying decisions informed - but not dictated - by the review. Buying a five-star $300 amplifier equates to 'making an informed decision', whereas buying a five-star $3000 amp equates to 'following the pack'.

There has to be a better way. I like Stereoplay's system of classes and ratings within those classes. You know where you stand with a 'Very Bad', 'Bad', 'Average', 'Good', or 'Very Good' rating in the context of a 'Basic', 'Standard', 'Reference' and 'Absolute Reference' set of classes. It's still prone to difficulty and confusion (is 'Very Good/Basic' better or worse than 'Average/Standard'?), but it seems to give a set of strata to performance.
 
To echo Alan's eloquent post above, my biggest complaint with star and numerical grading systems is that they confer a false precision on subjective opinion. They lead us to fall victim to the 'tyranny of numbers'. (BTW, there is a great book by that name, by David Boyle.)

My impression is that publications really only have the resources to formally review equipment that they deem worthy of recommendation, or popular products that command their readers' attention. Therefore, I would suggest a couple of ways to improve communication with the end user:

1. I would love for publications to list equipment that is submitted for review, auditioned, but not formally reviewed. This would eliminate the manufacturer's incentive to submit products for review knowing that they have a free pass if the said product does not meet the publication's admittedly subjective criteria of 'quality'. It would satisfy the complaints that reviewers don't 'criticize' a product while saving reviewers the difficulty of detailing the non-reviewed equipment's perceived deficiencies.

2. I find it helpful when reviewers compare the sound of equipment to similarly priced products and, if appropriate, to cheaper alternatives. Michael Fremer's recent Ypsilon line stage review, for instance, compared it to his reference DartZeel. It was clear to the reader that Michael liked both and that exchanging one for the other provided a different, albeit not necessarily better, sound. This is useful information for the user contemplating a purchase. Comparison to other similar-sounding equipment (if it is within the reviewer's experience) would be even more helpful.
 
