From memory,
I think 90% is the highest given out so far, and that is for 3-5 products total.
89% is pretty exceptional as well.
From what I can tell as a subscriber, the percentage is a combination of the subjective review, engineering (design, build quality and so on), measured performance, and possibly price.
You would need to write in to HiFi News for an explanation of what it takes to score above this, but as mentioned, not many products reach 89%, and fewer still hit 90%.
Cheers
Orb
These scoring systems are possibly the best we can have for the money (if any) we are prepared to pay for our hifi magazines - but they are flawed in so many areas.
In the UK, we see at least three types of scoring system in the magazines: HiFi World "Globes" - running from none to five; HiFi News percentages - running from 0% to 100%; and HiFi Critic's Colloms points - open-ended but currently running from zero to about 175.
One fundamental flaw, in the HiFi World and HiFi News cases, is that it is not always clear to the casual reader whether the scoring is price-adjusted - in other words, do two identically rated products score "X" regardless of price, or does the cheaper one score more than "X" and the more expensive one less than "X"? It should be the former, of course, because how can a reviewer judge a prospective customer's propensity to spend disposable cash?
Another flaw - with the HiFi World and HiFi News ratings - is that the scale is absolute: in numeric terms, everything has to fit within a 0-5 or a 0-100 scale. Probably for many reasons, some of which may be commercial, I would guess that in HiFi World around 60% of kit gets a four-globe rating, with 20% getting three globes and 20% getting five. Hardly discriminating! And HiFi News is much the same, with around 80% of reviews falling in the 75% to 85% band, or something similar.
The HiFi Critic system is better because it seems to be a relative rating - which I guess is the reason that Martin Colloms made it open-ended. I'm not sure how Martin operates this system but I suspect that, if he is reviewing a product which he rates as the best he has ever heard, his rating method will be something along the lines of "just a bit better than my previous best in one or two areas", with steps up to "materially better than my previous best in many areas" - with this correlated to a numeric system running from, say, 10% better than previous to, say, 50% better than previous.
As an example, the rating for the MSB Platinum 200 power amplifier, a recent top scorer, is 145 points. Merely to make the point, that rating might have been derived from a judgement of "materially better than my previous best in many areas" against an earlier "best in class" power amplifier which scored 98, so MC then scores the MSB at 145. Equivalently, the 98 scorer might have superseded something scoring 66, and so on with scores of 44, 29, 19, 13 and 8.5 - that latter score corresponding to the Quad 606, a product which emerged nearly 25 years ago and is still available in updated form.
Is the MSB Platinum 200 power amplifier (scoring 145) 17 times as good as the Quad 606 (scoring 8.5)? Few would claim that - but I guess Martin would claim that over that period of almost 25 years, he has been able to identify "best in class" equipment that has made seven steps, each "materially better than my previous best in many areas".
Of course, if we wanted to bring some common sense to the MC ratings, which we then might not like, we could adjust the maximum perceived "50% better" down to "5% better" - which would then mean that, whilst the Quad 606 might stay at 8.5 points, the MSB Platinum 200 would rate at only 12 points. Which would hardly reinforce our spending £12,000 or so on an MSB when we can pick up the Quad 606 successor (the 909) for £900.
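(As an aside, the arithmetic behind that 50%-versus-5% comparison is just compounding. Here is a quick Python sketch - my own illustration using the figures above, not anything published by HiFi Critic - showing how seven steps up from the Quad 606's 8.5 points land near 145 or near 12 depending on the assumed step size.)

```python
def final_score(start, steps, pct_better):
    """Score reached after compounding a 'pct_better' improvement 'steps' times."""
    return start * (1 + pct_better) ** steps

# Seven "50% better" steps take the Quad 606's 8.5 points to roughly 145:
print(round(final_score(8.5, 7, 0.50), 1))  # ~145.2
# Seven "5% better" steps only reach about 12:
print(round(final_score(8.5, 7, 0.05), 1))  # ~12.0
```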
I think another strength of the Colloms ratings, if I understand them correctly, is that no natural "read-across" is claimed between different component categories. So a CD player rated "50" should be compared only with a CD player rated 40 (or whatever), and not with a pre-amp rated 50 or 40. Furthermore, there are no numeric ratings for loudspeakers, which I think is a wise move.
So I think this is relatively the best scoring system, but it cannot be claimed to be the best absolutely, because I doubt such a thing exists.
Firstly, I think it is naive to suggest that one can capture a product's quality in a single numeric rating. However, even amongst ourselves we would argue over defining two or three different criteria - say, (1) physical and mechanical attributes, (2) power, pace, rhythm and timing, and (3) transparency and imaging, as one example.
In addition, reviews should be undertaken by two or three people, independently, with their views published blind. And they should be carried out over a long period, so that short-term personal feelings do not bias the result. On top of that, how should we discipline the selection of ancillary equipment?
One more point: how often do magazines go back and re-review extant equipment two or three years after the original review? Very rarely, if ever. So how do we really get a picture of how a Quad 606 compares with an MSB Platinum 200?
Of course, even if the majority agreed with these ideas, would we pay the increased magazine price which would be one consequence?
IanG