The Absolute Sound (magazine) take on many aspects of computer assisted music reproduction

As I posted on that forum, if two "identical" files with the same playback chain sound different, then either the files aren't identical, or the listening test methodology is flawed. Are there other possibilities I'm missing?
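
For what it's worth, the "files aren't identical" possibility is trivial to check before any listening is done: hash both files and compare. A minimal sketch in Python (the file names are hypothetical placeholders):

```python
# Minimal sketch: verify that two candidate files are bit-identical before a listening test.
# The file names below are hypothetical placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    a = sha256_of("track_copy_A.wav")
    b = sha256_of("track_copy_B.wav")
    print("bit-identical" if a == b else "files differ")
```

If the digests match, any audible difference has to come from the playback chain or the test method, not from the data itself.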
 
As I posted on that forum, if two "identical" files with the same playback chain sound different, then either the files aren't identical, or the listening test methodology is flawed. Are there other possibilities I'm missing?
Having read the articles, I will have a lot more to say later :). For now, the comments there were quite nice and folks did pick up on one of the key arguments against the findings.

From my read of the response, they say that different files cause differences in jitter, plus unknown factors, which produce audible differences between identical files. In theory, the former can be true in that the behavior of the operating system, and hence the hardware activity, will indeed be different when reading one file vs the other. Since these guys used the on-board digital out, the chances of that signal getting polluted are high enough to make this theory plausible.

But here is the problem in their case. If we are to believe that fidelity changes as we play one file vs another, then doesn't that invalidate all the testing they did? I mean, if that is the case, then the performance of the fixture (PC + outboard DAC) is so variable as to make any kind of testing with different files impossible. After all, if the mere fact of moving the bits from one place on the hard disk to another makes the sound different, maybe that is why the lower-sample-rate file sounded worse than the higher one! One can't buy into one argument and not the other. Ditto for their testing of different sample rates in JRiver. Maybe all of that is due to CPU usage changing and nothing to do with any fidelity differences due to sampling rate. They could have kind of ruled that out by changing the CPU load arbitrarily and seeing if that changes audio.
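
To illustrate the kind of control I mean: one could run an arbitrary CPU load alongside playback of the same file and see whether listeners still report a change. A rough sketch of such a load generator (the worker count and duration are made-up values, not anything from the article):

```python
# Rough sketch of an arbitrary CPU load generator for a playback control test.
# Run this while the same file plays, then repeat the listening trial without it.
import multiprocessing as mp
import time

def burn(seconds):
    """Busy-loop for roughly `seconds` to keep one core occupied."""
    end = time.monotonic() + seconds
    x = 0
    while time.monotonic() < end:
        x += 1  # meaningless work, just CPU activity

if __name__ == "__main__":
    workers = max(1, mp.cpu_count() // 2)  # arbitrary: load about half the cores
    duration = 60                          # seconds, long enough to span a trial
    procs = [mp.Process(target=burn, args=(duration,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```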
 
... If we are to believe that fidelity changes as we play one file vs another, then doesn't that invalidate all the testing they did? I mean, if that is the case, then the performance of the fixture (PC + outboard DAC) is so variable as to make any kind of testing with different files impossible. After all, if the mere fact of moving the bits from one place on the hard disk to another makes the sound different, maybe that is why the lower-sample-rate file sounded worse than the higher one! One can't buy into one argument and not the other. Ditto for their testing of different sample rates in JRiver. Maybe all of that is due to CPU usage changing and nothing to do with any fidelity differences due to sampling rate.

Indeed this has been my primary concern from the beginning. Although TAS is not as influential as it once was, I still think it's completely irresponsible for a well-known specialty (print) mag to publish something like this. Among other things, it seems to be trying to undermine the whole foundation of computer audio by implying that 2 identical computer audio files may sound different for undetermined reasons.

And again, as Amir says, if the testing methodology is flawed (and it must be, to have produced the findings reported in this post), then none of their conclusions can be considered valid, even though some probably are.


... They could have kind of ruled that out by changing the CPU load arbitrarily and seeing if that changes audio.

Well, they claim to have controlled for that by using two different computer systems, but...
 
(...) Although TAS is not as influential as it once was, I still think it's completely irresponsible for a well-known specialty (print) mag to publish something like this.

Rbbert,

I have to disagree on this point. TAS is not IEEE. Even AES, as some of our members explained before, is not a peer-reviewed publication.

The first part of the article explained the hows and whys of the study and its limits. The TAS-connected Avguide forum opened its pages to an open discussion with one of the authors, where the polemical points are being debated. That is more than I would expect from such publications.

This paper tries to analyze and open the discussion on many aspects of music servers that have been kept hidden from debate until now, or at least not debated so openly. It is not a prescription book, though an introduction stating that more clearly would be welcome. Also, being published across several separate issues can make it harder to grasp as a whole. But I think all interested WBF members are getting good value from it. :)

Surely not everyone will agree with me. But as I pay for my subscription to TAS, I hope they keep publishing this type of article.
 
This whole business is yet another example of all systems being the sum of their parts. In music playback, at the time of the actual generation of the acoustic audio signal, every part of the mechanism responsible for that process, especially the parts containing electronic components, IS part of the audio system. That includes the computer, or music server. Electronics in themselves know nothing of the nice little boundaries we, as outside observers, place upon them, as in: one lump of electronics is the computer, the digital stuff, and the other lot is nice, human-understandable analogue audio circuitry. No, as far as electrons and electromagnetic waves are concerned, it's all one nice continuum. To summarise: everything affects everything; nothing is truly isolated from anything else; it's always a matter of degree, not absolutes.

So, the earlier post was correct to say the differences were due to defects in the playback chain. That is 100% correct. But it was not correct to say such things are audiophile nonsense ...

Frank
 
...
So, the earlier post was correct to say the differences were due to defects in the playback chain. That is 100% correct. But it was not correct to say such things are audiophile nonsense ...

Frank

I don't see where anyone called it "audiophile nonsense". What I'm saying is that the problem is that defects in the playback chain were not even considered as possible explanations for the observed phenomena. Not in the original article, nor in the avguide web responses.
 
...This paper tries to analyze and open the discussion on many aspects of music servers that have been kept hidden from debate until now, or at least not debated so openly. It is not a prescription book, though an introduction stating that more clearly would be welcome...

I don't think these points are at all hidden. There are several websites (e.g., computeraudiophile.com, audiostream.com, and many others) devoted to just such issues. To me, this series of articles just opens the door to audiophile tomfoolery instead of reasonable investigation of computer audio (sorry for the repetitive posting, but I feel strongly about this).
 
Indeed this has been my primary concern from the beginning. Although TAS is not as influential as it once was, I still think it's completely irresponsible for a well-known specialty (print) mag to publish something like this. Among other things, it seems to be trying to undermine the whole foundation of computer audio by implying that 2 identical computer audio files may sound different for undetermined reasons.

...which would justify touting computer hardware for the 'golden ear' market, natch. Right up audiophile alley.

And again, as Amir says, if the testing methodology is flawed (and it must be, to have produced the findings reported in this post), then none of their conclusions can be considered valid, even though some probably are.

Not just that. If their statistics and controls aren't adequate to the task, the reports are dubious at best. 'Take my word for it, I'm a trained listener and my gear is the best' won't do.
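
To put a number on "adequate statistics": a forced-choice blind trial such as ABX is a simple binomial experiment, and the exact chance of scoring some number of hits by pure guessing is easy to compute. A small sketch with illustrative trial counts (not figures from the article):

```python
# Exact one-sided p-value for a forced-choice (ABX-style) listening test:
# the probability of getting at least `correct` answers out of `trials` by guessing (p = 0.5).
from math import comb

def abx_p_value(correct, trials):
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers only:
print(abx_p_value(12, 16))  # about 0.038 -- unlikely to be pure guessing
print(abx_p_value(9, 16))   # about 0.40  -- entirely consistent with guessing
```

Without a trial count of that kind, "I heard a difference" carries no statistical weight at all.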
 
Rbbert,

I have to disagree on this point. TAS is not IEEE. Even AES, as some of our members explained before, is not a peer-reviewed publication.
JAES is peer reviewed. AES convention papers and posters aren't.

This paper tries to analyze and open the discussion on many aspects of music servers that have been kept hidden from debate until now, or at least not debated so openly.

'Hidden'? By whom? You might be surprised to learn that such issues have been dissected before, albeit on more 'skeptical' audio sites than perhaps you are familiar with.


fas42 said:
But it was not correct to say such things are audiophile nonsense ...

Obviously I beg to differ. 'Everything affects everything' makes it sound like audio systems are hothouse flowers negatively affected by the slightest change. But that renders the concept of 'tolerances' meaningless. Evidence suggests the contrary: that there are measurable variations that don't have any significant effects on system performance.
 
I don't see where anyone called it "audiophile nonsense". What I'm saying is that the problem is that defects in the playback chain were not even considered as possible explanations for the observed phenomena. Not in the original article, nor in the avguide web responses.
The "defects" I talk of are that the electronics on the analogue side are affected by what's occurring in the environment, specifically the type of processing activity on the server. Of course it shouldn't, therefore it is a "defect" that there is such an occurrence.

Just wanting to believe that one lot of electronics has no impact on another lot, because they "theoretically" are not connected in a meaningful way, is not going to work in making misbehaviour or unexpected interactions go away. If there is an observed behaviour, then there should be complete openness in considering what may be causing the situation ...

Frank
 
Obviously I beg to differ. 'Everything affects everything' makes it sound like audio systems are hothouse flowers negatively affected by the slightest change. But that renders the concept of 'tolerances' meaningless. Evidence suggests the contrary: that there are measurable variations that don't have any significant effects on system performance.
Unfortunately, very unfortunately, that is exactly the case. A car analogy goes nicely here: a bog-standard family sedan will do the job nicely of carting the family around the suburbs with the engine out of tune, on any old tyres, etc.; but take a decent Ferrari, put the hammer down, and the slightest mismatching of tyres or flat spot in engine performance will be screamingly obvious.

In other words, the more ambitious the enterprise, the more glaringly apparent will be every minor imperfection ...

Frank
 
(...) 'Hidden'? By whom? You might be surprised to learn that such issues have been dissected before, albeit on more 'skeptical' audio sites than perhaps you are familiar with.

Thanks. Can you point us to some direct links or references to articles that debate the most controversial aspects of this article in a systematic way you consider valid?
BTW, what do you mean by "skeptical" audio sites?
 
... the most controversial aspects of this article...?

There isn't likely to be much discussion about how two identical audio files can sound different, which is likely the single most controversial assertion. But there's a fair amount of discussion about the upsampling on www.computeraudiophile.com
 
There isn't likely to be much discussion about how two identical audio files can sound different, which is likely the single most controversial assertion. But there's a fair amount of discussion about the upsampling on www.computeraudiophile.com

I just clicked on the link you kindly supplied. On the opening page I immediately found a review of a DAC by the forum founder:

"I'm not a big fan of blind listening tests. I rarely put myself through blind tests when reviewing products. My standard reviewing methodology is to listen to familiar components for a few hours, or even days, then place the piece of reviewed gear into the system. "

Then I searched the site a bit for "blind test" and mostly found posts saying "but it was not a blind test" or similar, but not a single post with results that could be considered to come from a valid DBT. Perhaps such posts and tests exist in the forum and I simply could not find them. It was in this sense that I wrote "hidden" in a previous post.
 
In my fairly extensive experience, valid DBTs evaluating audio equipment are so rare as to be virtually non-existent, so if that's what you're looking for, good luck finding anything.

There does seem to be a developing Web consensus that Dr. Zeilig's series of articles contains numerous "fatal flaws" and (as Gordon Rankin of Wavelength Audio put it) "non-credible" results.
 
Unfortunately, very unfortunately, that is exactly the case. A car analogy goes nicely here: a bog-standard family sedan will do the job nicely of carting the family around the suburbs with the engine out of tune, on any old tyres, etc.; but take a decent Ferrari, put the hammer down, and the slightest mismatching of tyres or flat spot in engine performance will be screamingly obvious.

In other words, the more ambitious the enterprise, the more glaringly apparent will be every minor imperfection ...

So often do audiophiles haul out the auto analogy, and so often does it fail.

An out-of-tune engine, old tyres, etc. are hardly 'slight' changes. No one would dispute either the theoretical or practical reasons why they could negatively affect performance, of either a Ford or a Ferrari. And your assertion that a Ferrari is vastly more sensitive to them is just that: an assertion. Certainly a Ferrari can perform differently from another model. But under what conditions? Both cars driven at their performance limits ('hammer down')? Is that a fair comparison? And what level of change does 'slight mismatching of tyres' really signify, quantitatively? I would posit that there is a level of 'slight' which would be unnoticeable in either a Ferrari or a Ford.

The better analogy to the audio under discussion would be more like, reporting a perceivable performance increase due to using a different car wax. Or a different tyre inflator plug. Will you notice that more in a Ferrari than a Ford?

The plain fact is, not every variation matters to performance in audio -- or in cars. If your audio system is really such a 'hothouse flower', I would suggest you replace it with something more robust.
 
Thanks. Can you point us to some direct links or references to articles that debate the most controversial aspects of this article in a systematic way you consider valid?
BTW, what do you mean by "skeptical" audio sites?

The Hydrogenaudio.org forum has discussed the 'flac sounds different from wav' fallacy numerous times, among many other audiophile fallacies. It's discussing this article right now.
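
That particular fallacy is also easy to check for yourself: decode both files and compare the raw samples. A minimal sketch, assuming the third-party soundfile library is installed and using placeholder file names:

```python
# Minimal sketch: check that a FLAC file decodes to exactly the same samples as its WAV source.
# Assumes the third-party `soundfile` library (libsndfile bindings) and numpy are available.
import numpy as np
import soundfile as sf

wav_data, wav_rate = sf.read("track.wav", dtype="int32")    # hypothetical file names
flac_data, flac_rate = sf.read("track.flac", dtype="int32")

same = (wav_rate == flac_rate) and np.array_equal(wav_data, flac_data)
print("decoded audio identical" if same else "decoded audio differs")
```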

A skeptical site is one that views claims that appear to fly in the face of known science (including psychoacoustics) and technology with a critical eye. It does not accept 'sighted' reports as reliable or even credible without some objective supporting evidence.

btw, i don't consider 'computeraudiophile' to be a skeptical site...the owner/moderator has championed some utter audiophile nonsense (which has been dissected at ha.org), and considers sighted evaluation to be trustworthy (at least, when *he* does it).
 
The Hydrogenaudio.org forum has discussed the 'flac sounds different from wav' fallacy numerous times, among many other audiophile fallacies. It's discussing this article right now. (...)

Thanks again. Again I opened the link you provided. There is a thread there about the TAS article.
The opening sentence of the opening post slashing the article:

I haven't read the TAS article, but I followed up online and found one of its authors responding to queries about it, on another forum.


I will pass. :)
 
A skeptical site is one that views claims that appear to fly in the face of known science (including psychoacoustics) and technology with a critical eye. It does not accept 'sighted' reports as reliable or even credible without some objective supporting evidence.
The article we are discussing is based on blind tests, so that remark doesn't apply. Indeed, that is what makes this report interesting: it starts off with a very high level of objectivity yet arrives at conclusions that are normally associated not only with sighted testing, but with wishful thinking! :) I will post more on this, but first I have to scan a chart they include; otherwise it is hard to show. It is a shame the article is not online anywhere to quote and comment on :(.

I do suggest people read the articles before commenting. That, I think, is the objective method of evaluating their claims ;) :).
 
So often do audiophiles haul out the auto analogy, and so often does it fail.

...

The better analogy to the audio under discussion would be more like, reporting a perceivable performance increase due to using a different car wax. Or a different tyre inflator plug. Will you notice that more in a Ferrari than a Ford?

The plain fact is, not every variation matters to performance in audio -- or in cars. If your audio system is really such a 'hothouse flower', I would suggest you replace it with something more robust.
I was replying to the "hothouse flowers" comment in the context of the level of performance sought, and expected, of the system; not some seemingly bizarre connections between ability and conditions. No one expects an ordinary family car to operate to high standards, but everyone would be disappointed if the Ferrari was ill-mannered in significant ways. So it is for an audio system: if it is used to supply background music, there are no expectations; but if the volume is wound up to realistic, natural levels and you're listening to how well the qualities of a solo violin or solid drum kicks are reproduced, then the slightest perturbations in the tonal qualities will be very audible, and disturbing if negative in quality.

So the argument falls back, as you point out, to which variations are relevant. And the two sides of the fence are those who claim seemingly nonsensical attributes matter, and those who want measurement "proof" for everything and discount anything not readily amenable to such an approach. The answer, as always, is in the middle, and I sit closer to the first lot than the second, for the reason I highlighted with that key word: the explanations for why something is relevant are often "wacko", but that doesn't invalidate the "truth" that there is an effect; the "problem" is that a full understanding of cause and effect in the matter still doesn't exist...

Frank
 
