Although, as I kept pointing out, there ARE similarities between the MQA philosophy and how active speakers actually correct phase/timing issues, both at the frequency extremes (for speakers, more prominently in the bass, and especially in ported designs) and at the crossover. I think I explained my reasoning often enough. First, as I recall, the deviation started with Orb claiming superior sound from active speakers, then my response to that, then Don chiming in with his opinions.
In my demo they did play Roberta Flack's 'Killing Me Softly' with and without MQA... according to what they told us. It sounded better, but so what? It was all staged, so everything they said and we heard required faith in Bob 'PCM' Stuart.
I have less than zero faith in Bob Stuart's future music format vision.
At best my demo was a weak data point. At worst it was a fraud. Who knows? Time will tell.
I too would be surprised if they had done a real MQA-vs-not demo. At the least I wouldn't, given how subtle the difference might be. I believe it was a comparison between a low-bit-rate MP3 file and MQA. I am not aware of any MQA demo comparing a hi-rez track and an MQA track taken from the same master.
Amir, are you saying there's no such thing as an inferior-engineered MP3 recording? I realize it's down-converted to MP3 format, but somebody still has to engineer that process, right? If 2 different companies were to down-convert the same recording to MP3, does not one outcome stand to possibly be superior or inferior to the other? What am I missing here?

Well, the first thing is terminology. We are in a highly technical topic and I am trying to guide us to use the right terms. MP3 files are created using software; that software is called an encoder. The thing that converts the bit stream back to PCM is called the decoder. There is no "engineering" involved, as I explained, since that term is reserved for hardware or a hardware/software combination. Nor is it a "recording," because no recording is taking place. The right way to ask your question is, "Are you saying there is no such thing as an inferior MP3 encoder?"
As for your hearing tests, as I recall these tests were all done on your computer, correct? Not to say the results wouldn't be the same, but that's not the same as performing the tests on your playback system, where it really counts. And to the best of my knowledge your playback system is going to induce potentially far more distortions (a much higher noise floor) than your desktop or laptop.

According to high-end audio mantra, the situation is the inverse. I am using whatever 50-cent DAC is on my laptop, and went as low as a $100 in-ear monitor for my testing. Neither of these is considered high-end relative to the home systems people use. The headphones do help, though, in blocking out noise.
If I said anything remotely close to that, it was something like: the distortions induced into our playback systems raise the noise floor to such a high level that a good percentage of even a well-engineered high-rez recording remains inaudible, so that you and others may well be hearing no more music info content than what is contained in a "well-engineered" (think down-converted) MP3 recording (without any distortions masking any of the MP3's music info).

That is just not true. Distortions in the electronics are well below anything in your MP3 encodings. Let's remember that at 320 kbps we have thrown away 75% of the bits that were there prior to conversion. While we are very clever in picking the 25% that matters, corner cases do not allow such luxuries. That aside, whoever said I was OK with MP3 or its artifacts?
The types of distortion that audio compression creates are somewhat orthogonal to what audio equipment produces. Compression artifacts range from the sound getting brighter to "pre-echo," which is the effect I talked about with respect to transients. Nothing in a home system undoes that kind of artifact. Yes, an overly live room with strong reflections may make it hard to hear the quieter parts of transients, but that is not something we would want to count on being real.
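The pre-echo effect mentioned above can be demonstrated with a toy transform coder (a sketch only, not any real codec): coarsely quantizing the frequency-domain coefficients of a block that contains a sharp transient smears the quantization error across the whole block, including the silence *before* the attack.

```python
import math

N = 32  # block size

def dct2(x):
    # naive DCT-II over one block
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct2(X):
    # matching inverse (DCT-III with normalization)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                       for k in range(1, N))
            for n in range(N)]

# a block that is silent, then a single sharp transient
block = [0.0] * N
block[24] = 1.0

coeffs = dct2(block)
q = 0.5  # coarse quantizer step, standing in for a bit-starved encoder
quantized = [round(c / q) * q for c in coeffs]
decoded = idct2(quantized)

# error appearing in the silence BEFORE the transient = "pre-echo"
pre_echo = max(abs(s) for s in decoded[:24])
print(f"pre-echo level before the attack: {pre_echo:.3f}")
```

Without quantization the inverse transform reconstructs the block exactly; with it, audible garbage precedes the attack, which no downstream playback gear can undo.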
If you have critical enough hearing to value high resolution audio, then MP3 for sure, as a general statement, produces lower fidelity, not more as you state. No case can be made of it being superior in any manner since its job is to add distortion, not take it away from the system.
Amir, did you ever do any testing at MS using the Ogg Vorbis codecs? Especially interested in whether or not Ogg has been compared using higher sample rates; Ogg will compress and play back sample rates up to 192 kHz. I have wondered, given the claims of the benefit of hi-rez, whether good testing would show 192 kHz Ogg distinguishable from, say, uncompressed WAV at 44 or 48 kHz. Like MQA, Ogg will save more space at ultrasonic frequencies when there is little content there, yet it can recreate most of what is in those ultrasonic ranges upon playback.
Oh yes, extensively. As cheats go, this was the biggest one, in that Ogg ran only in variable-bit-rate mode yet everyone would compare it to the fixed bit rate of other codecs. You can find conversations between Marty and me in online forums, with me telling him this and him looking away as if it were just fine.
Application of lossy codecs to high sampling rates is a bizarre thing. While we also did that with WMA Pro, the science behind it is pretty sketchy. The way lossy compression works is that a psychoacoustic model is placed over the music in the frequency domain. That model, by definition, puts no value on content above 20 kHz, since we don't hear those frequencies. Used that way, the result would be the same as encoding at 44.1 kHz! What we and others did was place some arbitrary importance on ultrasonic frequencies with a downward-sloping weighting. But there was no way to validate that. It is not as if you could play a 30 kHz tone and see what distortions people could hear: the source itself cannot be heard, and given that, distortion so many dB below it would be just as inaudible.
I remember being at an Audio Engineering Society conference where a presenter from FhG (Fraunhofer) presented the same approach for MP3. Then someone asked him the same question of what model they used for ultrasonics, and he did not even have a talking point to utter!
The difference is that Ogg and other lossy high-resolution codecs treat the entire spectrum uniformly. They actually confuse ultrasonic noise for real content with high entropy (i.e., randomness) and, as a result, try to steal bits from the audible band to encode it. Nothing good comes of that: you are trying to preserve ultrasonic content we can't hear at the expense of lower fidelity in the in-band spectrum we do hear. If the bit rates are high enough, the artifacts are likely not audible, but again, the logic of it does not hold together.
What MQA does is preserve the lower frequency spectrum as-is, and fold the high-frequency spectrum into the low-order bits of the 24-bit samples, which would otherwise be noise. So it seems a more clever way to do this, much like HDCD did.
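A heavily simplified sketch of that folding idea (illustrative only; MQA's actual scheme is proprietary and far more sophisticated): the upper bits of each 24-bit sample carry the baseband untouched, while the bottom 8 bits, which would otherwise be dither/noise, carry a payload for the ultrasonic band that a decoder can peel back out.

```python
def fold(base24, hf_payload):
    """Pack an 8-bit high-frequency payload into the low-order
    bits of a 24-bit sample (toy version, not MQA's real encoding)."""
    return (base24 & 0xFFFF00) | (hf_payload & 0xFF)

def unfold(sample24):
    """Split a folded sample back into baseband + HF payload."""
    return sample24 & 0xFFFF00, sample24 & 0xFF

baseband = [0x123456, 0x7FFF00, 0x000180]  # example 24-bit samples
hf = [0xA5, 0x3C, 0x01]                    # example HF payload bytes

folded = [fold(b, h) for b, h in zip(baseband, hf)]
decoded = [unfold(s) for s in folded]
# a legacy (non-decoding) DAC just plays `folded` as-is; the hidden
# payload sits around the -96 dB level, indistinguishable from noise
```

The point of the trick is backward compatibility: the undecoded stream is still valid PCM, degraded only at the noise-floor level, exactly the property HDCD exploited with its LSB flag.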
That is just not true. Distortions in the electronics are well below anything in your MP3 encodings. Let's remember that at 320 kbps we have thrown away 75% of the bits that were there prior to conversion. While we are very clever in picking the 25% that matters, corner cases do not allow such luxuries. Uncompressed audio has the advantage that it allocates a constant number of bits to the content, independent of what is being encoded. Lossy audio codecs, however, allocate wildly different amounts of data to each frame of audio. As such, when they generate distortion it is highly content-dependent: the content is modulating the fidelity, which, when audible, can be quite annoying. It is just not common to use MP3 and high-resolution audio in the same sentence. If you have critical enough hearing to value high-resolution audio, then MP3 for sure, as a general statement, produces lower fidelity, not more as you state. No case can be made for it being superior in any manner, since its job is to add distortion, not take it away from the system.
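The "thrown away 75%" figure checks out as a back-of-the-envelope calculation against the CD source rate:

```python
# CD audio: 44.1 kHz sample rate * 16 bits * 2 channels
cd_rate = 44_100 * 16 * 2          # 1,411,200 bits/s
mp3_rate = 320_000                 # 320 kbps, the highest standard MP3 rate

kept = mp3_rate / cd_rate
print(f"kept: {kept:.1%}, discarded: {1 - kept:.1%}")
# roughly 23% kept / 77% discarded -- on the order of the 75% quoted
```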
IMO, this could explain why or how somebody like Stuart can innovate a new technology or format and be confident that he's got himself a winner. That is, so long as he stays in the lab comparing measured readings and algorithms there. This could also explain why Stuart claims with confidence that with MQA we are finally able to hear exactly what the engineers heard in the studio. Because on paper and in the lab he might well be 100% accurate.

According to Bob, this technology has been in development for a few years and has fully incorporated field evaluations. This is from the AES paper:
But by the time his technology is executed in the listening room, for some-to-many, like other formats, there is nothing overly special to hear. Which seems to be what's happening with those who do not appear to have a stake in MQA's success.

Audiophiles have beliefs that range from one extreme to another. As such, they do not present any reliable data point to argue from. So XYZ person doesn't like MQA, and another does; neither is a data point for me. The data point is a controlled listening test presented to me, or one that I can conduct, with MQA being turned on and off. That is the only thing that matters right now in arguing the point with me.
But even then, while some of us may consider our components sensitive instruments (hence we install line conditioners, special power cables, outlets, fuses, plugs, etc., or place our components on boxes of kitty litter, bicycle inner tubes, sorbothane, etc.), I've yet to come across an engineer or lab rat who thinks the same way about their sensitive measuring instruments, analyzers, encoders, converters, etc. I'd have to venture that Stuart is like the others in this regard and, therefore, is simply working within the confines of his own sandbox.

You continue to make remarks about someone you don't know, and to speculate, as you are doing here. Let's stick to what we know. If you know he does some of the above and something is wrong with it, then please explain it. Let's not make political statements about everyone who drives a red car.
Again, I quote Tesla, "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality."

Sorry, but the quote has no relation to the reality of this conversation. Bob is not a theoretician. He runs a successful high-end audio company that has been in business for 30+ years, and he has produced products with MQA in them already. He has investigated the theory, the psychoacoustics behind it, and the implications in real products. The improvements at the end of the day may be too fleeting to matter, but it is anything but what you quote from Tesla.
Case-in-point: John Atkinson's recent endorsement of the Vandersteen Model 7A speakers....

Sorry, but can we stay on the topic of MQA and Stuart? If you have a rant against product reviewers in general, please make a thread on that topic.
Thanks for the MP3 education, Amir. Like with most anything, I just assumed there were potential variations of quality with MP3.
To the contrary, my comment here should be far more accurate than yours, because the most severe and universal distortions (those that cause much of the music embedded in a given recording to remain inaudible, though read and processed) are generated in the components regardless of format. I said something like, "all that you hear audibly from a high-rez recording, info-wise, may well be not much different from the entire music info content embedded in an MP3 track." Ultimately I can only approximate the number of bytes in a given MP3 recording. On the other hand, I can only offer a SWAG as to how much of the music info remains audible at the speakers, after reading and processing nearly every last bit of a track embedded in, say, a Redbook recording, due to a much-raised noise floor induced by various distortions.
And this is where we really need to get back to fundamentals. For example, if Harley is at all accurate when he speculated something catastrophic is occurring somewhere in the music processing chain (Harley and Meitner think it’s at the recording mic diaphragm) or if Valin’s claim that even our very best playback systems are capturing only 15% of the “magic” of the live performance has any teeth whatsoever, what are you really hearing at your speakers’ output?
Now if I stopped there, you might come back and based on your knowledge of the digital process, clearly show me (on paper) how I and others are wrong. I can't show you on paper that you are wrong, but I can postulate that it is obvious you and others, e.g. Stuart, in spite of everything you think you know simply cannot be taking into account every pertinent aspect of what is happening between reading the music info embedded in the recorded medium and writing the music info (at the speakers).
Case-in-point:
1. Again, the 2 quotes I provided earlier by Harley and Valin, along with Atkinson of Stereophile who in the Sept. 2009 issue made a claim not too dissimilar to Harley’s. Remember that Harley used the word "catastrophic" with regard to the levels of music at least he hears. Valin speculated, "only 15% of the 'magic'" was being captured by the industry's best playback systems of 2008. Catastrophic is a powerful word and I doubt Harley intended it to be taken lightly. I have several friends who think Valin is too optimistic with his "15% of the magic" claim. And of course Valin is not claiming only 15% of the music is audible, but that’s another story. To the best of my knowledge these 3 have yet to rescind or recant these statements and I’m confident the reason they haven’t is because they are fairly accurate.
2. Several years ago I engaged in some shall we say meaningful dialogue with Mark Levinson and John Curl. Both admitted that even though their SOTA-level sensitive measuring instruments were professionally calibrated, the instruments still routinely failed them when they could hear differences the measuring instrument could not discern. They both said it was a common occurrence. Eventually Curl admitted that every one of his designs (and others' too) had at least 1 serious and unexplainable flaw that baffled most / all mfg'ers. I do not recall Levinson admitting to Curl's same serious flaw claim but Curl was speaking for all components and Levinson did not refute it.
3. There are other similar “catastrophic” like claims sporadically throughout the industry, though regrettably not near enough to raise awareness or concern.
The point being that results and findings in the lab, at the test bench, at the calculator, and/or in the science books are one thing, but ultimately it is the end result at the speaker that matters. And what should be obvious to anybody who intimately knows live music who also intimately knows the limitations of even a SOTA-level playback system, should also realize there is a huge (think catastrophic) gulf that separates the two.
What does catastrophic translate to as a percentage of all the bits read and processed from a recording that fall below a much-raised noise floor? A 5% or 10% reduction? Doubtful. Catastrophic being a very strong word (def.: causing great damage; extremely unfortunate), I'd venture from that definition alone that somewhere around 35-60% or more of the music remains inaudible at the speaker.
Based on my own experience of recovering much of this inaudible music info, I'd guess the amount of music remaining inaudible has to be at least around the 50 - 60% mark. Which, from my earlier point, is not all that far from the 75% of the music info you said was discarded during the encoding process to produce an MP3 track.
Not that it matters here, but with something like Harley's catastrophic claim in mind (assuming there's truth to it), maybe now it's a bit easier to understand when at least some claim that the piano is the most difficult instrument to accurately reproduce, or that opera or choral music is the most torturous of all music genres to listen to on a given system, causing breakup and flattening out, or why some systems seem to fall apart when playing dynamic or complex material, or why so many people have tried to go down the multi-channel path because their 2-ch system wasn't even close to presenting a 3-D presentation. A sharp note in the upper registers of a percussive instrument like a piano can be borderline overwhelming even if the note is heard in its entirety. How do you think that sharp note would sound if you stripped away, say, 50% of its less prominent information, and suddenly that note is coming down off the soundstage and making like a laser beam straight for your ear? This catastrophic percentage that remains inaudible actually explains much about all the coveted attributes and characteristics we seek in reproduced music but cannot hear without a stethoscope and our own reference material, which we must be intimately familiar with. This may also explain why the audio community learned long ago to keep the listening volume significantly lower than live performance levels; i.e., nobody's going to wince or get bleeding ears from elevator music.
IMO, this could explain why or how somebody like Stuart can innovate a new technology or format and be confident that he's got himself a winner. That is, so long as he stays in the lab comparing measured readings and algorithms there. This could also explain why Stuart claims with confidence, that with MQA we are finally able to hear exactly what the engineers heard in the studio. Because on paper and in the lab he might well be 100% accurate. But by the time his technology is executed in the listening room, for some-to-many, like other formats, there is nothing overly special to hear. Which seems to be what’s happening from those who do not appear to have a stake in MQA’s success.
But even then, while some of us may consider our components sensitive instruments (hence we install line conditioners, special power cables, outlets, fuses, plugs, etc., or place our components on boxes of kitty litter, bicycle inner tubes, sorbothane, etc.), I've yet to come across an engineer or lab rat who thinks the same way about their sensitive measuring instruments, analyzers, encoders, converters, etc. I'd have to venture that Stuart is like the others in this regard and, therefore, is simply working within the confines of his own sandbox.
Again, I quote Tesla, "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”
I would attest that is exactly what is happening here. And I’m not even thinking about Bob Stuart so much as I’m thinking of all the "music lovers" and “audiophiles" who’ve convinced themselves they’ve had to become engineer-types and digital experts just to appreciate music from a well-engineered high-rez recording. And if one can’t understand or chooses not to go down that rabbit hole then they must not know what they are listening to.
Case-in-point. John Atkinson’s recent endorsement of the Vandersteen Model 7A speakers, “made the hairs on the back of my neck stand up…., musically perfect…., across the board.” Taken on its face, from that one endorsement Atkinson revealed to the entire industry he doesn’t know what the frick he’s talking about when it comes to discerning what he hears. But since he’s such a measuring weenie and bit fiddler dealing with some of the technical aspects of music production and playback, nobody seems to care that he may well have tin ears, simply because in other aspects he comes across as intelligent.
BTW Tbone, you can turn in your “audiophile” credentials now.
Adding fuel to the fire? Answers another thread too.

It is very eloquently put, and describes my endeavors and ideals in this field just the same.
From Bob Stuart today entitled "Who Cares?"
From time to time we come across audio 'experts' who 'know' that AAC/MP3 is 'good enough for the masses', using arguments like: 'people do most of their listening on the train'; 'these days music is just the background'.
These same 'experts' get very 'edgy' if you suggest CD wasn't perfect. The barrage begins: M&M, ABX, 'snake oil', 'marketing', 'Nyquist knows best', 'vested interest', followed by appeals to outdated psychoacoustics or retreats into 'it doesn't matter in the real world'.
It's sad, but we shouldn't mind if someone doesn't care whether a recording is well made, well delivered, and well played back. But I mind a lot if they try to dismiss such efforts as 'self-serving', 'misleading', 'delusional', or 'exploitative'. That is the true arrogance of the ignorant. Worse, if by doing so the landscape tilts, a generation of innocents misses out on the opportunity for better sound.
We need the recording industry to be sustainable. The companies that developed lossy codecs spent real money doing so; the telecoms poured huge resources into reducing bit rate to the lowest tolerable sound intelligibility. Is it somehow nobler to lower sound quality than to seek higher quality?
I've devoted a lot of time to understanding sound and making it better, to enable better recordings that remove veils between the performer and the listener, all in the interest of deeper communication, enjoyment, and preservation of the music.
The enduring insight is that our hearing is exquisite; it's robust (we can always recognize the tune in bizarre circumstances) but it also 'knows the real thing' right away.
Very few make the effort to get it as good as it can be. But it still drives my research. The good news is that so many experienced listeners and musicians agree.
Anyone else?