Anyone heard about Meridian's new project called MQA

First, as I recall the deviation started with Orb claiming a superior sound from active speakers, my response to that, then Don chiming in with his opinions.
Although, as I kept pointing out, there ARE similarities between the MQA philosophy and how active speakers actually correct phase/"timing" issues, both at the frequency extremes (for speakers, more prominent with bass, especially those with a reflex port) and at the crossover - I think I explained my reasoning reasonably often.
This came about because someone picked up on Meridian using the word "natural" and its problems with the traditional ADC-upsampling/downsampling-DAC chain; the same "natural" can be said to apply in the same way to active speakers, which also correct comparable issues (they do more than this, but that is beyond the scope of this thread) - that is the context and focus I am on about anyway.
I then provided equivalent info on the use of words like natural and organic by Grimm Audio for their digital, corrective active speaker.
So yeah it is still on topic :)

Cheers
Orb
 
in my demo they did play Roberta Flack's 'Killing Me Softly' with and without MQA.......according to what they told us. it sounded better but so what. it was all staged so everything they said and we heard required faith in Bob 'PCM' Stuart.

I have less than zero faith in Bob Stuart's future music format vision.

at best my demo was a weak data point. at worst it was a fraud. who knows? time will tell.

I believe it was a comparison between a low-bit-rate MP3 file and MQA. I am not aware of any MQA demo comparing a high-rez track and an MQA track taken from the same master.
 
I believe it was a comparison between a low-bit-rate MP3 file and MQA. I am not aware of any MQA demo comparing a high-rez track and an MQA track taken from the same master.
I too would be surprised if they had done a real MQA vs. non-MQA demo. At least I wouldn't be, given how subtle the difference might be :).
 
Amir, are you saying there's no such thing as an inferior-engineered MP3 recording? I realize it's down-converted to MP3 format, but somebody still has to engineer that process, right? If 2 different companies were to down-convert the same recording to MP3, does not one outcome stand to possibly be superior or inferior to the other? What am I missing here?
Well, the first thing is terminology. We are in a highly technical topic and I am trying to guide us to use the right terms. MP3 files are created using software; that software is called an encoder. The thing that converts the bit stream back to PCM is called the decoder. There is no engineering involved, as I explained, since that term is reserved for hardware or hardware/software combinations. It is also not a "recording," because no recording is taking place. The right way to ask your question is, "are you saying there is no such thing as an inferior MP3 encoder?"

The answer is yes, there definitely can be, and are, less-than-performant MP3 encoders. Just a bit of background: international standards for audio and video only specify what the compressed bit stream can look like, so that a decoder can be built to play the stream. They do not stipulate how you build an encoder, which is the more complicated part. An audio codec, for example, can choose among timing windows to trade off efficiency against time resolution -- much like what MQA attempts to do. Longer windows give higher frequency resolution, allowing better/more optimal decisions to be made to compress that "frame" of audio (tens of milliseconds usually). But any artifacts created get smeared across the same timing window, causing transients to sound muddy and less distinct. It is the encoder's job to analyze the PCM audio samples and decide which timing window to use. This is actually quite a difficult job, as the trade-off between efficiency and maintaining transient response is non-trivial. Even detecting what is or is not a transient can be very challenging. Think of hi-hats in music with lots of other things playing. The hi-hat is a transient, but finding it inside other complex waveforms may be darn near impossible.
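To make that window decision concrete, here is a toy sketch of the idea. It is mine, not from any real encoder: real encoders compare perceptual entropy or sub-block energies against carefully tuned thresholds, whereas this just looks for a big jump in short-term energy inside a frame. The sub-block count and threshold are invented for illustration.

```python
import math

def pick_window(frame, threshold=4.0):
    """Toy window-switch decision: short windows on suspected transients."""
    # Split the frame into 8 sub-blocks and measure energy in each.
    n = len(frame) // 8
    energies = [sum(x * x for x in frame[i * n:(i + 1) * n]) + 1e-12
                for i in range(8)]
    # A large jump between adjacent sub-blocks suggests a transient.
    ratio = max(b / a for a, b in zip(energies, energies[1:]))
    return "short" if ratio > threshold else "long"

# A steady 440 Hz tone: no transient, so a long window is fine
# (better frequency resolution, more efficient coding).
steady = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(1024)]
print(pick_window(steady))   # long

# The same tone with a click added halfway through: a short window
# confines the quantization error ("pre-echo") around the click.
click = list(steady)
for i in range(512, 520):
    click[i] += 10.0
print(pick_window(click))    # short
```

The hard part Amir describes is exactly where this toy fails: a hi-hat buried under loud, complex material would not produce such an obvious energy jump.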

Back to the story: standards organizations tend to release what they call a "reference encoder." In this case, the Fraunhofer Institute, which championed MP3, released the so-called FHG encoder, which many companies licensed and released. Being a reference encoder, it was super slow at the time, but for research and standardization work that was fine. My group at Microsoft licensed their encoder and shipped it in the Windows Media Player after optimizing it for speed, for example. From a fidelity point of view, the FHG encoder was excellent.

One of the most famous encoders is LAME, which was an open-source/community attempt to build an alternative encoder. Since the FHG encoder was open source, taking it and making something else was not hard. People think of LAME as a higher-quality encoder. I call that more marketing than reality. An audio encoder can run in fixed-bit-rate mode, e.g. 128 kbps, or you can tell it you want a quality from 0 to 10 and let it pick the bit rate (variable-rate mode). At the time, using fixed mode was more popular, even though encoders like ours in the media player readily allowed variable-rate encoding. Back to LAME: many people used LAME in its VBR mode and then compared it to the fixed-rate mode of other encoders, which was an improper comparison. VBR encoding is much easier because you can allow the peaks to go very high when, for example, encoding a transient. So a VBR-encoded file sounds better at the same file size. If you turned off VBR mode, the quality of LAME actually dropped below the FHG encoder at bit rates < 128 kbps, because they relaxed a bit of filtering that the FHG encoder was performing above 17 kHz.
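A toy illustration of why VBR flatters a codec at the same average bit rate. The "difficulty" numbers are invented, and the 3343-bit frame budget is just my rough arithmetic for one ~26 ms MP3 frame at 128 kbps; no real codec allocates this crudely.

```python
# Toy bit allocator (not a real codec): CBR gives every frame the same
# budget; VBR spends the same total but in proportion to how hard each
# frame is to encode ("difficulty" stands in for perceptual entropy).
def allocate_bits(difficulties, avg_bits_per_frame, mode):
    if mode == "cbr":
        # Every frame gets the same budget, transient or not.
        return [avg_bits_per_frame] * len(difficulties)
    # VBR: same total, proportional to difficulty.
    total = avg_bits_per_frame * len(difficulties)
    weight = sum(difficulties)
    return [round(total * d / weight) for d in difficulties]

# Four easy frames and one transient-heavy frame.
frames = [1.0, 1.0, 1.0, 5.0, 1.0]
print(allocate_bits(frames, 3343, "cbr"))  # [3343, 3343, 3343, 3343, 3343]
print(allocate_bits(frames, 3343, "vbr"))  # [1857, 1857, 1857, 9286, 1857]
```

Under VBR the transient frame gets five times the bits of an easy frame for the same file size, which is why comparing someone's VBR output against another encoder's fixed-rate output is an apples-to-oranges test.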

Long way of getting back to your original comment :) -- the difference between encoders starts to shrink when you move up above, say, 192 kbps. By the time you get to 320 kbps, which I used, there would be little to no difference, because the encoder is not having to work hard. Good and bad decisions carry little trade-off, and so the fidelity differences vanish. The codec also reaches its asymptote of fidelity, which for MP3 is always shy of transparency. Better codecs like AAC and WMA Pro (which came out of my team at Microsoft) do much better, but at the extreme they are still shy of transparency for all content and all people.

As for your hearing tests, as I recall these tests were all done on your computer, correct? Not to say the results wouldn't be the same, but that's not the same as performing the tests on your playback system, where it really counts. And to the best of my knowledge your playback system is going to induce potentially far more distortions (much higher noise floor) than your desktop or laptop.
According to high-end audio mantra, the situation is the inverse. I am using whatever 50-cent DAC is in my laptop, and went as low as a $100 in-ear monitor for my testing. Neither of these is considered high-end relative to the home systems people use. The headphones do help, though, in blocking out noise.

The type of distortions that audio compression creates is somewhat orthogonal to what audio equipment produces. Compression artifacts range from sound getting brighter to "pre-echo" which is the effect I talked about with respect to transients. Nothing in a home system undoes that kind of artifact. Yes, an overly live room with strong reflections may make it hard to hear quieter parts of transients but it is not something we would want to count on being real.

That aside, whoever said I was OK with MP3 or its artifacts? If I said anything remotely close to that, it was something like this: the distortions induced into our playback systems raise the noise floor so high that a good percentage of even a well-engineered high-rez recording remains inaudible, so that you and others may well be hearing no more music info content than what is contained in a "well-engineered" (think down-converted) MP3 recording (without any distortions masking any of the MP3's music info).
That is just not true. Distortions in the electronics are well below anything in your MP3 encodings. Let's remember that at 320 kbps we have thrown away 75% of the bits that were there prior to conversion. While we are very clever in picking the 25% that matters, corner cases do not allow such luxuries.
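The "75% thrown away" figure checks out against the raw arithmetic for CD audio (ignoring container overhead):

```python
# Bits per second of uncompressed CD audio vs. a 320 kbps MP3.
cd_rate_kbps = 44100 * 16 * 2 / 1000   # sample rate * bit depth * 2 channels
mp3_rate_kbps = 320
kept = mp3_rate_kbps / cd_rate_kbps

print(f"CD rate: {cd_rate_kbps:.1f} kbps")            # CD rate: 1411.2 kbps
print(f"kept {kept:.1%}, discarded {1 - kept:.1%}")   # kept 22.7%, discarded 77.3%
```

So even the highest standard MP3 rate keeps under a quarter of the original bits.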

Uncompressed audio has the advantage that it allocates a constant number of bits to the content, independent of what is being encoded. Lossy audio codecs, however, allocate wildly different amounts of data in each frame of audio. As such, when they generate distortion, it is highly content dependent. The content modulates the fidelity, which, when audible, can be quite annoying. It is just not common to use MP3 and high-resolution audio in the same sentence. If you have critical enough hearing to value high-resolution audio, then MP3, as a general statement, for sure produces lower fidelity, not higher as you state. No case can be made for it being superior in any manner, since its job is to add distortion, not take it away from the system.
 
The type of distortions that audio compression creates is somewhat orthogonal to what audio equipment produces. Compression artifacts range from sound getting brighter to "pre-echo" which is the effect I talked about with respect to transients. Nothing in a home system undoes that kind of artifact. Yes, an overly live room with strong reflections may make it hard to hear quieter parts of transients but it is not something we would want to count on being real.
......

I must admit that was one of the things I appreciated about Hans: mentioning it as a variable in his critical listening (albeit in an unfamiliar environment) and asking for the music to be turned down (sorry, I cannot remember if that was in the 1st or 2nd video).
Cheers
Orb
 
If you have critical enough hearing to value high resolution audio, then MP3 for sure, as a general statement, produces lower fidelity, not more as you state. No case can be made of it being superior in any manner since its job is to add distortion, not take it away from the system.

Well stated post ...

When attempting to educate, or perhaps convert, younger people to better fidelity, I've demoed the above on my (dedicated-to-music) Galaxy S3, comparing the original WAV file to any subsequent MP3 copy. The differences are often well beyond obvious.

The day I entertain MP3 as a viable hi-fidelity source within my system is the day I'll hand over my audiophile credentials.
 
Amir,

Did you ever do any testing at MS using the Ogg Vorbis codecs? I am especially interested in whether Ogg has been compared using higher sample rates. Ogg will compress, and play back, up to 192 kHz sample rates. I have wondered, given the claims of the benefit of hi-rez, whether good testing would show 192 kHz Ogg distinguishable from, say, uncompressed WAV at 44.1 or 48 kHz. Like MQA, Ogg will save more space at ultrasonic frequencies when there is little content there. Yet it can mostly recreate what is in those ultrasonic ranges upon playback.
 
Amir,

Did you ever do any testing at MS using the Ogg Vorbis codecs?
Oh yes, extensively. As cheats go, this was the biggest one, in that Ogg ran only in variable-bit-rate mode yet everyone would compare it to the fixed bit rate of other codecs. You can find conversations between Marty and me in online forums, with me telling him this and him looking away as if it were just fine.

Especially interested in whether or not Ogg has been compared using higher sample rates. Ogg will compress, and playback up to 192 khz sample rates.
Application of lossy codecs to high sampling rates is a bizarre thing. While we also did that with WMA Pro, the science behind it is pretty sketchy. The way lossy compression works is that a psychoacoustic model is placed over the music in the frequency domain. That model by definition puts no value on content above 20 kHz, since we don't hear those frequencies. Used that way, it would be the same as encoding at 44.1 kHz! What we and others did was place some arbitrary importance on ultrasonic frequencies with some kind of downward-sloping curve. But there was no way to validate that. It is not as if you could play a 30 kHz tone and see what distortions people could hear. The source could not be heard, and given that, the distortion so many dB below it would be inaudible just the same.
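The kind of ad-hoc ultrasonic weighting Amir describes might look something like the sketch below. The 20 kHz knee and the -12 dB/octave slope are my guesses purely for illustration; his point is precisely that no one could validate any particular choice.

```python
import math

def ultrasonic_weight(freq_hz, knee_hz=20_000, slope_db_per_octave=-12):
    """Full perceptual weight up to the knee, then an arbitrary roll-off.

    Below the knee the psychoacoustic model is validated by listening
    tests; above it, the slope is an unverifiable guess.
    """
    if freq_hz <= knee_hz:
        return 1.0
    octaves_above = math.log2(freq_hz / knee_hz)
    # Convert a dB-per-octave slope into an amplitude weight.
    return 10 ** (slope_db_per_octave * octaves_above / 20)

for f in (1_000, 20_000, 40_000, 80_000):
    print(f"{f:>6} Hz -> weight {ultrasonic_weight(f):.3f}")
```

Whatever numbers you pick, there is no listening test that can confirm them, since neither the 30 kHz "signal" nor its distortion products are audible in the first place.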

I remember being at an Audio Engineering Society conference where a guy from FHG presented the same for MP3. Then someone asked him the same question of what models they used for ultrasonics, and he did not even have a talking point to utter!

I have wondered with the claims of the benefit of hirez whether good testing would show 192 khz Ogg distinguishable from say uncompressed wav at 44 or 48 khz. Like MQA Ogg will save more space at ultrasonic frequencies when there is little content there. Yet it can recreate mostly what is in those ultrasonic ranges upon playback.
The difference is that Ogg and other lossy high-resolution codecs treat the entire spectrum uniformly. They would actually mistake the ultrasonic noise for real content with high entropy (i.e. randomness) and, as a result, try to steal bits from the audible band to encode it. Nothing good comes of that, because you are trying to preserve ultrasonic content that we can't hear at the expense of lower fidelity in the in-band spectrum that we do hear. If the bit rates are high enough, the artifacts are likely not audible, but again, the logic of it does not hold together.

What MQA does is preserve the lower frequency spectrum as is, and fold the high-frequency spectrum into the low-order bits of the 24-bit samples, which would otherwise be noise. So it seems like a more clever way to do this, much like HDCD did.
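A very loose sketch of the bit mechanics of such folding. The real MQA and HDCD schemes are far more sophisticated (signalling, noise shaping, a lossless touch-up layer); this only shows how the low-order bits of a 24-bit sample, which would otherwise be noise, can carry a hidden payload without disturbing the audible top bits.

```python
# Toy "folding" of extra data into the bottom bits of a 24-bit PCM sample.
def fold(sample_24bit, payload, payload_bits=8):
    """Replace the bottom payload_bits (noise-level) bits with payload."""
    mask = (1 << payload_bits) - 1
    return (sample_24bit & ~mask) | (payload & mask)

def unfold(sample_24bit, payload_bits=8):
    """Recover the payload from the bottom bits."""
    return sample_24bit & ((1 << payload_bits) - 1)

sample = 0xABCD00   # a 24-bit PCM sample (bottom byte zeroed for clarity)
hidden = 0xA5       # 8 bits of folded high-band data
folded = fold(sample, hidden)

assert unfold(folded) == hidden      # payload survives the round trip
assert folded >> 8 == sample >> 8    # audible top 16 bits untouched
print(hex(folded))                   # 0xabcda5
```

A legacy decoder just plays `folded` as ordinary PCM and hears the bottom byte as low-level noise; an aware decoder extracts the payload and reconstructs the high band.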
 
Oh yes, extensively. As cheats go, this was the biggest one in that Ogg ran only in variable bit rate mode yet everyone would compare it to fixed bit rate of other codecs. You can find conversations between Marty and I in online forums with me telling him this and him looking away as if it is just fine.


Application of lossy codecs to high sampling rates is a bizarre thing. While we also did that with WMA Pro, the science behind it is pretty sketchy. The way lossy compression works is that a psychoacoustic model is placed over the music in frequency domain. That model by definition puts no value on > 20 Khz since we don't hear those frequencies. As used that way, it would be the same as encoding at 44.1 Khz! What we and others did was place some random importance on ultrasonic frequencies with some kind of sloping down graph. But there was no way to validate that. It is not like you could play a 30 Khz tone and see what distortions people could hear. The source could not be heard and given that, the distortion at so many db below it would be inaudible just the same.

I remember being at Audio Engineering Society conference where a guy from FHG presented the same for MP3. Then someone asked him the same question of what models they used for ultrasonics, and he did not even have a talking point to utter!


The difference is that Ogg or other lossy high resolution codecs will treat the entire spectrum uniformly. They would actually confuse the ultrasonic noise for real content with high entropy (i.e. randomness) and as a result, try to steal bits from audible band to encode it. Nothing good comes out of that because you are trying to preserve ultrasonic content that we can't hear, at the expense of lower fidelity of in-band spectrum that we do hear. If the bit rates are high enough, the artifacts are likely not audible but again, the logic of it does not hold together.

What MQA does is to preserve the lower frequency spectrum as is, and fold in the high frequency spectrum into the low order bits of 24 bit samples which would be noise otherwise. So it seems like a more clever way to do this, much like HDCD did.

Thanks for the insightful info.

It mirrors somewhat, or at least explains, what I heard just trying it for myself. At the higher quality settings, it seems to give credence to ultrasonics along an arbitrary slope, as you described. At lower quality settings it cut everything out above 24 kHz. So comparing, say, a compressed 192 kHz file against using SRC to bring it down to a 48 kHz sample rate and running Ogg from that point, the higher sample rate seemed to have more artifacts. It actually could recover some ultrasonic test tones pretty well, but with music, I guess your explanation of stealing bits is why it would have more artifacts at the higher sample rate.
 
Thanks for the MP3 education, Amir. Like most anything, I just assumed there were potential variations in quality with MP3.

That is just not true. Distortions in the electronics are well below anything in your MP3 encodings. Let's remember that at 320 kbps we have thrown away 75% of the bits that were there prior to conversion. While we are very clever in picking the 25% that matters, corner cases do not allow such luxuries. Uncompressed audio has the advantage that it allocates a constant number of bits to the content, independent of what is being encoded. Lossy audio codecs, however, allocate wildly different amounts of data in each frame of audio. As such, when they generate distortion, it is highly content dependent. The content modulates the fidelity, which, when audible, can be quite annoying. It is just not common to use MP3 and high-resolution audio in the same sentence. If you have critical enough hearing to value high-resolution audio, then MP3, as a general statement, for sure produces lower fidelity, not higher as you state. No case can be made for it being superior in any manner, since its job is to add distortion, not take it away from the system.

To the contrary, my comment here should be far more accurate than yours, because the most severe and universal distortions (those that cause much of the music embedded in a given recording to remain inaudible, though read and processed) are generated in the components regardless of format. I said something like, "all that you hear audibly from a high-rez recording, info-wise, may well be not much different from the entire music info content embedded in an MP3 track." Ultimately I can only approximate the number of bytes in a given MP3 recording. On the other hand, I can only offer a SWAG as to how much of the music info remains audible at the speakers, after reading and processing nearly every last bit of a track embedded in, say, a Redbook recording, due to a much-raised noise floor induced by various distortions.

And this is where we really need to get back to fundamentals. For example, if Harley is at all accurate when he speculated something catastrophic is occurring somewhere in the music processing chain (Harley and Meitner think it’s at the recording mic diaphragm) or if Valin’s claim that even our very best playback systems are capturing only 15% of the “magic” of the live performance has any teeth whatsoever, what are you really hearing at your speakers’ output?

Now if I stopped there, you might come back and based on your knowledge of the digital process, clearly show me (on paper) how I and others are wrong. I can't show you on paper that you are wrong, but I can postulate that it is obvious you and others, e.g. Stuart, in spite of everything you think you know simply cannot be taking into account every pertinent aspect of what is happening between reading the music info embedded in the recorded medium and writing the music info (at the speakers).

Case-in-point:

1. Again, the 2 quotes I provided earlier by Harley and Valin, along with Atkinson of Stereophile who in the Sept. 2009 issue made a claim not too dissimilar to Harley’s. Remember that Harley used the word "catastrophic" with regard to the levels of music at least he hears. Valin speculated, "only 15% of the 'magic'" was being captured by the industry's best playback systems of 2008. Catastrophic is a powerful word and I doubt Harley intended it to be taken lightly. I have several friends who think Valin is too optimistic with his "15% of the magic" claim. And of course Valin is not claiming only 15% of the music is audible, but that’s another story. To the best of my knowledge these 3 have yet to rescind or recant these statements and I’m confident the reason they haven’t is because they are fairly accurate.

2. Several years ago I engaged in some shall we say meaningful dialogue with Mark Levinson and John Curl. Both admitted that even though their SOTA-level sensitive measuring instruments were professionally calibrated, the instruments still routinely failed them when they could hear differences the measuring instrument could not discern. They both said it was a common occurrence. Eventually Curl admitted that every one of his designs (and others' too) had at least 1 serious and unexplainable flaw that baffled most / all mfg'ers. I do not recall Levinson admitting to Curl's same serious flaw claim but Curl was speaking for all components and Levinson did not refute it.

3. There are other similar “catastrophic” like claims sporadically throughout the industry, though regrettably not near enough to raise awareness or concern.

The point being that results and findings in the lab, at the test bench, at the calculator, and/or in the science books are one thing, but ultimately it is the end result at the speaker that matters. And anybody who intimately knows live music, and who also intimately knows the limitations of even a SOTA-level playback system, should realize there is a huge (think catastrophic) gulf separating the two.

What does catastrophic translate to as a percentage of all the bits read and processed from a recording that fall below a much-raised noise floor? A 5% or 10% reduction? Doubtful. Catastrophic being a very strong word (def. causing great damage, extremely unfortunate), I'd venture from that definition alone that the amount of music remaining inaudible at the speaker has to be somewhere around 35 - 60% or more.

Based on my own experience of recovering much of this inaudible music info, I'd guess the amount of music remaining inaudible has to be at least around the 50 - 60% mark. Which, from my earlier point, is not all that far from the 75% of the music info you said was discarded during the encoding process to produce an MP3 track.

Not that it matters here, but with something like Harley's catastrophic claim in mind (assuming there's truth to it), maybe now it's a bit easier to understand when at least some claim that the piano is the most difficult instrument to accurately reproduce, or that opera or choral music is the most torturous of all music genres to listen to on a given system, causing breakup and flattening out, or why some systems seem to fall apart when playing dynamic or complex material, or why so many people have tried to go down the multi-channel path because their 2-ch system wasn't even close to presenting a 3-D presentation. A sharp note in the upper registers of a percussive instrument like a piano can be borderline overwhelming even when the note is heard in its entirety. How do you think that sharp note would sound if you stripped away, say, 50% of its less prominent information? Suddenly that note is coming down off the soundstage and making like a laser beam straight for your ear. This catastrophic percentage that remains inaudible actually explains much about all the coveted attributes and characteristics we seek in reproduced music but cannot hear without a stethoscope and our own reference material that we must be so intimately familiar with. This may also explain why the audio community learned long ago to keep the listening volume significantly lower than live performance levels; i.e., nobody's going to wince or get bleeding ears from elevator music.

IMO, this could explain why or how somebody like Stuart can innovate a new technology or format and be confident that he's got himself a winner. That is, so long as he stays in the lab comparing measured readings and algorithms there. This could also explain why Stuart claims with confidence that with MQA we are finally able to hear exactly what the engineers heard in the studio. Because on paper and in the lab he might well be 100% accurate. But by the time his technology is executed in the listening room, for some-to-many, like other formats, there is nothing overly special to hear. Which seems to be what's happening with those who do not appear to have a stake in MQA's success.

But even then, while some of us may consider our components to be sensitive instruments, hence we install line conditioners, special power cables, outlets, fuses, plugs, etc., or place our components on boxes of kitty litter, bicycle inner-tubes, sorbothane, etc., I've yet to come across an engineer or lab rat who thinks the same way about their sensitive measuring instruments, analyzers, encoders, converters, etc. I'd have to venture that Stuart is like the others in this regard and, therefore, is simply working within the confines of his own sandbox.

Again, I quote Tesla, "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”

I would attest that is exactly what is happening here. And I’m not even thinking about Bob Stuart so much as I’m thinking of all the "music lovers" and “audiophiles" who’ve convinced themselves they’ve had to become engineer-types and digital experts just to appreciate music from a well-engineered high-rez recording. And if one can’t understand or chooses not to go down that rabbit hole then they must not know what they are listening to.

Case-in-point. John Atkinson’s recent endorsement of the Vandersteen Model 7A speakers, “made the hairs on the back of my neck stand up…., musically perfect…., across the board.” Taken on its face, from that one endorsement Atkinson revealed to the entire industry he doesn’t know what the frick he’s talking about when it comes to discerning what he hears. But since he’s such a measuring weenie and bit fiddler dealing with some of the technical aspects of music production and playback, nobody seems to care that he may well have tin ears, simply because in other aspects he comes across as intelligent.

BTW Tbone, you can turn in your “audiophile” credentials now.
 
>>BTW Tbone, you can turn in your “audiophile” credentials now.<<

K, I'll do that right away John ... I'm certain everyone else here will instantly defer to your superior system/knowledge.
 
IMO, this could explain why or how somebody like Stuart can innovate a new technology or format and be confident that he's got himself a winner. That is, so long as he stays in the lab comparing measured readings and algorithms there. This could also explain why Stuart claims with confidence, that with MQA we are finally able to hear exactly what the engineers heard in the studio. Because on paper and in the lab he might well be 100% accurate.
According to Bob, this technology has been in development for a few years and has fully incorporated field evaluations. This is from the AES Paper:

"This approach to re-coding results in superior sound and significantly lower data-rate when compared to unstructured encoding and playback, and has been enthusiastically supported in listening trials with a number of recording and mastering engineers, artists and producers."


But by the time his technology is executed in the listening room, for some-to-many, like other formats, there is nothing overly special to hear. Which seems to be what’s happening from those who do not appear to have a stake in MQA’s success.
Audiophiles have beliefs that range from one extreme to another. As such, they do not present any reliable data point to argue from. So XYZ person doesn't like MQA. And another does. Neither is a data point for me. The data point is a controlled listening test presented, or one that I can conduct, with MQA being turned on and off. That is the only thing that matters right now in arguing the point with me.

If you have not done this evaluation, nor analyzed the research, then I am not sure from what basis you are arguing your point here.

But even then, while some of us may consider our components as sensitive instruments, hence we install line conditioners, special power cables, outlets, fuses, plugs, etc. etc, or place our components on boxes of kitty litter, bicycle inner-tubes, sorbathane, etc., I've yet to come across an engineer or lab rat who thinks their the same way about their sensitive measuring instruments, analyzers, encoders, converters, etc. I'd have to venture that Stuart like the others in this regard and therefore, is simply working within the confines of his own sand box.
You continue to make remarks about someone you don't know. And speculating as you are doing here. Let's stick to what we know. If you know he does some of the above and something is wrong with it, then please explain it. Let's not make political statements about everyone who drives a red car.

Bob is unlike 99.99% of the high-end audio personalities. He has credentials that hold up against the best of the best. He has had to earn that on an objective basis while arguing his subjective point of view. This is extremely hard to do, and he has done it. You can't dismiss him just by saying this and that.

Again, I quote Tesla, "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”
Sorry but the quote has no relation to the reality of this conversation :). Bob is not a theoretician. He runs a successful high-end audio company that has been in business for 30+ years. And he has produced products with MQA in them already. He has investigated the theory, the psychoacoustics behind it, and its implications in real products. The improvements at the end of the day may be too fleeting to matter, but it is anything but what you quote from Tesla.

Case-in-point. John Atkinson’s recent endorsement of the Vandersteen Model 7A speakers....
Sorry but can we stay on topic of MQA and Stuart? If you have a rant against reviewers of products in general, please make a thread on that topic.
 
Thanks for the MP3 education, Amir. Like most anything, I just assumed there was potential variations of quality with MP3.



To the contrary, my comment here should be far more accurate than yours, because the most severe and universal distortions, the ones that cause much of the music embedded in a given recording to remain inaudible (though read and processed), are generated in the components regardless of format. I said something like, "all that you hear audibly from a high-rez recording, info-wise, may well be not much different from the entire music info content embedded in an MP3 track." Ultimately I can only approximate the number of bytes in a given MP3 recording. On the other hand, I can only offer a SWAG as to how much of the music info remains audible at the speakers, after reading and processing nearly every last bit of a track embedded in, say, a Redbook recording, due to a much-raised noise floor induced by various distortions.

And this is where we really need to get back to fundamentals. For example, if Harley is at all accurate when he speculated something catastrophic is occurring somewhere in the music processing chain (Harley and Meitner think it’s at the recording mic diaphragm) or if Valin’s claim that even our very best playback systems are capturing only 15% of the “magic” of the live performance has any teeth whatsoever, what are you really hearing at your speakers’ output?

Now if I stopped there, you might come back and based on your knowledge of the digital process, clearly show me (on paper) how I and others are wrong. I can't show you on paper that you are wrong, but I can postulate that it is obvious you and others, e.g. Stuart, in spite of everything you think you know simply cannot be taking into account every pertinent aspect of what is happening between reading the music info embedded in the recorded medium and writing the music info (at the speakers).

Case-in-point:

1. Again, the 2 quotes I provided earlier by Harley and Valin, along with Atkinson of Stereophile who in the Sept. 2009 issue made a claim not too dissimilar to Harley’s. Remember that Harley used the word "catastrophic" with regard to the levels of music at least he hears. Valin speculated, "only 15% of the 'magic'" was being captured by the industry's best playback systems of 2008. Catastrophic is a powerful word and I doubt Harley intended it to be taken lightly. I have several friends who think Valin is too optimistic with his "15% of the magic" claim. And of course Valin is not claiming only 15% of the music is audible, but that’s another story. To the best of my knowledge these 3 have yet to rescind or recant these statements and I’m confident the reason they haven’t is because they are fairly accurate.

2. Several years ago I engaged in some shall we say meaningful dialogue with Mark Levinson and John Curl. Both admitted that even though their SOTA-level sensitive measuring instruments were professionally calibrated, the instruments still routinely failed them when they could hear differences the measuring instrument could not discern. They both said it was a common occurrence. Eventually Curl admitted that every one of his designs (and others' too) had at least 1 serious and unexplainable flaw that baffled most / all mfg'ers. I do not recall Levinson admitting to Curl's same serious flaw claim but Curl was speaking for all components and Levinson did not refute it.

3. There are other similar “catastrophic” like claims sporadically throughout the industry, though regrettably not near enough to raise awareness or concern.

The point being that results and findings in the lab, at the test bench, at the calculator, and/or in the science books are one thing, but ultimately it is the end result at the speaker that matters. And anybody who intimately knows live music, and who also intimately knows the limitations of even a SOTA-level playback system, should realize there is a huge (think catastrophic) gulf separating the two.

What does catastrophic translate to in percentage of all the bits read and processed from a recording that fall below a much-raised noise floor? A 5%, 10% reduction? Doubtful. Catastrophic being a very strong word (def. causing great damage, extremely unfortunate), I'd venture from that definition alone that somewhere around 35 - 60% or more of the music remains inaudible at the speaker.

Based on my own experience of recovering much of this inaudible music info, I'd guess the amount of music remaining inaudible has to be at least around the 50 - 60% mark. Which, from my earlier point, is not all that far from the 75% of the music info you said was discarded during the encoding process to produce an MP3 track.
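For what the "75% discarded" figure itself refers to, the arithmetic is simply a comparison of raw data rates: Red Book CD PCM versus typical MP3 bitrates. Note this says nothing by itself about how much is *audibly* lost, which is the whole dispute in this thread; the sketch below is illustrative arithmetic only.

```python
# Red Book CD audio: 2 channels x 44,100 samples/s x 16 bits = 1411.2 kbps.
# Comparing that raw rate against common MP3 bitrates shows where the
# "~75% of the data discarded" figure comes from (320 kbps MP3 keeps
# roughly a quarter of the CD data rate). Data-rate reduction is not
# the same thing as audible loss.
cd_kbps = 2 * 44_100 * 16 / 1000  # = 1411.2 kbps

for mp3_kbps in (128, 192, 320):
    reduction = 1 - mp3_kbps / cd_kbps
    print(f"{mp3_kbps} kbps MP3 discards {reduction:.0%} of the CD data rate")
    # 320 kbps -> "discards 77%", 192 -> "discards 86%", 128 -> "discards 91%"
```

Perceptual coders discard that data selectively, using masking models, which is precisely why bytes removed cannot be equated with music removed.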

Not that it matters here, but with something like Harley’s catastrophic claim in mind (assuming there’s truth to it), maybe now it’s a bit easier to understand why at least some claim that the piano is the most difficult instrument to accurately reproduce, or that opera or choral music is the most torturous of all music genres to listen to on a given system, causing breakup and flattening out, or why some systems seem to fall apart when playing dynamic or complex material, or why so many people have tried to go down the multi-channel path because their 2-ch system wasn’t even close to presenting a 3-D presentation. A sharp note in the upper registers of a percussive instrument like a piano can be borderline overwhelming even when the note is heard in its entirety. How do you think that sharp note would sound if you stripped away, say, 50% of its less prominent information, so that suddenly the note comes down off the soundstage making like a laser beam straight for your ear? This catastrophic percentage that remains inaudible actually explains much of the coveted attributes and characteristics we seek in reproduced music but cannot hear without a stethoscope and our own reference material, with which we must be so intimately familiar. This may also explain why the audio community learned long ago to keep the listening volume significantly lower than live performance levels; i.e., nobody's going to wince or get bleeding ears from elevator music.

IMO, this could explain why or how somebody like Stuart can innovate a new technology or format and be confident that he's got himself a winner. That is, so long as he stays in the lab comparing measured readings and algorithms there. This could also explain why Stuart claims with confidence, that with MQA we are finally able to hear exactly what the engineers heard in the studio. Because on paper and in the lab he might well be 100% accurate. But by the time his technology is executed in the listening room, for some-to-many, like other formats, there is nothing overly special to hear. Which seems to be what’s happening from those who do not appear to have a stake in MQA’s success.

But even then, while some of us may consider our components as sensitive instruments, hence we install line conditioners, special power cables, outlets, fuses, plugs, etc., or place our components on boxes of kitty litter, bicycle inner-tubes, sorbothane, etc., I've yet to come across an engineer or lab rat who thinks the same way about their sensitive measuring instruments, analyzers, encoders, converters, etc. I'd have to venture that Stuart is like the others in this regard and, therefore, is simply working within the confines of his own sandbox.

Again, I quote Tesla, "Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”

I would attest that is exactly what is happening here. And I’m not even thinking about Bob Stuart so much as I’m thinking of all the “music lovers” and “audiophiles” who’ve convinced themselves they’ve had to become engineer-types and digital experts just to appreciate music from a well-engineered high-rez recording. And if one can’t understand or chooses not to go down that rabbit hole, then they must not know what they are listening to.

Case-in-point. John Atkinson’s recent endorsement of the Vandersteen Model 7A speakers, “made the hairs on the back of my neck stand up…., musically perfect…., across the board.” Taken on its face, from that one endorsement Atkinson revealed to the entire industry he doesn’t know what the frick he’s talking about when it comes to discerning what he hears. But since he’s such a measuring weenie and bit fiddler dealing with some of the technical aspects of music production and playback, nobody seems to care that he may well have tin ears, simply because in other aspects he comes across as intelligent.

BTW Tbone, you can turn in your “audiophile” credentials now.

Guys with the best measuring instruments hear things not shown in the measurements and conclude most of the issue is at the microphones? HA! Nice fantasy that. There are easily measurable problems at the speaker. So people positing what you describe immediately lose credibility. The issues with speakers are at least 10 times what they are with microphones, and yes it is easily measurable. I'll take those measurements over hair on the back of the neck.

I would say within the very real constraints of stereo recording and playback, speakers are responsible for 90% of the degradation, microphones 9% and everything else 1% (ignoring that poor mixing/mastering or excessive processing can be a magnitude worse than everything else). So if you want to make real rather than illusory progress, work on the speakers and room. You will still never get more than a ghostly representation of the real event with only stereo to work with. Note I didn't say what percent of the real performance you get, only that nearly all of the problems when using good recordings are at the speaker/room end of things.

Of course an even bigger issue is that more than 90% of the recordings are so messed with that you wouldn't have a chance even if everything else were 100% perfect. Even the great majority of well-known names in 'minimalist' recordings are far from being two good mics feeding two good channels. If you are messing with outriggers for artificial space etc., it may present a more pleasing than usual facsimile, but it has been compromised heavily from having a chance to present something at the theoretical limits of two-channel stereo. The issue at the other end of the chain isn't microphones so much as recording methodology.
 
Adding fuel to the fire? Answers another thread too.

From Bob Stuart today entitled "Who Cares?"

From time to time we come across audio 'experts' who 'know' that AAC/MP3 is 'good enough for the masses', using arguments like: 'people do most of their listening on the train'; 'these days music is just the background'.

These same 'experts' get very 'edgy' if you suggest CD wasn't perfect. The barrage begins: M&M, ABX, 'snake oil', 'marketing', 'Nyquist knows best', 'vested interest',-followed by appeals to outdated psychoacoustics or retreats into 'it doesn't matter in the real world'.

It's sad, but we shouldn't mind if someone doesn't care whether a recording is well made, delivered and played back. But I mind a lot if they try to dismiss such efforts as 'self-serving', 'misleading', 'delusional', or 'exploitative'. That is the true arrogance of the ignorant. Worse, if by doing so the landscape tilts and a generation of innocent miss out on the opportunity for better sound.

We need the recording industry to be sustainable. The companies that developed lossy codecs spent real money doing so; the telecoms poured huge resources into reducing bitrate to the lowest tolerable sound intelligibility. Is it somehow nobler to lower sound quality than to seek higher quality?

I've devoted a lot of time to understanding and making sound better, to enable better recordings that remove veils between the performer and the listener, all in the interest of deeper communication, enjoyment and preservation of the music.

The enduring insight is that our hearing is exquisite; it's robust (we can always recognize the tune in bizarre circumstances) but it also 'knows the real thing' right away.

Very few make the effort to get it as good as it can be. But it still drives my research. The good news is that so many experienced listeners and musicians agree.

Anyone else?
 
Adding fuel to the fire? Answers another thread too.

From Bob Stuart today entitled "Who Cares?"
It is very eloquently put and describes my endeavors and ideals in this field just the same.
 
Based on my own experience of recovering much of this inaudible music info, I'd guess the amount of music remaining inaudible has to be at least around the 50 - 60% mark.

My belief is that anyone who claims to numerically quantify the "amount of magic lost" due to a record - playback process is feeding us BS. It may or may not be marketing BS, but BS it certainly is. The use of the term "catastrophic" is even worse. For me, a "catastrophic loss" would be a loss of a loved one, not the loss of one's entire audio hobby, let alone a diminution of the realism or musical enjoyment of some recordings.
 
Sorry but can we stay on topic of MQA and Stuart? If you have a rant against reviewers of products in general, please make a thread on that topic.

Tomelex already started that thread.
 