Introspection and hyperbole control

:D:D:D

If I had some rainbows and unicorns I'd throw them your way too.....

Thanks, but they wouldn't have been much help once you implied that I am dishonest and have some kind of hidden agenda. As I've told you before, I don't have an agenda. I have nothing to sell, and I don't really care that some disagree with me. I do have a point of view, however, and I express it, just like the rest of the active members of WBF. The problem, I think, is that my POV often aligns with the available data on the subject and I'm not afraid to point that out. Seems to annoy some people. But we're talking about people now, not audio, so I'll leave it at that.

My current sig is much more indicative of my point of view, thanks.

Tim
 
(...)

My current sig is much more indicative of my point of view, thanks.

Tim

Tim,

IMHO your signature is a permanent, ambiguous and unfriendly accusation against high-end people. Just because you do not want to consider the facts as reported in good faith and analyze them does not imply they do not exist.
 
Thanks, but they wouldn't have been much help once you implied that I am dishonest and have some kind of hidden agenda. As I've told you before, I don't have an agenda. I have nothing to sell, and I don't really care that some disagree with me. I do have a point of view, however, and I express it, just like the rest of the active members of WBF. The problem, I think, is that my POV often aligns with the available data on the subject and I'm not afraid to point that out. Seems to annoy some people. But we're talking about people now, not audio, so I'll leave it at that.

My current sig is much more indicative of my point of view, thanks.

Tim

Tim,

I was trying to be clever and funny and had no mean-spirited intentions. If I crossed the line into something personal, please accept my apologies. I respect your approach to things and consider you an articulate intellectual who goes his own way. No doubt a few years back I misinterpreted your approach as some sort of 'anti-me' perspective. But that was then.

Have a nice weekend.
 
Tim,

I was trying to be clever and funny and had no mean-spirited intentions. If I crossed the line into something personal, please accept my apologies. I respect your approach to things and consider you an articulate intellectual who goes his own way. No doubt a few years back I misinterpreted your approach as some sort of 'anti-me' perspective. But that was then.

Have a nice weekend.
Thanks for that post, Mike. It is the spirit we like to see on WBF. I hope it is acceptable to Tim. It was very apropos for a thread that starts with "Introspection." :)
 
Tim,

I was trying to be clever and funny and had no mean-spirited intentions. If I crossed the line into something personal, please accept my apologies. I respect your approach to things and consider you an articulate intellectual who goes his own way. No doubt a few years back I misinterpreted your approach as some sort of 'anti-me' perspective. But that was then.

Have a nice weekend.

Thanks for that, Mike. We're good.

Tim
 
Tim,

IMHO your signature is a permanent, ambiguous and unfriendly accusation against high-end people. Just because you do not want to consider the facts as reported in good faith and analyze them does not imply they do not exist.

We all have opinions. Enjoy yours.

Tim
 
Hi guys,

As you know, I’m just here to meet chicks and party, so this will have to be brief (for me).

Take any music encoder: AAC, MP3, WMA, etc. They all work in the frequency domain. They take your music with its arbitrary waveform, decompose it into the individual frequencies that make it up (using a DCT), perform data reduction, and then convert it all back to the time domain in the decoder/player. What comes out of that system is clearly music, which demonstrates that decomposition into individual frequencies works just as the theory says it does.
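
That time-to-frequency-and-back round trip is easy to check numerically. Here is a minimal sketch with NumPy; a toy multi-tone signal stands in for music, and no actual codec is involved:

```python
import numpy as np

# Build a toy "musical" waveform: several tones plus a decaying envelope,
# so the time-domain shape looks nothing like a single sine.
fs = 48_000                        # sample rate, Hz
t = np.arange(fs) / fs             # one second of audio
x = (0.5 * np.sin(2 * np.pi * 220 * t)
     + 0.3 * np.sin(2 * np.pi * 440 * t + 0.7)
     + 0.2 * np.sin(2 * np.pi * 1760 * t)) * np.exp(-2 * t)

# Forward transform: decompose into individual frequency components.
X = np.fft.rfft(x)

# Inverse transform: resynthesise the waveform from those components.
x_back = np.fft.irfft(X, n=len(x))

# Round-trip error is at machine precision: the arbitrary waveform
# really is fully described by its frequency coefficients.
print(np.max(np.abs(x - x_back)))
```

Nothing about the waveform's complexity prevents it from being represented as a set of frequency coefficients; the complexity shows up in *which* coefficients are non-zero, not in whether the decomposition works.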

Not only that, we lose a whole bunch and gain some new ones, too. Win!

Nine Inch Nails “Satellite” Audiophile Master 24/48 WAV


[Attachment: Screen Shot 2015-09-06 at 8.14.40 pm.jpg (spectrogram)]


Nine Inch Nails “Satellite” Audiophile Master 16/44.1 192 Kbps MP3


[Attachment: Screen Shot 2015-09-06 at 8.16.17 pm.jpg (spectrogram)]


(At the same point in time, the overall frequency of the WAV is a low 52.97 Hz in the left channel, while the overall frequency of the same channel of the MP3 is 20881.02 Hz.)

Phelonious Ponk said:
You didn't actually answer the question, but I'll take that as a no.

Tim, a heuristic method of enquiry has a lot of downsides. Thankfully, most of those downsides are well documented, allowing one to avoid falling prey to them by asking questions to uncover biases. The same is not true of those who pursue an algorithmic line of enquiry. Those types come to conclusions quickly, and show great fortitude when asked to move from them, simply restating the conclusion they previously arrived at. When applied to other forms of social phenomena, it’s called “fundamentalism”.

If I haven’t answered no, it’s not because I’m being intentionally evasive. I’m simply looking at a set of possibilities, and asking questions about them.

Phelonious Ponk said:
If you're right, I wonder how audio engineers and designers develop/improve/QC the equipment they build and sell. They must listen to every chip, every tube, every driver. These sub components are, after all, consistent or not based on measurements and chosen on specs. These builders must listen to every component before selling it, and throw those that don't sound like the reference component in the dumpster. Given all of that, high-end is a heck of a bargain.

Let me encourage you - again - to ask someone other than me who actually designs and implements concepts regarding audio components (many of whom frequent this very site), rather than persist with an assumption that they “must” engage in a process that appears to offend your sensibilities.

Fitzcaraldo215 said:
Well, I do not see that you have proven anything, in spite of your caveats. Yes, of course, all acoustic instruments, and even most electronic ones, have their own complex spectrum of harmonics on top of the fundamental. So, comparing pure, electronically generated tones containing no harmonics with an actual instrument containing many is weak as a starting point, though you try to explain it. Why would you even start your argument with that obvious spectral mismatch?

But, sheer complexity of instrument and musical harmonic spectra, in and of itself, does not invalidate Fourier, though you claim, somehow, it does. Sure, it makes the Fourier analysis more complex in the process of trying to get down to the sine/cosine wave analytical components of the waveform, but if I have a choice between believing Fourier and believing you, hmmm. Fourier has lasted for a long time and you have not won the Nobel Prize, not yet, at least.

I’m not asking anyone to believe anything. I have little interest in “proving” something, and zero interest in belief. Tim’s post, remember, specifically asked:

Phelonious Ponk said:
...whether or not (I) think thorough sine wave measurement is indicative of the performance of equipment when playing music...

In response I compared the two variables (sine waves, music) to one another using the exact same tones where one was generated by sine waves and the other via a piece of music. The comparison neither invalidates Fourier, nor asks you to “believe” anyone, especially me. The graphs are there for illustration, not to convince you of anything.

Don Hills said:
… as will any combination of sine waves, as you illustrated.
But the complexity of "real" music compared with sine waves is irrelevant to the point, that "the performance of the system when playing music" can be judged by measuring how accurately the "amplitude-over-time" relationship of the output of the system matches the input. You can accurately measure that with sine waves. (I'll grant you'll need more than one.) The goal is "a straight wire with gain". Of course, the devil is in the details of achieving that goal.

Thanks, Don. So far as it appears to me, no one has achieved that goal. Again, this seems to be a problem of description of what we think the mechanism is, rather than a problem of whether it is actually possible to achieve.
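
Don's point about sine-based measurement can be made concrete. Below is a sketch of the usual approach: feed a pure tone through a system, then compare the output spectrum to the input. A deliberately nonlinear `tanh` stage stands in for a hypothetical imperfect amplifier here; it is an illustration, not a model of any real device.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
f0 = 1_000                        # test tone, Hz (an exact FFT bin here)
x = 0.9 * np.sin(2 * np.pi * f0 * t)

# Stand-in for a real amplifier: a mildly nonlinear transfer function.
y = np.tanh(x)

# Energy at f0 is the wanted signal; energy at 2*f0, 3*f0, ... is
# harmonic distortion the system added to the input.
Y = np.abs(np.fft.rfft(y)) / len(t)
fund = Y[f0]
harmonics = np.sqrt(sum(Y[k * f0] ** 2 for k in range(2, 6)))
thd_db = 20 * np.log10(harmonics / fund)
print(f"THD: {thd_db:.1f} dB")    # tanh adds odd harmonics, so well above -inf
```

A "straight wire with gain" would put all output energy at `f0`; anything appearing at the harmonics is a measurable departure from that ideal.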

Groucho said:
It is a perennial dilemma in discussions of this sort: state that a system is linear and open oneself up to the 'gotcha' that a real world distortion level of -10000000dB means it is not linear hence the argument is invalidated. Or to pre-empt the point with a qualification such as "to all intents and purposes", "not literally" etc. in which case one invokes the 'gotcha' oneself.

If I'm not mistaken, the thread is titled “Introspection and hyperbole control”. Stating a system is linear when it cannot be, or that a component could have a “real world distortion level of -10000000dB” seems a little like hyperbole to me. However, it’s fully possible I may have a different perspective than others on what is hyperbole and what is not.

Groucho said:
A real world amplifier does, indeed, distort the signal (as does the very air between speakers and ears) and to a higher level when driving a difficult load. It is a question of being sensible about what constitutes audible distortion, and also doing something about it. I do know that active speakers greatly reduce the load on the amplifier, so my system is active. Is yours? If not, why not?

I’ve used Genelecs, PMCs, JBLs, Dynaudios and lived with ATCs for a long time. I also went active with a Naim system, and fooled around with a Meridian one. The ATCs were incredible tools for tracking and mixing and were I in the position of setting up another studio, I wouldn’t hesitate to go ATC again. But that’s not to say they would be my first choice for domestic listening.

And why not? In domestic situations, I prefer other sorts of compromises to the ones offered by active systems. And yes, I consider all systems to be a series of compromises which I am given the choice of preferring or not.

Amirm said:
I hear you, but it seems that a bunch of us are arguing with 853guy, which seems a bit unfair. Let's take it a bit easy, guys.

Well, I think so far, the arguments have been mostly on-topic and haven’t descended into personal attacks, so I’m doing good. Everyone else?

Phelonious Ponk said:
I’m still trying to ascertain 853guy's position. When I do, then maybe I'll argue with him. But I'll be gentle.

I’m not sure I can make it any clearer than what I put in my posts. Granted, I don’t have anywhere near your post count, but I doubt it would make any difference even if I upped it to your level.

Maybe we don’t see the world the same way. I consider “fidelity/accuracy/truth to the recording” to be neither ideologically robust nor conceptually possible, and for some of the reasons I’ve mentioned in this thread. I consider sine waves to be incredibly useful as tools for understanding electro-mechanical phenomena, but not to be wholly equivalent with the musical waveforms of a time-based art form, which is problematic in light of the fact that there is no perfectly precise measurement of time. I consider calling something “linear” when it is not to be a form of exaggeration, and intellectually dishonest. I consider a reductionist methodology for choosing components to contain little utility value apropos a mechanism designed primarily to elicit emotions in the listener. Most of all, I consider progress to be non-linear in almost all spheres of life. That is: For every breakthrough and improvement we make in one domain, very often there is a tradeoff in another.

I’ve tried to illustrate why I think I know some of those things, with the caveat that I may change my mind at any stage when confronted with new information or indeed, a new experience. I’ll continue to question things many others take for granted, not necessarily because I think they are “wrong”, but because a heuristic method of enquiry benefits most from a process of self-learning.

But that’s far from suggesting that you, or anyone else, need to think those things or approach them in the manner I have. It’s of no import to me. But I’d prefer not to be forced to choose between the two false choices of knowledge without experience or experience without knowledge, which in-and-of-themselves only seem to serve agendas, whether they be implicit or explicit.
 
Maybe we don’t see the world the same way. I consider “fidelity/accuracy/truth to the recording” to be neither ideologically robust nor conceptually possible, and for some of the reasons I’ve mentioned in this thread. I consider sine waves to be incredibly useful as tools for understanding electro-mechanical phenomena, but not to be wholly equivalent with the musical waveforms of a time-based art form, which is problematic in light of the fact that there is no perfectly precise measurement of time. I consider calling something “linear” when it is not to be a form of exaggeration, and intellectually dishonest. I consider a reductionist methodology for choosing components to contain little utility value apropos a mechanism designed primarily to elicit emotions in the listener. Most of all, I consider progress to be non-linear in almost all spheres of life. That is: For every breakthrough and improvement we make in one domain, very often there is a tradeoff in another.

Not trying to nitpick. Just trying to better understand where you are coming from. But, also to disagree on that one paragraph in particular.

I am not sure why you reject “fidelity/accuracy/truth to the recording” as lacking ideological robustness. Yes, I agree there are practical problems in knowing exactly what sounds were recorded relative to the live source, starting with the mics. That may be only approximate and subject to skillful choices by recording engineers. But, I think the concept of faithfulness and fidelity to the source is still a good one at every single intermediate step in the recording and reproduction chain, even if we cannot perfectly achieve it in practical terms. Nothing is perfect, but what are the ideals we strive for? And, if fidelity to the input source, step by step for each component, is not the ideal, I am not clear from your posts what the replacement ideal should be or why that would be more robust.

Yes, there is no perfectly precise measure of time. There is no perfectly precise measurement of anything. But, when do the imperfections in precision become undetectable by humans? Surely, human perception is not infinite in its powers of resolution. In audio, there is a threshold of audibility, which can be scientifically arrived at through experimentation. That science, of course, is debated, peer reviewed, experimentally duplicated and verified, so it evolves over time, leading to more accurate and more precise determinations. Eventually, audio equipment becomes close to or beneath the threshold of audibility so that it is "good enough". I am not saying we are there yet in all instances, even with state of the art equipment. But, we have gotten much closer in my lifetime, even noticeably to me within the last decade, as far as I can tell from measurements I see and from my own subjective impressions.

We can further debate whether audio recording and reproduction is "designed primarily to elicit emotions in the listener". Which emotions specifically at what point in a recording and how do we know whether the system is accurately delivering them, if that is indeed its true design objective?

Of course, I disagree. Audio recording and reproduction systems are designed to capture and transmit sound as accurately as possible, ideally to get out of the way of the music, speech or other sounds. The emotion is conveyed by the music or artistic content at a higher psychic level than the sound that carries that content to our lower level sensory inputs, our ears.

The sound is merely the highway carrying the vehicles containing the artistic message and emotions. The perception of the sound is also more consistent, though not identical, from person to person, whereas the perception of the artistry and emotional responses vary all over the place, even in the same person on repeated listening. Many totally external factors affect our moods, emotions and receptivity to the artistic message. And, those responses to the art and the emotions they trigger do not necessarily need first rate sound transmission in order to be appreciated and enjoyed. Though, I grant you, the best and most involving experiences of the art and emotion of music are achieved with the best sound. I would not be here if I did not believe that.
 
Let me encourage you - again - to ask someone other than me who actually designs and implements concepts regarding audio components (many of whom frequent this very site), rather than persist with an assumption that they “must” engage in a process that appears to offend your sensibilities.

I don't need to ask; I know if they are competent engineers they measure. And I know they listen. Listening doesn't "offend my sensibilities," it is necessary as well.

I’m not asking anyone to believe anything. I have little interest in “proving” something, and zero interest in belief.
If I'm not mistaken, the thread is titled “Introspection and hyperbole control”. Stating a system is linear when it cannot be, or that a component could have a “real world distortion level of -10000000dB” seems a little like hyperbole to me. However, it’s fully possible I may have a different perspective than others on what is hyperbole and what is not.

I don't think there is anyone here who doesn't understand that no system can be perfectly linear, which is why I think everyone here understands how that term is used when discussing audio.

Tim
 
Not only that, we lose a whole bunch and gain some new ones, too. Win!

Nine Inch Nails “Satellite” Audiophile Master 24/48 WAV
View attachment 22195

Nine Inch Nails “Satellite” Audiophile Master 16/44.1 192 Kbps MP3
View attachment 22196
This is all unrelated to the point I was making. Every lossy audio codec has three stages:

1. Conversion to frequency domain.
2. Reduction of resolution (and filtering, in the case of MP3/low bit rates) by psychoacoustic measures.
3. Lossless compression of frequency coefficients.

You can delete #2 and what you get back is what you put in. This demonstrates that individual frequencies do make up your music, which was the discussion at hand, not what lossy compression does to audio (i.e., #2).
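
The three stages, and the claim that skipping #2 gives back exactly what you put in, can be sketched with SciPy's DCT. Coarse uniform quantisation stands in for the psychoacoustic model below, which is a deliberate oversimplification:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)           # one frame of "audio" samples

# Stage 1: time -> frequency (type-II DCT, as in the MP3/AAC family).
coeffs = dct(frame, type=2, norm='ortho')

# Stage 2: data reduction, modelled here as fixed-step quantisation.
# (Real codecs choose the step per band from a perceptual model.)
step = 0.25
quantised = np.round(coeffs / step) * step

# Stage 3 would be lossless entropy coding of the coefficients (omitted).

# Decoder: frequency -> time.
lossy = idct(quantised, type=2, norm='ortho')
lossless = idct(coeffs, type=2, norm='ortho')

print(np.max(np.abs(frame - lossless)))     # machine precision: skip #2, get input back
print(np.max(np.abs(frame - lossy)))        # nonzero: stage 2 is the only loss
```

The transform pair itself is exactly invertible; all of the "lossy" in a lossy codec lives in stage 2.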

Discussing that part now, you cannot use this type of visual tool to analyze fidelity loss. The tool is psychoacoustically blind. If the fire alarm goes off in your office, you won't be hearing the sound of your computer fans. I can make a recording of that situation and take out the fan noise or leave it in there and both would sound the same to you. Yet, a spectrogram would most definitely show the difference.

Such measurements are popular on the Internet because they are easy to create, but they generate meaningless data (sans the filtering). This is why the industry and researchers rely entirely on listening tests for evaluating lossy audio codecs. If such measurements worked, we would not waste our time with slow listening tests.

So if you are going to complain about loss of fidelity in lossy codecs, you need to show the results of a listening test. Unfortunately, the vast majority of the population, including audiophiles, fail such tests. Which is as it should be, because the lossy codec "knows" what it can take out that should not be audible. Therefore the odds are against most people hearing such massive data reduction (as much as 90% thrown away).

If you doubt what I just said, try to see if you can pass these tests: http://www.whatsbestforum.com/showt...unds-different&p=279463&viewfull=1#post279463

Anyway, the theory of decomposition of music into frequency domain and a collection of individual frequencies cannot be doubted. You just used it yourself in that spectrogram. :)
 
I don't need to ask; I know if they are competent engineers they measure. And I know they listen. Listening doesn't "offend my sensibilities," it is necessary as well.

Hi Tim,

I don’t have much time, but I’m curious as to who “they” might be exactly?

If “they” do, as you say you know, measure, and do, as you say you know, listen, could you tell me what “they” are measuring and what they’re listening to? A quick glance at the huge disparity in how components measure within any given category (speakers, for instance) would seem to indicate that either they’re not all measuring the same things, or they’re not all listening for the same things, wouldn’t it?

It could of course be that all engineers are incompetent, given the level of non-linearity of all real-world speakers. Or it could be that, in the real world, speaker designers are human beings exercising preference for certain compromises, and that how a speaker measures in an anechoic room on- and off-axis is not considered the definitive measure of the worth of a speaker designed to play music in real-world rooms.

I don't think there is anyone here who doesn't understand that no system can be perfectly linear, which is why I think everyone here understands how that term is used when discussing audio.

So when someone uses the term “linear” to describe a mechanism that is not and cannot be linear, that’s fine, and yet, when someone uses the term “musical” apropos a mechanism designed to play back music, they’re berated for using meaningless and indefinable terminology? Which one is more hyperbolic to you?

--

amirm said:
This is all unrelated to the point I was making. Every lossy audio codec has three stages:

Actually, Amir, it was you who introduced lossy codecs into the discussion, not me (previous to your post, no one mentioned lossy codecs), unrelated to the point I was making, which was simply to suggest that perhaps steady-state sine waves, defined as they are by curves that are smooth and repetitive in oscillation, are not wholly equivalent to musical waveforms, which are anything but smooth and repetitive.

amirm said:
You can delete #2 and what you get back is what you put in. This demonstrates that individual frequencies do make up your music which was the discussion at hand. Not what lossy compression does to audio (i.e. #2)….

…Anyway, the theory of decomposition of music into frequency domain and a collection of individual frequencies cannot be doubted. You just used it yourself in that spectrogram.

No, that wasn’t the discussion at hand; that’s your interpretation of the discussion at hand, and what’s more, it appears to be an effort by you to drag the discussion into a domain in which you have the greater experience (lossy codec development):

amirm said:
Take any music encoder, AAC, MP3, WMA, etc… What comes out of that system is clearly music which demonstrates that decomposition into individual frequencies works as theory proves it does.

The discussion at hand was to compare the symmetrical nature of sine waves with the asymmetrical nature of a musical waveform. Furthermore, it was not to suggest that a musical waveform could not be deconstructed into a collection of individual frequencies, but that music, even when analysed as either a waveform or as a spectrogram, is more than just its constituent frequencies, given that music is always amplitude and pitch over time, constantly modulating.

amirm said:
So if you are going to complain about loss of fidelity in lossy codecs, you need to show the results of a listening test. Unfortunately vast majority of population including audiophiles fail such tests. Which is as it should be because the lossy codec "knows" what it can take out that should not be audible. Therefore the odds are against most people hearing such massive data reduction (as much as 90% thrown away).

If you doubt what I just said, try to see if you can pass these tests: http://www.whatsbestforum.com/showth...l=1#post279463

I’m not complaining, I’m comparing - that’s quite a significant distinction, and an important one (I’m listening to Spotify right now… no complaints).

As a side note, because again, I have no interest in telling anyone what to do or what to think, could I ask you if you wouldn’t mind refraining from being passive-aggressive in your posts? When you type that “if (I) am going to complain about loss of fidelity in lossy codecs, (I) need to show the results of a listening test” (emphasis mine), you come across as quite self-righteous. It suggests that in order to participate on this forum, I should satisfy a demand you have set for yourself apropos your own standards, while perhaps not realising that I may not share those standards, nor value yours in the way you do.

What’s more, I do not need to pass any test in order to doubt, question or challenge either your posts, ideas or experiences, and I certainly do not need to do what you say, just because you say it.

Nevertheless, I was quite interested in whether I’d be able to tell the difference, and here are my results, using ABXTester (I’m on a Mac), with two of the test files from the AVS_AIX_HRA_Test_Files_2 folder (Mosaic_A2.wav and Mosaic_B2.wav). I’m not sure these are the files I was “supposed” to use, but I did the test three times, and the screenshots are posted below.


[Attachments: ABXTest 1.jpg, ABXTest 2.jpg, ABXTest 3.jpg (screenshots of the three ABX runs)]

I have no idea what that “proves”, if anything (that I’m not perfect?). But again, I’m not here to prove anything anyway, not to myself, and not to anyone else. But I was curious and ran the test. It’s entirely possible my methodology was flawed in using the same two tracks three consecutive times, but the triangle strike at 0:50 tended to be the one element that allowed me to identify what I thought were the most discernible differences between the two tracks, and perhaps if I find more time I’ll try the other two tracks.
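
For anyone wanting to interpret ABX results like these, what a score "proves" is a simple one-sided binomial question: how likely is at least that many correct answers from pure guessing? A short sketch (the trial counts in the example are made up for illustration, not taken from the screenshots above):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by guessing alone (one-sided binomial, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 right out of 16 trials:
print(abx_p_value(12, 16))   # ~0.038, under the usual 0.05 threshold
```

A low p-value suggests the listener heard a real difference; a high one means the score is consistent with chance, which is why short runs are hard to draw conclusions from.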
 
Not trying to nitpick. Just trying to better understand where you are coming from. But, also to disagree on that one paragraph in particular.

I am not sure why you reject “fidelity/accuracy/truth to the recording” as lacking ideological robustness. Yes, I agree there are practical problems in knowing exactly what sounds were recorded relative to the live source, starting with the mics. That may be only approximate and subject to skillful choices by recording engineers. But, I think the concept of faithfulness and fidelity to the source is still a good one at every single intermediate step in the recording and reproduction chain, even if we cannot perfectly achieve it in practical terms. Nothing is perfect, but what are the ideals we strive for? And, if fidelity to the input source, step by step for each component, is not the ideal, I am not clear from your posts what the replacement ideal should be or why that would be more robust.

Yes, there is no perfectly precise measure of time. There is no perfectly precise measurement of anything. But, when do the imperfections in precision become undetectable by humans? Surely, human perception is not infinite in its powers of resolution. In audio, there is a threshold of audibility, which can be scientifically arrived at through experimentation. That science, of course, is debated, peer reviewed, experimentally duplicated and verified, so it evolves over time, leading to more accurate and more precise determinations. Eventually, audio equipment becomes close to or beneath the threshold of audibility so that it is "good enough". I am not saying we are there yet in all instances, even with state of the art equipment. But, we have gotten much closer in my lifetime, even noticeably to me within the last decade, as far as I can tell from measurements I see and from my own subjective impressions.

We can further debate whether audio recording and reproduction is "designed primarily to elicit emotions in the listener". Which emotions specifically at what point in a recording and how do we know whether the system is accurately delivering them, if that is indeed its true design objective?

Of course, I disagree. Audio recording and reproduction systems are designed to capture and transmit sound as accurately as possible, ideally to get out of the way of the music, speech or other sounds. The emotion is conveyed by the music or artistic content at a higher psychic level than the sound that carries that content to our lower level sensory inputs, our ears.

The sound is merely the highway carrying the vehicles containing the artistic message and emotions. The perception of the sound is also more consistent, though not identical, from person to person, whereas the perception of the artistry and emotional responses vary all over the place, even in the same person on repeated listening. Many totally external factors affect our moods, emotions and receptivity to the artistic message. And, those responses to the art and the emotions they trigger do not necessarily need first rate sound transmission in order to be appreciated and enjoyed. Though, I grant you, the best and most involving experiences of the art and emotion of music are achieved with the best sound. I would not be here if I did not believe that.

Hi Fitz,

Thanks for taking the time to reply. I really appreciate your thoughts, and when I'm not so pushed for time will try and put together some thoughts of my own.

Cheers.
 
Actually, Amir, it was you who introduced lossy codecs into the discussion, not me (previous to your post, no one mentioned lossy codecs), unrelated to the point I was making, which was simply to suggest that perhaps steady-state sine waves, defined as they are by curves that are smooth and repetitive in oscillation, are not wholly equivalent to musical waveforms, which are anything but smooth and repetitive.
Let me explain the argument again: all lossy codecs rely on the representation of single-tone sine waves in each frame of audio. The bit stream they transmit is the set of coefficients for those sine waves (really cosines, but that is not important here). That the result sounds completely like music, with all of its complexity, proves that you can indeed represent music either in the time domain, with its complex visual shape, or in the frequency domain, as an orderly set of frequencies.

Note that each frame of audio is made up of a different set of coefficients for those frequencies. So they don't stay the same and keep repeating. No one is saying your music is one set of sine waves that never changes. If that were so, we could compress any piece of music down to almost nothing!

What is said is that for any selection of audio samples, we can decompose them into fundamental frequency components. It matters not how complex they look to your eyes in the time domain. The math allows that decomposition. Otherwise no lossy codec would sound like music, when they clearly do. Complexity and all.

Now, there can be issues such as windowing that cause some degradation of sound, but that is not for the reasons you mention, i.e. that music is too complicated to be decomposed.
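
The frame-by-frame point is easy to see numerically: each frame gets its own coefficient set, and those coefficients track the changing signal. A sketch with SciPy's DCT, using a toy signal that switches pitch halfway through as a crude stand-in for evolving music:

```python
import numpy as np
from scipy.fft import dct

fs = 48_000
t = np.arange(fs) / fs
# 440 Hz for the first half second, 880 Hz for the second half.
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

# Frame-by-frame analysis, 1024 samples per frame, as a codec would do.
n = 1024
frames = x[: len(x) // n * n].reshape(-1, n)
coeffs = dct(frames, type=2, norm='ortho', axis=1)

# The dominant coefficient moves when the signal changes: the per-frame
# frequency description is different for each frame, it never "freezes".
early = np.argmax(np.abs(coeffs[0]))    # peak bin in an early (440 Hz) frame
late = np.argmax(np.abs(coeffs[-1]))    # peak bin in a late (880 Hz) frame
print(early, late)                      # late peak sits near twice the early one
```

This is the sense in which a codec's "orderly set of frequencies" still captures music that is constantly modulating: the set is re-derived for every frame.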
 
Fitzcaraldo215 said:
Not trying to nitpick. Just trying to better understand where you are coming from. But, also to disagree on that one paragraph in particular.


Thanks, Fitz, I appreciate your genial approach, it makes for a much more pleasant discussion.

Fitzcaraldo215 said:
I am not sure why you reject “fidelity/accuracy/truth to the recording” as lacking ideological robustness. Yes, I agree there are practical problems in knowing exactly what sounds were recorded relative to the live source, starting with the mics. That may be only approximate and subject to skillful choices by recording engineers. But, I think the concept of faithfulness and fidelity to the source is still a good one at every single intermediate step in the recording and reproduction chain, even if we cannot perfectly achieve it in practical terms. Nothing is perfect, but what are the ideals we strive for? And, if fidelity to the input source, step by step for each component, is not the ideal, I am not clear from your posts what the replacement ideal should be or why that would be more robust.

Firstly, because fidelity, accuracy and truth are concepts worthy of a thousand writers producing a thousand books in and of themselves, and even then, we might be only scratching the surface. Only one of those - accuracy - has any measurable way of being quantified, and often imperfectly, say when talking about the delineation of time. Fidelity and truth - holy moly - they’re socially-prescribed ideological abstract notions that, while universally acknowledged to be essential to the human experience, are almost impossible to realize, let alone concretely define relative to our cultural, religious, political and social practices.

Secondly, apropos the recording process, it has many variables that, as you intimate, cannot be reduced down to merely idealized practices. Like you say, every aspect of the recording process is subject to choices made by human beings based on their own internal volition, and wholly disparate from any external concept of how music “should” be recorded. There’s no standard. Mic selection and mic placement alone will vary from engineer to engineer, and from session to session.

That’s to say, every engineer has their own set of biases which they exercise on each session, dependent on variables of which many are not under their control. We know a Gibson will have different tone and timbre to a Martin acoustic, and will therefore perhaps require either a different mic, or set of mics, or mic placement, or instrument placement within the room, and those things will vary depending on the person playing the instrument, and potentially, the song, and the acoustic guitar track’s ultimate placement within the mix.

Yes, I’m aware there are engineers who like to say they are going for the most “faithful/accurate/truthful” representation they can achieve, but I don’t know of any who use an externalised objective reference for doing so - they use their ears and experience. That is: Every engineer is using a subjectivized internal rationale in the recording process, even if they are starting with mics and mic-pres that are objectively low in self-noise and non-linearities. (Consider the subjectivized process of selection between mics that are relatively low in self-noise and non-linearities, of which there is no agreed upon reference, the various contenders not limited to DPA, Earthworks, Schoeps, Josephson, Microtech-Gefell, et al, and those are just the SDCs.) Even in the most stripped-down and minimalist session, there’s no universally agreed upon standard for miking a cello, what mic to use, what mic-pre to choose, what converter, what DAW software, and what monitoring chain one employs - though monitoring is strictly not in the recording chain, it nevertheless informs the decision-making process. One record label’s “ultra-minimalist” setup will differ from another label’s setup. And this is all before it’s mixed (or not) and mastered, and by whom. A recording given to one mastering engineer will be subject to preferences that differ from the preferences of another. These are all decisions made by human beings in the moment, and most of them are doing it relative to individual preference. A recording is a subjectivized process, very rarely if ever an objectivized one, and overwhelmingly governed by individualized preference, and often only that.

Of course, we end up with a finalized product, “the recording”, but even that can be remixed, remastered and/or individually mastered for a specific medium or format, and always by a human being exercising individualized preference.

Is it not the same for all components? Human beings conceive them, using a variety of practices, but at the end of the day, an Atmasphere amplifier is a component developed using a set of preferences (many of which can be and are measured), not dissimilar conceptually to other OTL amplifier manufacturers, but differing in implementation relative to, say, Berning, Einstein or Joule-Electra. Which one is more objectively “accurate”? Easy to measure. But which one do we buy? Well, our ideals seem to fall short here, and we usually buy what suits our preferences best, not limited to real-world considerations such as room size, electrical supply, familial situation, disposable income levels, dealer/manufacturer support and, sometimes, how well it captures our psyche as an object of desire.

Should there be “a replacement ideal”? Perhaps, but I could not say what that might be, given that we seem to practice preference, even if that is not acknowledged as an ideal. So it’s difficult for me to consider “fidelity/accuracy/truth to the recording” to be much of a robust ideal when we quote Toole over and over and buy various iterations of stats, planars, dipoles, bipoles, omnis, hybrids with AMTs, horns, and open baffle speakers as a practice, to use just one example. If “fidelity/accuracy/truth to the recording” truly was a universal ideal among audiophiles and music-lovers, then we should see a far greater degree of homogeneity in our real-world choices of components with the fewest objectively measurable non-linearities. Even on this forum - surely, one of the most niche of niches - that’s not even half-true, our experiential practices contradict that notion every day. Even Tim, bless him, was willing to admit his ideology of “fidelity/accuracy/truth to the recording” to be conceptual, with no robust practice of ascertaining it other than his own internalised preferences. And I would hazard a guess that he is far from alone in that.


Fitzcaraldo215 said:
Yes, there is no perfectly precise measure of time. There is no perfectly precise measurement of anything. But, when do the imperfections in precision become undetectable by humans? Surely, human perception is not infinite in its powers of resolution. In audio, there is a threshold of audibility, which can be scientifically arrived at through experimentation. That science, of course, is debated, peer reviewed, experimentally duplicated and verified, so it evolves over time, leading to more accurate and more precise determinations. Eventually, audio equipment becomes close to or beneath the threshold of audibility so that it is "good enough". I am not saying we are there yet in all instances, even with state of the art equipment. But, we have gotten much closer in my lifetime, even noticeably to me within the last decade, as far as I can tell from measurements I see and from my own subjective impressions.


I agree. But the measurements we use for objectively quantifying the non-linearities of our components are well-known, and almost without exception universally accepted and adopted. What is detectable by humans or not is still a nascent science, given we are discovering hearing is not an isolated experience, tied as it is to our perception, emotion and memory, and the research is still ongoing. So we’ve not yet reached an equilibrium between the two sciences, or indeed the two mechanisms at work. Once we better understand the non-linear human hearing mechanism and its relationship to our neurobiological processes, I think we might gain greater confidence in exploring the relationship between a time-based art form being played back via a non-linear mechanism of interdependency and its effect on our perception of it, relative to the measurements of both mechanisms, one electro-mechanical in nature, the other neurobiological.


Fitzcaraldo215 said:
We can further debate whether audio recording and reproduction is "designed primarily to elicit emotions in the listener". Which emotions specifically at what point in a recording and how do we know whether the system is accurately delivering them, if that is indeed its true design objective?


I don’t think we do know. There’s very little data available. But I consider this alone to be one of the most fundamental and under-researched areas of our experience that could do with further robust scientific exploration. “To accurately deliver the emotions captured in the recording” could, of course, become a new ideal to strive for (given that every musician I’ve ever met has never contradicted the practice of telegraphing some aspect of their personhood into the recording process, and humans are very rarely not emotional when making music), but until the research is able to consistently verify this via our neurobiological mechanism while listening to music via various reproduction mechanisms, and a correlation of emotional veracity is established, we’ll only be raising up another ideal as unobtainable as the other.


Fitzcaraldo215 said:
Of course, I disagree. Audio recording and reproduction systems are designed to capture and transmit sound as accurately as possible, ideally to get out of the way of the music, speech or other sounds.


For many of the reasons I mention above - firstly, that while “accuracy” may be the claim of minimalist labels and engineers, they’re not measuring that objectively, only subjectively, and secondly, that the recording/reproduction chain is a non-linear and interdependent mechanism - I think the best that we could say is that “audio recording and reproduction systems are designed to capture and transmit music, speech and other sounds”, with no other qualifiers or secondary aims. Again, this is just my current thinking, and I have no interest in attempting to modify anyone else’s thinking on it.


Fitzcaraldo215 said:
The emotion is conveyed by the music or artistic content at a higher psychic level than the sound that carries that content to our lower level sensory inputs, our ears.

The sound is merely the highway carrying the vehicles containing the artistic message and emotions. The perception of the sound is also more consistent, though not identical, from person to person, whereas the perception of the artistry and emotional responses vary all over the place, even in the same person on repeated listening. Many totally external factors affect our moods, emotions and receptivity to the artistic message. And, those responses to the art and the emotions they trigger do not necessarily need first rate sound transmission in order to be appreciated and enjoyed. Though, I grant you, the best and most involving experiences of the art and emotion of music are achieved with the best sound. I would not be here if I did not believe that.


I don’t find a lot to disagree with there, Frantz. Like I say above, and am really rehashing here as previously stated in Peter A’s “Audio Science” thread, the neurobiological mechanism is still being explored and my personal thinking is that rather than shoving ever-more-forceful objective measurements of the reproduction mechanism down each other’s throats to suck on, I’d really like to see more robust scientific research like that conducted by Levitin Labs being applied to how the non-linear reproduction mechanism affects our non-linear human hearing mechanism, and whether greater degrees of objectively measured non-linearities do indeed correlate with greater neurobiological activity from the listener, and if so, why is there such a disparity among us as to the systems we assemble, the formats we use, and the technologies we employ. “Loving distortion and colouration” is a well-worn trope that suits some who prefer a reductionist view of the music reproduction mechanism, but to me, it’s nothing more than a dismissive and defensive posture in order to hang onto a world-view that prefers what we think we know to what we still need to know.

Again, thanks for the interesting discussion.
 
INTROSPECTION AND HYPERBOLE CONTROL REPORT

I was disappointed to perceive in Steven Plaskin’s review in Audiostream (August 27, 2015) of several Shunyata Research products a lack of introspection and an overdose of hyperbole. (Please note that nothing herein is intended to be a comment on any Shunyata Research product. I appreciate the sonic improvements resulting from the use of Shunyata Research products, and the great respect many WBF members have for them. I am commenting solely on Steven’s review style.)

In the original post on this thread I asked members to be careful about hyperbole and unintended exaggeration, and to think long and hard about the sonic difference one is evaluating and the magnitude of the sonic difference one is evaluating, and the relative importance of the change in question versus the “before” sound of one’s system overall. These admonitions apply a fortiori to reviewers.

“[T]he HYDRA TRITON V2 provided a new acoustic experience for my music that was truly exemplary.”

A whole new experience? Really? What was your prior experience? What is the magnitude of the change? Compared to what? If this was “truly exemplary” then what was the sound of your system before? What other specific points exist on your spectrum of “exemplary” besides “truly”?

“In fact, the reproduction of high end detail and transients were the best I have yet heard from any power conditioner.”

How many other power conditioners have you heard? Ten? Twenty? Twenty-five percent of all high-end audio power conditioners commercially available? Which other power conditioners have you heard? How does each of them compare sonically to the Shunyata product?

Steven’s statement that it is the “best I have yet heard” tells us almost nothing. How does he even know? Does he really have the perfect aural memory to discern how the power conditioners he has auditioned previously, undoubtedly in different systems and in different listening rooms, sounded compared to what he hears from his current system with the Triton v2? What is the point of a reviewer making such a vague and idiosyncratic comment? How does that help us understand anything useful? He does not give us any frame of reference. His opinion is expressed in a vacuum, and refers only to his personal and unexplained frame of reference.

“The HYDRA TRITON v2 reproduces voices and music in the most natural and relaxed manner I have yet experienced.”

Compared to what? What are the components of all the other systems you have listened to (and I hope you reviewed them all in your current listening room to make comparisons even remotely valid)? How do you define natural? How do you define relaxed? What is your spectrum of natural to unnatural and of relaxed to stressed? Where do these components fall on these spectrums? Where do the other power conditioners you have reviewed fall on these spectrums? (And since when does a power conditioner reproduce anything?)

“I also experienced a more lifelike sense of instrumental body and weight with the TRITON v2.”

What does “more lifelike” mean? More lifelike than what? How much more lifelike? Is “lifelike” a sonic attribute on a spectrum which begins on one end at “corpse” and continues to “full resurrection” at the other end? What were the lifelike-ness levels of the other power conditioners you have reviewed?

“The spatial resolution of well recorded music is truly world-class with the TRITON v2.”

What is the definition of “world-class”? Does this mean you have surveyed, under controlled and identical conditions and in the same listening room, a variety of other “world-class” power conditioners? Which ones were they? How do you compare and contrast them to the Triton v2?

“The soundstage appears to be richly layered with immediacy and palpability.”

What does “richly layered” actually mean? Richly layered compared to what? How much more layered is “richly layered” than “moderately layered” or “thinly layered”? How do you define “immediacy”? How do you define “palpability”? Where on the immediacy spectrum and on the palpability spectrum does this product fall compared to competing products?

I could go on and on, but you probably understand my disappointment by now.
 
INTROSPECTION AND HYPERBOLE CONTROL VIOLATOR OF THE MONTH: (...)
I could go on and on, but you probably understand my disappointment by now.


Ron,

Curiously, I find that this review accomplishes what I expect from a subjective review. I clicked the author's name so I could see his system and read his previous reviews, to get a sense of his preferences and style. The reviewer clearly separates his own work from reproduction of the Shunyata literature. He states the aspects he felt were relatively improved and refers to a few specific recordings, which I fortunately own, that I could use to check those aspects if I wanted to listen for myself. It is hyperbolic enough to attract my attention. IMHO, in order to learn more I would need to listen for myself. But as we always say, surely YMMV.
 
Dear Francisco, It is indeed interesting that we have starkly different reactions to the style of his review.
 
... The discussion at hand was to compare the symmetrical nature of sine waves with the asymmetrical nature of a musical waveform. Furthermore, it was not to suggest that a musical waveform could not be deconstructed into a collection of individual frequencies, but that music, even when analysed as either a waveform or as a spectrogram, is more than just its constituent frequencies, given that music is always amplitude and pitch over time and constantly modulating. ...

I thought that sound came from pressure variations in the air. Microphones converted these variations into electrical signals whose amplitude varied over time. These signals are then recorded (and played back) using various technologies with different degrees of accuracy and other characteristics.

Accurately recording and reproducing amplitude over time seems to me sufficient. (Mathematical transformations along the way can be quite useful.) If you're doing this well for the "sound", the "music" (and its pitches) should come along naturally.
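As a small numpy sketch of that last point (my illustration, not drawn from the thread): if amplitude over time is captured accurately, pitch is already in the record and can be read straight out of it. Here a pure A4 (440 Hz) tone is "recorded" as one second of samples, and its pitch recovered as the dominant frequency bin; the sample rate is an arbitrary choice.

```python
import numpy as np

fs = 48000                                # sample rate in Hz (arbitrary for this demo)
t = np.arange(fs) / fs                    # one second of time stamps
signal = np.sin(2 * np.pi * 440 * t)      # the "recording": pure amplitude over time

# The pitch was never stored separately; it falls out of the amplitude record.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1.0 / fs)   # one-second window -> 1 Hz bin spacing
peak_hz = float(freqs[np.argmax(spectrum)])
print(peak_hz)  # 440.0
```

With a one-second window the frequency bins land exactly on integer Hz, so the 440 Hz tone falls cleanly into a single bin; for tones between bins, the peak would only be approximate without interpolation, but the principle is the same.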
 
Dear Francisco, It is indeed interesting that we have starkly different reactions to the style of his review.

Fortunately. Otherwise people could think we are just being influenced by the vapors of the great MB750 and the big boxes of the Transparent Audio XL-V ! ;)
 

About us

  • What’s Best Forum is THE forum for high end audio, product reviews, advice and sharing experiences on the best of everything else. This is THE place where audiophiles and audio companies discuss vintage, contemporary and new audio products, music servers, music streamers, computer audio, digital-to-analog converters, turntables, phono stages, cartridges, reel-to-reel tape machines, speakers, headphones and tube and solid-state amplification. Founded in 2010 What’s Best Forum invites intelligent and courteous people of all interests and backgrounds to describe and discuss the best of everything. From beginners to life-long hobbyists to industry professionals, we enjoy learning about new things and meeting new people, and participating in spirited debates.

Quick Navigation

User Menu