Introspection and hyperbole control

Thanks 853guy. Maybe we need a physics of audio forum.

Tim
 
Well, we need to decide whether we’re discussing a problem of mathematics or a problem of application. In mathematics or physics, linearity is easy to measure, because we have an equation defining exactly what it is and what it is not, and the robustness of that equation means we can represent the relationship between two variables graphically and get a straight line, right? There are countless examples, none of which need to be listed here, as I’m sure you covered this in grade school. So far, it doesn’t appear we have a mathematics problem.
That would be a discussion of theoretical physics and math. Applied math and physics would most definitely include measurements to prove the same. To wit, theoretical physicists postulate many things that applied physicists then try to confirm by experimentation and measurement. Higgs argued that such a boson must exist and showed it in theory. The LHC at CERN was built for billions of dollars, and its experiments and measurements eventually showed (with high probability) that such a particle does exist. They could just as well have shown that it didn't.

In audio discussions, we as users are in the applied domain, not the theoretical one.
 

Actually, in the initial post, before a quick edit, I had the word "description" rather than "application", i.e., "Well, we need to decide whether we’re discussing a problem of mathematics or a problem of description". That is, is there a problem with the maths apropos linearity, or are we using the word "linearity" improperly to describe how a real-world system measures?

I've now changed it in the original post - hope that clarifies it.
 

I'm curious about whether or not you think thorough sine wave measurement is indicative of the performance of equipment when playing music. I didn't see the answer to that question anywhere in there.

Tim
 
... A time-based art form like music, if it is to be recorded, needs to capture two variables (pitch and amplitude) against one constant (time). That is, music is a relationship between three entities in which two are modulating and one is constant, and can be graphically represented as a waveform or literally represented as a groove in a record. ...

Can you explain this in more detail? In your description above, you say that pitch and amplitude against time can be "graphically represented as a waveform". This only represents two entities, amplitude and time. Granted, you can extract the pitches and amplitudes of the components of the waveform using a Fourier transform. But when talking about (nominally) linear systems, does it not make more sense to talk in amplitude/time?
 

A single-tone waveform will produce amplitude over time, yes. As I mention above, the formula for a sine wave is y(t) = A·sin(ωt + φ), where A is the amplitude, ω the angular frequency and φ the phase, as a function of time (t). While we may not be able to deduce the exact pitch from a cursory glance, the number of oscillations per subdivision of time will allow us to hazard a guess at its relative pitch - the fewer oscillations per subdivision, the lower the pitch; the more oscillations, the higher the pitch. What’s more, a waveform that contains more than one tone at the same amplitude will differ significantly from that of a single steady-state tone, so we can immediately deduce that there’s more than one pitch at play (and in the case of the example below, three). While we may not be able to deduce much more, we can as you say use a Fourier transform to extract specific information about the pitches, and perhaps I should have added that in the original post. That’s what you get for hasty typing, I guess.
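To make that concrete, here’s a minimal Python sketch of the same idea - sample y(t) = A·sin(ωt + φ) with ω = 2πf and estimate its pitch simply by counting zero crossings per unit time. The sample rate, duration and test frequency below are arbitrary choices for illustration, not values from anything discussed above.

```python
import numpy as np

# Arbitrary parameters chosen purely for illustration
fs = 44100                       # sample rate, Hz
A, f, phi = 1.0, 139.0, 0.0      # amplitude, frequency (Hz), phase

t = np.arange(0, 0.5, 1 / fs)                 # half a second of time
y = A * np.sin(2 * np.pi * f * t + phi)       # y(t) = A*sin(2*pi*f*t + phi)

# Each full cycle crosses zero twice, so crossings / (2 * duration)
# gives a rough estimate of the pitch.
crossings = np.count_nonzero(np.diff(np.sign(y)))
print(crossings / (2 * t[-1]))                # prints roughly 139
```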

Phelonious Ponk said:
I'm curious about whether or not you think thorough sine wave measurement is indicative of the performance of equipment when playing music. I didn't see the answer to that question anywhere in there.

Let’s not worry about “thorough” for now, let’s just keep it simple.

Here are three tones - 69 Hz, 139 Hz and 207 Hz, equivalent to C#2, C#3 and G#3 respectively - generated as sine waves and combined into a single (mono) waveform:


[Attachment: waveform of the three combined sine tones]
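Such a test signal can be produced along these lines; a minimal Python sketch, where the sample rate, duration and output filename are my own arbitrary choices rather than the actual values used for the image above.

```python
import numpy as np
from scipy.io import wavfile

fs = 44100                       # assumed sample rate, Hz
dur = 1.0                        # assumed duration, seconds
t = np.arange(0, dur, 1 / fs)

# The three pitches used in the example: C#2, C#3 and G#3
freqs = [69.0, 139.0, 207.0]

# Generate each sine at equal amplitude, sum to a single mono waveform,
# and normalise so the mix stays within +/-1 before writing to disk.
mix = sum(np.sin(2 * np.pi * f * t) for f in freqs)
mix /= np.max(np.abs(mix))
wavfile.write("three_tones.wav", fs, (mix * 32767).astype(np.int16))
```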


Here are the same three notes played by a human being on a piano, recorded in the studio with close and ambient mics, taken from a well-known piece of music and likewise combined into a single (mono) waveform (1):


[Attachment: waveform of the same three notes played on a piano]


Already, significant differences are apparent just from the appearance of the waveforms. The pure tones generated in the first example have relatively symmetrical shapes showing smooth, repetitive oscillation. The second example is far less symmetrical in shape, and the transitions from oscillation to oscillation are much less smooth and consistent. That is to be expected: the mics pick up both direct and indirect sound, from multiple positions relative to the piano, which affects the overall phase relationship of the mixed signal, and the piano itself generates its own self-noise and harmonic overtones from its multiple strings, frame and soundboard/casing.

The combination of these factors is why steady-state sine waves lack the complexity of the waveforms produced by musical instruments. In music we cannot ever hear just the individual tones/pitches - we must always also hear the instrument that produced those tones, the effect on the instrument of the room it was in (unless close-miked in hypercardioid, and even then…), the room itself, not to mention the recording chain and its self-noise. Even if we were to make music solely in the digital realm using sampled instruments (bypassing the room, mic and mic-pre altogether), the simple fact that all instruments have an inherently complex timbral character of their own when producing the same note(s) means they’re never going to look like steady-state tones. This is especially true of instruments that are not tuned to intervallic pitches (cymbals, drums, etc.).
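As a rough illustration of that last point, here is a sketch that turns one of the fundamentals above into something slightly more instrument-like by adding a handful of harmonics (at made-up relative levels) and a decay envelope. A real piano note is far more complicated still, but even this no longer looks like a steady-state sine.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
f0 = 139.0                                   # the C#3 fundamental from above

# Steady-state pure tone: the textbook sine
pure = np.sin(2 * np.pi * f0 * t)

# Crude "instrument-like" tone: the same fundamental plus a few harmonics
# at invented relative levels, all under an exponential decay envelope
levels = {1: 1.0, 2: 0.5, 3: 0.3, 4: 0.15, 5: 0.08}      # illustrative only
tone = sum(a * np.sin(2 * np.pi * n * f0 * t) for n, a in levels.items())
tone *= np.exp(-3.0 * t)                     # decay: no two cycles are identical
tone /= np.max(np.abs(tone))

# Plot or listen to `pure` vs `tone` to see and hear the difference
```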

We are, after all, talking about “the performance of equipment when playing music”, right? The above piece of music I reference in the second image is about as simple as it gets - the first three notes of the composition played in unison on a piano. No strings, no brass, no percussion and no Strat through a Boss Tu-2, Keeley Katana, Klon Centaur, Ibanez TS808, Electro-Harmonix XO Q-Tron, Pete Cornish Tape Echo, Ross Phaser/Distortion and a Way Huge Aqua-Puss into a Dumble, Two Rock and Fender Band-Master through various Celestions (2).

Just those three notes on the same piano - taken from a slice of time 0.05 seconds in length - are already more complex than the same three tones generated as sine waves over the same time-frame. And that’s not even taking into account the fact that, in the actual piece of music as performed, the sustained C#2 and C#3 decay in amplitude underneath an arpeggiated triplet that moves rhythmically from G#3 to C#4 to E4 until a chord change to B minor.
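(For a sense of scale - assuming a 44.1 kHz sample rate, which is an assumption rather than anything stated above - a 0.05-second slice is 44,100 × 0.05 ≈ 2,205 samples, during which the 69 Hz fundamental completes only about 69 × 0.05 ≈ 3.5 cycles.)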

Music can, of course, be analyzed as a waveform. But that doesn’t make it into a sine wave, no matter how many multiples you might have.


(1) Congrats to those who’ve guessed what the piece of music might be. Because the score is marked pianissimo here, the piano is played very quietly and the S/N ratio of the recording chain itself can become problematic. Since many of the favoured interpretations of this work are 20th-century renditions recorded mostly in analogue, with the associated tape hiss which can itself introduce low-level harmonics, I chose the Pentatone Classics version with Mari Kodama, recorded in DSD with the Meitner ADC, which is subjectively the least compromised in relative sound quality of the ones I’m familiar with, the slightly hollow and phasey piano sound notwithstanding.

(2) Touring guitar rig of John Mayer, Esquire, circa 2014, subject to change without notice.
 
Music can, of course, be analyzed as a waveform. But that doesn’t make it into a sine wave, no matter how many multiples you might have.
Take any music encoder - AAC, MP3, WMA, etc. They all work in the frequency domain. They take your music with its arbitrary waveform, decompose it into the individual frequencies that make it up (using a DCT), perform data reduction, and then convert it all back to the time domain in the decoder/player. What comes out of that system is clearly music, which demonstrates that decomposition into individual frequencies works just as the theory says it does.
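A toy version of that round trip can be sketched in a few lines. Real codecs use overlapping MDCT frames, a psychoacoustic model, quantisation and entropy coding; the keep-only-the-largest-coefficients rule below is just a stand-in for the data-reduction step, and the frame here is simply the three-tone mix from earlier in the thread.

```python
import numpy as np
from scipy.fft import dct, idct

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
frame = sum(np.sin(2 * np.pi * f * t) for f in (69.0, 139.0, 207.0))

# Analysis: transform the time-domain frame into frequency coefficients
coeffs = dct(frame, norm='ortho')

# "Data reduction": keep only the 64 largest coefficients, zero the rest
keep = 64
coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0

# Synthesis: back to the time domain in the "decoder"
decoded = idct(coeffs, norm='ortho')
print(np.max(np.abs(decoded - frame)))       # how far the decoded frame is from the original
```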
 
Music can, of course, be analyzed as a waveform. But that doesn’t make it into a sine wave, no matter how many multiples you might have.

You didn't actually answer the question, but I'll take that as a no.

If you're right, I wonder how audio engineers and designers develop/improve/QC the equipment they build and sell. They must listen to every chip, every tube, every driver. These sub components are, after all, consistent or not based on measurements and chosen on specs. These builders must listen to every component before selling it, and throw those that don't sound like the reference component in the dumpster. Given all of that, high-end is a heck of a bargain.

Tim
 
Here are three tones - 69 Hz, 139 Hz and 207 Hz, equivalent to C#2, C#3 and G#3 respectively - generated as sine waves and combined into a single (mono) waveform ...

Here are the same three notes played by a human being on a piano, recorded in the studio with close and ambient mics, taken from a well-known piece of music and likewise combined into a single (mono) waveform ...

Well, I do not see that you have proven anything, in spite of your caveats. Yes, of course, all acoustic instruments, and even most electronic ones, have their own complex spectrum of harmonics on top of the fundamental. So, comparing pure, electronically generated tones containing no harmonics with an actual instrument containing many is weak as a starting point, though you try to explain it. Why would you even start your argument with that obvious spectral mismatch?

But, sheer complexity of instrument and musical harmonic spectra, in and of itself, does not invalidate Fourier, though you claim, somehow, it does. Sure, it makes the Fourier analysis more complex in the process of trying to get down to the sine/cosine wave analytical components of the waveform, but if I have a choice between believing Fourier and believing you, hmmm. Fourier has lasted for a long time and you have not won the Nobel Prize, not yet, at least.
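For what it’s worth, here is a minimal sketch of that point: a deliberately messy mixture - the three fundamentals discussed above, each with a few harmonics at invented levels, plus a little noise - still resolves cleanly into its sine-wave components under a Fourier transform.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

# A deliberately messy signal: three fundamentals, a few harmonics each
# (levels invented for illustration), plus a little noise
signal = np.zeros_like(t)
for f0 in (69.0, 139.0, 207.0):
    for n, a in ((1, 1.0), (2, 0.4), (3, 0.2)):
        signal += a * np.sin(2 * np.pi * n * f0 * t)
signal += 0.01 * np.random.randn(len(t))

# Fourier analysis still pulls out the constituent frequencies
spectrum = np.abs(np.fft.rfft(signal))
freq_axis = np.fft.rfftfreq(len(t), 1 / fs)
strongest = np.argsort(spectrum)[-8:]        # the eight strongest bins
print(sorted(freq_axis[strongest]))          # fundamentals and harmonics, in Hz
```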
 
A single-tone waveform will produce amplitude over time, yes. ...

... as will any combination of sine waves, as you illustrated.
But the complexity of "real" music compared with sine waves is irrelevant to the point, which is that "the performance of the system when playing music" can be judged by measuring how accurately the "amplitude-over-time" relationship of the output of the system matches the input. You can accurately measure that with sine waves. (I'll grant you'll need more than one.) The goal is "a straight wire with gain". Of course, the devil is in the details of achieving that goal.

Sometimes even a single sine wave will tell us more than music. I often quote the example of playing a 1 kHz test tone off a test LP. Even on the best of turntables, it's not hard to tell the difference between it and a tone from a test CD or generator. The same imperfections are present in every piece of music played on that turntable, but we don't hear them. Or you can add distortions to music - clearly audible when added to simple sine waves - that many listeners either cannot detect or even prefer.
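To make "measure it with sine waves" concrete, here's a minimal sketch of a THD measurement on a 1 kHz tone. The nonlinearity is an invented stand-in for a device under test, not a model of any particular turntable or amplifier mentioned here.

```python
import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 1000.0 * t)           # 1 kHz test tone

# Invented, mildly nonlinear "device under test" (illustrative only)
y = x + 0.01 * x**2 + 0.002 * x**3

n = len(t)
spectrum = np.abs(np.fft.rfft(y))


def bin_of(freq):
    """Index of the FFT bin for `freq` (1 s window, so bin k is k Hz)."""
    return int(round(freq * n / fs))


fundamental = spectrum[bin_of(1000)]
harmonics = np.array([spectrum[bin_of(f)] for f in (2000, 3000, 4000, 5000)])

# THD: harmonic energy relative to the fundamental
thd = np.sqrt(np.sum(harmonics**2)) / fundamental
print(f"THD ≈ {100 * thd:.3f} %")
```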
 
So now, I’m back to thinking about the transfer function of an ideal amplifier. But I don’t see anyone outside of a few people on this forum (and possibly other forums, I guess) arguing that the ideal amplifier actually exists, and that it can be linear in the real world.

As jkeny answered, this still remains the sticking point: Platonic idealism vs Aristotelian realism. On the one hand, Groucho isn’t willing to accept that an audio system isn’t linear from one end to the other, but he introduces a qualifier of approximation in order to hold onto the concept of linearity, even as he admits systems can’t literally be linear in the real world - like, for instance, when we connect our "ideal" solid state amplifier to an actual speaker and play music through it.

It is a perennial dilemma in discussions of this sort: state that a system is linear and open oneself up to the 'gotcha' that a real-world distortion level of -10000000 dB means it is not linear, hence the argument is invalidated; or pre-empt the point with a qualification such as "to all intents and purposes", "not literally", etc., in which case one invokes the 'gotcha' oneself.

A real world amplifier does, indeed, distort the signal (as does the very air between speakers and ears) and to a higher level when driving a difficult load. It is a question of being sensible about what constitutes audible distortion, and also doing something about it. I do know that active speakers greatly reduce the load on the amplifier, so my system is active. Is yours? If not, why not? :)
 
in fact, there's lots of folks that don't like strict linearity.

How do they know? They have never heard a system that aims to be strictly linear* unless the system:
- was using a digital source
- used active, solid state amplification
- was using DSP to correct the drivers in the frequency and time domains
- didn't have any egregious problems with odd dispersion patterns
- was not using vented speakers
- was set up properly

Such systems are not very common! Much more likely that the audiophile has heard linear components in conjunction with other, far-from-linear components that revealed their own weaknesses more as a result.

* or as close as possible.
 
To be fair, I don't think most of us know whether we have figured those things out either :).
 
jkeny said:
I know some folks that prefer their bass a bit muddy for example, they like that sound, etc.

I am not saying that the list above is unequivocally the best sounding (I think it is, but that is another matter). But if you were to try to put together a linear system then an argument could be made for each item in the list being the most linear of the alternatives. This would be paying no heed whatsoever to listening tests, but simply looking at objective measurements and/or design objectives. I mention the latter because often the measurements argument founders on the claim that it is impossible to measure everything. This is true, but by looking inside the black box it may be possible to dispense with measurements and yet to 'know' that a design is the most linear it can be.

Put it all together, and maybe one arrives at the Kii Three, mentioned in another thread.
 
I hear you, but it seems that a bunch of us are arguing with 853guy, which is a bit unfair. Let's take it a bit easy, guys. :)
 

I'm still trying to ascertain 853guy's position. When I do, then maybe I'll argue with him. But I'll be gentle. :)

Tim
 

That is the most honest, transparent-to-agenda post you have ever made. You ought to use it as your sig. ;)

And I say that with the utmost respect and affection.
 

Yeah, I'm not feeling the love...

Tim
 
