853guy said: Well, we need to decide whether we’re discussing a problem of mathematics or a problem of application. In mathematics or physics, linearity is easy to measure, because we have an equation defining exactly what it is and what it is not, and the robustness of that equation means we can represent the relationship between two variables graphically and get a straight line, right? There are countless examples, none of which need to be listed here, as I’m sure you covered this in grade school. So far, it doesn’t appear we have a mathematics problem.
That would be a discussion of theoretical physics and math. Applied math and physics would most definitely include measurements to prove the same. To wit, theoretical physicists postulate many things that applied physicists then try to prove by experimentation and measurement. Higgs thought a boson must exist and showed as much in theory. CERN was built with billions of dollars and, through experimentation and measurement, eventually proved (with high probability) that such a particle does exist. It could just as well have shown that it didn't.
In audio discussions, we are in the applied domain as users, not in theory.
Actually, in the initial post I made before a quick edit, I had the word "description" rather than "application", i.e., "Well, we need to decide whether we’re discussing a problem of mathematics or a problem of description". That is, is there a problem with the maths apropos linearity, or are we using the word "linearity" improperly to describe how a real-world system measures?
I've now changed it in the original post - hope that clarifies it.
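For what it's worth, the mathematical definition being leaned on here - linearity as superposition - is easy to state concretely. Here is a minimal numerical sketch; the "amplifier" and "clipper" below are toy stand-ins I've invented for illustration, not models of any real gear:

```python
import numpy as np

# A system f is linear iff f(a*x + b*y) == a*f(x) + b*f(y) (superposition).
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)
a, b = 2.0, -3.0

def gain(s):
    return 5.0 * s            # toy "ideal amplifier": pure gain, linear

def clipper(s):
    return np.clip(s, -1, 1)  # toy "real-world" stage: clips, nonlinear

print(np.allclose(gain(a*x + b*y), a*gain(x) + b*gain(y)))           # True
print(np.allclose(clipper(a*x + b*y), a*clipper(x) + b*clipper(y)))  # False
```

The pure-gain stage passes the superposition test exactly; the clipping stage fails it, which is the mathematical sense in which a real-world system stops being linear.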
... A time-based art form like music, if it is to be recorded, needs to capture two variables (pitch and amplitude) against one constant (time). That is, music is a relationship between three entities in which two are modulating and one is constant, and can be graphically represented as a waveform or literally represented as a groove in a record. ...
Can you explain this in more detail? In your description above, you say that pitch and amplitude against time can be "graphically represented as a waveform". This only represents two entities, amplitude and time. Granted, you can extract the pitches and amplitudes of the components of the waveform using a Fourier transform. But when talking about (nominally) linear systems, does it not make more sense to talk in terms of amplitude/time?
Phelonious Ponk said: I'm curious about whether or not you think thorough sine wave measurement is indicative of the performance of equipment when playing music. I didn't see the answer to that question anywhere in there.
853guy said: Music can, of course, be analyzed as a waveform. But that doesn’t make it into a sine wave, no matter how many multiples you might have.

Take any music encoder - AAC, MP3, WMA, etc. They all work in the frequency domain. They take your music with its arbitrary waveform, decompose it into the individual frequencies that make it up (using a DCT), perform data reduction, and then convert it all back to the time domain in the decoder/player. What comes out of that system is clearly music, which demonstrates that decomposition into individual frequencies works just as the theory proves it does.
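That decompose-reduce-reconstruct pipeline can be sketched in a few lines. This is not an actual codec - real encoders use a windowed MDCT plus psychoacoustic models - just an FFT stand-in showing that discarding weak frequency components and transforming back still reconstructs the waveform:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.25, 1/fs)                      # 0.25 s test signal
x = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*880*t)

X = np.fft.rfft(x)                                # decompose into frequencies
keep = np.abs(X) >= 0.01 * np.abs(X).max()        # crude "data reduction"
y = np.fft.irfft(np.where(keep, X, 0), n=len(x))  # back to the time domain

err = np.max(np.abs(x - y))                       # reconstruction error
print(err < 1e-6)                                 # True
```

For this two-tone test signal the round trip is essentially lossless; a real encoder's gains come from discarding much more, guided by what the ear won't miss.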
A single-tone waveform will produce amplitude over time, yes. Like I mention above, the formula for a sine wave is y(t) = A·sin(ωt + φ), where A is the amplitude, ω the angular frequency and φ the phase, as a function of time (t). While we may not be able to deduce the exact pitch from a cursory glance, the number of oscillations per subdivision of time will allow us to hazard a guess at its relative pitch - the fewer oscillations per subdivision, the lower the pitch; the more oscillations, the higher the pitch. What’s more, a waveform that contains more than one tone at the same amplitude will differ significantly from that of a single steady-state tone, so we can immediately deduce that there’s more than one pitch at play (and in the case of the example below, three). While we may not be able to deduce much more, we can, as you say, use a Fourier transform to extract specific information about the pitches, and perhaps I should have added that in the original post. That’s what you get for hasty typing, I guess.
Let’s not worry about “thorough” for now, let’s just keep it simple.
Here’s three tones - 69Hz, 139Hz and 207Hz equivalent to C#2, C#3 and G#3 respectively - generated as sine waves and combined together as a single (mono) waveform:
View attachment 22167
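For anyone who wants to reproduce that first image, the three tones can be generated and then pulled back apart with a Fourier transform, as discussed above. A minimal sketch:

```python
import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1/fs)                        # one second at 48 kHz

# The three pitches from the example: C#2, C#3 and G#3 (approx.)
tones = [69, 139, 207]
signal = sum(np.sin(2*np.pi*f*t) for f in tones)   # combined mono waveform

# Recover the pitches: the three strongest bins of the spectrum
mag = np.abs(np.fft.rfft(signal))
bins = np.fft.rfftfreq(len(signal), 1/fs)
peaks = sorted(float(f) for f in bins[np.argsort(mag)[-3:]])
print(peaks)                                       # [69.0, 139.0, 207.0]
```

With a one-second window the FFT bin spacing is exactly 1 Hz, so the three component frequencies come back exactly; shorter windows, like the 0.05-second slice discussed below, trade frequency resolution for time resolution.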
Here’s the same three notes played by a human being on a piano recorded in the studio with close and ambient mics from a well-known piece of music, also combined to a single (mono) waveform (1):
View attachment 22168
Already it’s apparent there are some significant differences, obvious from the appearance of the waveforms. The pure tones generated in the first example have relatively symmetrical shapes showing smooth, repetitive oscillation. The second example is far less symmetrical in shape, and the transitions from oscillation to oscillation are much less smooth and consistent. This is to be expected: not only will the mics be picking up direct and indirect sound at multiple positions relative to the piano, affecting the overall phase relationship of the mixed signal, but the piano itself will be generating its own self-noise and harmonic overtones from its multiple strings, frame and soundboard/casing.
The combination of these factors is why steady-state sine waves lack the complexity of those produced by musical instruments. In music we cannot ever hear just the individual tones/pitches - we must always also hear the instrument that produced those tones, the effect on the instrument of the room it was in (unless close-miked in hypercardioid, and even then…), the room itself, not to mention the recording chain and its self-noise. Even if we were to make music solely in the digital realm using sampled instruments (bypassing the room, mic, and mic-pre altogether) the simple fact that all instruments have an inherently complex timbral character of their own when producing the same note(s) means they’re not ever going to look like steady-state tones. This is especially true of instruments that are not tuned to intervallic pitches (cymbals, drums, etc).
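The overtone point can be illustrated numerically. The "piano-like" tone below is a made-up toy - just a fundamental plus harmonics rolling off as 1/n - and a real piano is messier still (inharmonicity, hammer noise, sympathetic resonance), but even this crude model already has many spectral components where the test tone has exactly one:

```python
import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1/fs)

pure = np.sin(2*np.pi*69*t)  # steady-state C#2 test tone

# Toy "piano-like" C#2: fundamental plus harmonics rolling off as 1/n
rich = sum((1/n)*np.sin(2*np.pi*69*n*t) for n in range(1, 9))

def component_count(x):
    """Number of spectral bins within 40 dB of the strongest one."""
    mag = np.abs(np.fft.rfft(x))
    return int(np.sum(mag > 0.01 * mag.max()))

print(component_count(pure), component_count(rich))  # 1 8
```

The two waveforms would also look visibly different plotted over the same time slice, which is exactly the contrast between the two attachments above.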
We are, after all, talking about “the performance of equipment when playing music”, right? The above piece of music I reference in the second image is about as simple as it gets - the first three notes of the composition played in unison on a piano. No strings, no brass, no percussion and no Strat through a Boss Tu-2, Keeley Katana, Klon Centaur, Ibanez TS808, Electro-Harmonix XO Q-Tron, Pete Cornish Tape Echo, Ross Phaser/Distortion and a Way Huge Aqua-Puss into a Dumble, Two Rock and Fender Band-Master through various Celestions (2).
Just those three notes on the same piano - taken from a slice of time 0.05 seconds in length - are already more complex than the same three tones generated as sine waves over the same time-frame. And that’s not even taking into account the fact that in the actual piece of music as performed, the sustained C#2 and C#3 decay in amplitude underneath an arpeggiated triplet that moves rhythmically from G#3 to C#4 to E4 until a chord change to B minor.
Music can, of course, be analyzed as a waveform. But that doesn’t make it into a sine wave, no matter how many multiples you might have.
(1) Congrats to those who’ve guessed what the piece of music might be. Given the score is marked pianissimo here, the piano is played very quietly, and the S/N ratio of the recording chain itself can become problematic. Because many of the favoured interpretations of this work are 20th-century renditions recorded mostly in analogue, with its associated tape hiss which can itself introduce low-level harmonics, I chose the Pentatone Classics version with Mari Kodama, recorded in DSD with the Meitner ADC, which is subjectively the least compromised in relative sound quality of the ones I’m familiar with, the slightly hollow and phasey piano sound notwithstanding.
(2) Touring guitar rig of John Mayer, Esquire, circa 2014, subject to change without notice.
853guy said: A single-tone waveform will produce amplitude over time, yes. ...
So now, I’m back to thinking about the transfer function of an ideal amplifier. But I don’t see anyone outside of a few people on this forum (and possibly other forums, I guess) arguing that the ideal amplifier actually exists, and that it can be linear in the real world.
As jkeny answered, this still remains the sticking point: Platonic idealism vs Aristotelian realism. On the one hand, Groucho isn’t willing to accept that an audio system isn’t linear from one end to the other, but introduces a qualifier of approximation in order to hold onto the concept of linearity, even as he admits systems can’t literally be linear in the real world - for instance, when we connect our “ideal” solid-state amplifier to an actual speaker and play music through it.
In fact, there are lots of folks who don't like strict linearity.
To be fair, I don't think most of us know whether we have figured those things out either.

How do they know? They have never heard a system that aims to be strictly linear* unless the system:
- was using a digital source
- used active, solid state amplification
- was using DSP to correct the drivers in the frequency and time domains
- didn't have any egregious problems with odd dispersion patterns
- was not using vented speakers
- was set up properly
Such systems are not very common! It is much more likely that the audiophile has heard linear components in conjunction with other, far-from-linear components that revealed their own weaknesses as a result.
* or as close as possible.
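As an aside, "using DSP to correct the drivers in the frequency and time domains" can be made concrete with a toy example. The driver response below is an invented first-order roll-off, and the correction is a regularized inverse filter - a sketch of the general idea, not how any particular product implements it:

```python
import numpy as np

fs = 48000
n = 4096
freqs = np.fft.rfftfreq(n, 1/fs)

# Invented driver response: first-order roll-off above 2 kHz
H = 1.0 / (1.0 + 1j * freqs / 2000.0)

# Regularized inverse filter: boosts what the driver loses, without
# blowing up where the driver's response is very weak
eps = 1e-3
C = np.conj(H) / (np.abs(H)**2 + eps)

flattened = np.abs(H * C)                  # driver + correction combined
print(flattened.min() > 0.8)               # True: response is now near-flat
```

The regularization term `eps` is the practical concession: a perfect inverse would demand unbounded gain where the driver barely responds, so real correction systems always stop short of strictly flat.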
jkeny said: I know some folks that prefer their bass a bit muddy, for example - they like that sound, etc.
I hear you, but it seems that a bunch of us are arguing with 853guy, which seems a bit unfair. Let's take it a bit easy, guys.

I am not saying that the list above is unequivocally the best sounding (I think it is, but that is another matter). But if you were to try to put together a linear system, then an argument could be made for each item in the list being the most linear of the alternatives. This would pay no heed whatsoever to listening tests, but simply look at objective measurements and/or design objectives. I mention the latter because the measurements argument often founders on the claim that it is impossible to measure everything. This is true, but by looking inside the black box it may be possible to dispense with measurements and yet 'know' that a design is the most linear it can be.
Put it all together, and maybe one arrives at the Kii Three, mentioned in another thread.
I hear you, but it seems that a bunch of us are arguing with 853guy, which seems a bit unfair. Let's take it a bit easy, guys.
I'm still trying to ascertain 853guy's position. When I do, then maybe I'll argue with him. But I'll be gentle.
Tim
That is the most honest, transparent-to-agenda post you have ever made. You ought to use it as your sig.
And I say that with the utmost respect and affection.