Why Some Audiophiles Fear Measurements

Yeah, totally agree Don, even about measurements being more accurate; the limitation is the test procedure and its scope, and full music segments will never be used as test signals IMO, for exactly the reasons you state.
But with major chords at specific points on a scale we could cover most of the frequency range while still using actual musical notes, without the stimulus being as random or unfocused; the sound could also be derived from a synthesised instrument so that its waveform is controlled.
In a way it is interesting, because certain instruments can generate 40-50 harmonics between 20 Hz and 20 kHz that are incredibly complex.
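To make that a bit more concrete, here is a minimal sketch of what such a chord-based multitone stimulus might look like; the note choices, the ten-harmonic limit and the 1/k amplitude rolloff are illustrative assumptions on my part, not a proposal for a standard test signal.

```python
import numpy as np

def note_freq(midi_note):
    """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def chord_multitone(midi_notes, n_harmonics=10, fs=48000, dur=1.0):
    """Sum a harmonic series for each chord note, keeping everything below Nyquist."""
    t = np.arange(int(fs * dur)) / fs
    sig = np.zeros_like(t)
    for note in midi_notes:
        f0 = note_freq(note)
        for k in range(1, n_harmonics + 1):
            f = k * f0
            if f < fs / 2:                              # stay within the audio band
                sig += np.cos(2 * np.pi * f * t) / k    # crude 1/k rolloff, vaguely instrument-like
    return sig / np.max(np.abs(sig))                    # normalize to full scale

# C major triads spread across the scale: C2-E2-G2, C4-E4-G4, C6-E6-G6
stimulus = chord_multitone([36, 40, 43, 60, 64, 67, 84, 88, 91])
```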
Relating to IM distortion, I am not sure you read the link I provided to the article by Nelson Pass; I find it very good for giving a high-level understanding of the various distortions and their implications in different topologies, while also showing the difference between simple and complex IM distortion.
It is him who says:
If you want the peak distortion of the circuit of figure 13 to remain below .1% with a complex signal, then you need to reduce it by a factor of about 3000. 70 dB of feedback would do it, but that does seem like a lot.
By contrast, it appears that if you can make a single stage operate at .01% 2nd harmonic with a single tone without feedback, you could also achieve the .1% peak in the complex IM test.
I like to think the latter would sound better.
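For anyone wondering where the 70 dB figure comes from, the factor of ~3000 and ~70 dB are the same quantity in two notations; a couple of lines show the conversion (the final 0.1%/0.01% margin is my own reading of his comparison, not Pass's wording):

```python
import math

def db_from_factor(factor):
    """Decibels corresponding to a linear reduction (or gain) factor."""
    return 20 * math.log10(factor)

print(db_from_factor(3000))        # ~69.5 dB, i.e. roughly the 70 dB of feedback Pass mentions
print(db_from_factor(0.1 / 0.01))  # 20 dB: the margin between 0.1% and 0.01%
```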

He is a supporter of the zero feedback architecture and makes some good points:
http://www.firstwatt.com/pdf/art_dist_fdbk.pdf

If you are really interested in the alternative view on negative feedback from one of its supporters, I could try to find Bruno Putzeys' presentation on the subject; I think Walt Jung may have touched on it as well.

Cheers
Orb
 
This is something I touched on in quite a lot of detail, but I appreciate my posts may suck, bah.
It is not that he has to deal with a 5th parameter, but rather that he has to validate his 4 parameters.
Validation requires taking the other perceptual descriptions (so far we have tone and bass explained by Ethan, and we already know how those relate to FR anyway); look back at Jeff's article/opinion that frequency response is not the only measurement, as it provides several other descriptions we use for what we hear in music.
So validation means taking his parameters, checking multiple products with the same and different measurements associated with those parameters, and correlating them to listener perception (with listeners trained to use the specific descriptive words).
That is a pig of a setup (due to the blind/DBT test protocol and definition; look at those done in other scientific studies, or at HK, for a feel).
As I also mentioned earlier, the alternative first step would be to use specific reviewers and correlate their words to the measurements associated with his 4 parameters.
This would have to involve products with a range of reviewer ratings and a range of measurements, so that a trend and then a model can be generated.
However, the sticking point is that subjective perception in any test, and even reviewers, are in Ethan's opinion flawed.
But without any correlation between those 4 parameters (beyond bass/tone) and what listeners describe hearing, and how that is affected (trend and behaviour) by different measurements (good and bad), we just have speculation.
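As a rough illustration of the correlation step I mean, here is a minimal sketch; the product measurements and panel ratings below are entirely made-up numbers, and a real study would need controlled listening, many more products and proper statistics.

```python
import numpy as np

# Hypothetical data: one row per product. Measured THD+N (dB, lower is better) and a
# trained panel's mean rating for one descriptor (0-10). The numbers are illustrative only.
thd_n_db  = np.array([-95.0, -88.0, -80.0, -72.0, -65.0, -60.0])
harshness = np.array([  1.2,   1.5,   2.1,   3.8,   5.0,   6.3])

# Pearson correlation between the measurement and the perceptual descriptor
r = np.corrcoef(thd_n_db, harshness)[0, 1]

# A first-order trend (least-squares line) as the crudest possible "model"
slope, intercept = np.polyfit(thd_n_db, harshness, 1)
print(f"r = {r:.2f}, harshness ~= {slope:.2f} * THD+N(dB) + {intercept:.1f}")
```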

I hope this helps; if not, looking back through the thread covers it in a bit more detail. I am just rushing/summarising now, as we are in a reiteration cycle of earlier posts.
Thanks
Orb

Dear Orb: I agree. If Ethan continues arguing without giving the specific answer we are waiting for, then IMHO he is only confirming that he does not have that answer at all, and there is nothing wrong with that: no one dies over it.


Ethan has already questioned everyone, yet when you ask him one question he has so far decided not to give you a specific answer, the one I know we are all now waiting for.

Ethan, do you have that answer? Or must we close the "circle"?

Regards and enjoy the music,
Raul.
 
Hi Raul.

Dear Ron: If you tell me that today is Saturday when I think/know it is Friday, then IMHO I have the right to ask you: please let me know why today is Saturday? When you give me your answer, then you can ask me whatever you want about that Friday, but first give an answer.
Absolutely. No disagreement from me. The burden of proof is on the person making a positive assertion.

What if Ethan or any other person posts: there are only three audio measurements that have audio validity. What would your answer be to that "absolute" statement? Think about it.
This hypothetical is no different than your Friday or Saturday hypothetical. The burden of proof is on the person making the positive assertion.

From a scientific point of view, this is simple. Formulate a hypothesis, test it and analyze the result(s). If Ethan says today is Saturday, or more pertinent to this thread if he says there are only 4 parameters, then that is his hypothesis, now he must test it and analyze the results. If the results are consistent with his hypothesis, then he is free to claim what he has and certainly is on more solid ground than one who hypothesizes without any data to support it.

What the forum seems to be asking him is for his data, i.e., what evidence or tests did he run which in his view confirmed his hypothesis. I've read general discussion, with members offering other parameters and insofar as I can recall Ethan's response has been that these additional parameters are included as subparts or subparameters (if I may use that word) of his 4 parameters.

I have no issue with forum members asking for a more detailed (instead of general) discussion. Maybe his methodology was faulty. Or not. Maybe he misinterpreted the data. Or not. Maybe there were other tests he needed to run. Or not.

BUT. If a person wants to disagree with Ethan's position of 4 parameters and offer up a 5th, then that now becomes a positive assertion and the burden of proof falls squarely upon the shoulders of the person making this positive assertion. Under no circumstances is it Ethan's burden to prove there is no 5th parameter - to state otherwise is to commit a logical fallacy.
 
Hi Ron, it would help if you could mention what those subparts or subparameters are and how they fit into the context of the 4 parameters, because I and others seem to be missing this, or feel that what has been said historically does not fit that context.

Thanks
Orb
 
I think core to the discussion is understanding that a measurement is just data generated by a defined test protocol/procedure, using the relevant tools and their setup/configuration (this is explained in more detail in my previous posts).
I'd like to think we're not in elementary school here. We don't need to state the obvious, do we?

BTW I cannot say what any other parameter would be, in the same way that Jeff states FR is not the only measurement in the article I linked earlier.
Also, look back and you will see I have said I cannot prove it, so it is just a hypothesis (or speculation, which is a better word, as shown by microstrip), as much as Ethan's is.
No. Your position, at least as you state it here, is far, far different from microstrip's. microstrip is asking for the data, as I described in my last post. Your position is, well, frankly I'm not even sure since you have no hypothesis.

Hope this does not annoy you, but I am going to be very cheeky here :)
The use of "I don't believe you have......" suggests that you have possibly made a judgement call and subconsciously communicated it.
I notice we tend to use "I believe" or "I don't believe" when we have not processed all the information that gets mentally weighed up to arrive at a decision/judgement call.
I do the same myself, and if you look for it on forums it is noticeable how this mechanism kicks in, so take it as a bit of fun and cheek, but keep an eye on how you and others use "believe".
I laugh when I catch myself doing it, and it always relates to what I describe above :)

When we do process everything presented, in my experience we state it either as an opinion or as experience/knowledge, rather than as belief (which we associate with the unknown, or with not enough information).
Stay on topic, okay? Psychoanalysis over the internet is, well ...
 
Just to add, Mike: those things you referred to (type of piano, drumhead etc.) are not inherent in our systems. Well, that was poorly stated. What I meant was, the ONLY place that info comes from is the recording.

When I view it from that angle, it helps (me at least) put Ethan's four parameters into perspective. And, in some manner, it all revolves around being as true to the source as possible: the system should introduce as few departures from accurately reproducing the recording as it can.

So we look at the native FR of the speakers, their decay characteristics, what the room does, and the other sources of distortion in the chain. Of course the FR of the speakers does not say 'Steinway or Yamaha', but we can start to draw conclusions about how accurately they may reproduce the sound of a Yamaha or Steinway.

Well, that's my take anyway, and it is part of the answer to the oft-posed question 'how can we tell how it images'. Again, any cues of imaging (or hall ambience etc.) can ONLY come from the recording, so again we are back to how well the speaker can accurately reproduce the recording. (I know I always say speaker; that's just my thing, others can insert whichever component they think is important.)
 
Mike, check your email. :p

I really want to give a response to your post, but I got a call while I was typing and I have to stop by a friend's place. Do me a favour: think about it not only from the standpoint of reproduction but from recording. How would you compare two recordings of skin on a drum head, or rosin on the bow? The same elements would be described in comparing the reproduction. At the end of the day, measuring speakers is just recording the reproduction, rather than the original, and comparing it to the expected output. From a measurement standpoint you don't really do things like what you described to benchmark a system.
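A minimal sketch of that "record the reproduction and compare it to the expected output" idea, with toy signals standing in for the recording and the microphone capture (none of this reflects any actual test the poster ran):

```python
import numpy as np

def response_deviation_db(reference, recorded, fs, n_fft=8192):
    """Per-bin magnitude difference (dB) between what was played and what the mic captured."""
    window = np.hanning(n_fft)
    ref_spec = np.abs(np.fft.rfft(reference[:n_fft] * window))
    rec_spec = np.abs(np.fft.rfft(recorded[:n_fft] * window))
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return freqs, 20 * np.log10((rec_spec + 1e-12) / (ref_spec + 1e-12))

# Toy stand-ins: the reproduction rolls off the 5 kHz component relative to the source
fs = 48000
t = np.arange(fs) / fs
reference = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
recorded  = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 5000 * t)
freqs, dev_db = response_deviation_db(reference, recorded, fs)
```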

Anyway, my friend is here so I have to go; if I remember when I get back, I'll go into more detail.
 
...I think core to the discussion is understanding that a measurement is just data generated by a defined test protocol/procedure, using the relevant tools and their setup/configuration (this is explained in more detail in my previous posts)...

Really, this is very wrong. A measurement is really an application of the scientific method. A measurement involves an intent, a hypothesis, assumptions (known and unknown) and appropriate procedures for testing them, observables (which you refer to as data) and subsequent evaluation. The observables must be subjected to skeptical scrutiny, and their consistency with the hypothesis and appropriateness to the intent evaluated.

Simple example based on real life: the fuse in your amp keeps blowing. You intend to fix the problem and in this context want to diagnose the cause. You hypothesize that there is too much voltage coming in on the line, which causes overcurrent in the amp. You take a voltmeter off the shelf and observe the reading. It reads 120 V, so you conclude overvoltage is not the problem. But you are wrong, because you chose a voltmeter that was sensitive only to 60 Hz. This inadvertent assumption caused inappropriate observations, since there are harmonics on the line and that is what is causing the overvoltage. You only measure the true and relevant voltage when you refine your hypothesis and re-examine your assumptions.
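A toy numerical version of that story, assuming made-up harmonic levels, shows how a 60 Hz-only meter under-reports a line voltage that carries strong harmonics:

```python
import numpy as np

fs = 100_000                        # sample rate high enough to capture line harmonics
t = np.arange(fs) / fs              # one second of samples
f_line = 60.0

# Hypothetical line voltage: 120 V RMS fundamental plus strong 5th and 7th harmonics
v = (120 * np.sqrt(2) * np.sin(2 * np.pi * f_line * t)
     + 40 * np.sqrt(2) * np.sin(2 * np.pi * 5 * f_line * t)
     + 25 * np.sqrt(2) * np.sin(2 * np.pi * 7 * f_line * t))

true_rms = np.sqrt(np.mean(v ** 2))                # what a true-RMS meter would see

# What a meter sensitive only to 60 Hz reports: the fundamental component alone
spectrum = np.fft.rfft(v) / len(v)                 # 1 s window -> 1 Hz bins
fundamental_rms = abs(spectrum[60]) * np.sqrt(2)   # bin 60 = the 60 Hz line

print(f"60 Hz-only reading: {fundamental_rms:.1f} V, true RMS: {true_rms:.1f} V")
```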

In my experience, there is no such measurement as simply "taking data" unless what you are doing is very routine.

Actually, in real life the "fuse" in question was a $1M transformer at an electrical substation which kept blowing. The brainiacs in charge swore on a stack of bibles that they were measuring the voltage and current and no way, no how was it out of spec. But downstream there was a large industrial induction furnace with molten, conducting metal bouncing around in it, feeding back enough current to routinely fry the transformer.
 
I'd like to think we're not in elementary school here. We don't need to state the obvious, do we?


No. Your position, at least as you state it here, is far, far different from microstrip's. microstrip is asking for the data, as I described in my last post. Your position is, well, frankly I'm not even sure since you have no hypothesis.


Stay on topic, okay? Psychoanalysis over the internet is, well ...

Sorry Ron, but we are going in circles. I HAVE set out asking for data, and not long ago I tried to explain in a summary what correlation is needed and what needs to be done; both microstrip and Raul seem to understand what I am saying, so I am going to drop out, as it seems to me this is now turning into an argument.
Thanks
Orb
 
Really, this is very wrong. A measurement is really an application of the scientific method. A measurement involves an intent, a hypothesis, assumptions (known and unknown) and appropriate procedures for testing them, observables (which you refer to as data) and subsequent evaluation. The observables must be subjected to skeptical scrutiny, and their consistency with the hypothesis and appropriateness to the intent evaluated.

Hi Smokester, well I can only go by my years of engineering and being required to do some pretty detailed testing (not audio) that was not routine.

We can argue semantics, but the point to get across, for general posters who think either that measurements cannot show us everything or that measurements show us all, is the following (a toy sketch follows the list):
1. Measurements involve a defined test procedure/process, including its scope (you state this yourself).
2. The defined tests can also involve a tool to generate or trigger a specific response/behaviour/trait.
3. Tests also involve a tool/probe (a measuring instrument, if you like) to measure the parameter as a quantitative value with a unit of measurement; in other words a source of data, or data acquisition.
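Purely as an illustration of that three-part breakdown (none of this is a real test spec, and the names are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class Measurement:
    """One way to picture the breakdown above: a defined procedure with a scope,
    an optional stimulus generator, and a probe that turns the response into a number."""
    name: str
    scope: str                                    # 1. the defined procedure and its scope
    stimulus: Optional[Callable[[], np.ndarray]]  # 2. tool that triggers the behaviour (None if passive)
    probe: Callable[[np.ndarray], float]          # 3. instrument yielding a quantitative value
    unit: str

def thd_probe(response: np.ndarray) -> float:
    """Placeholder analysis; a real THD probe would compare harmonic bins to the fundamental."""
    return 0.0

thd_at_1khz = Measurement(
    name="THD at 1 kHz",
    scope="1 kHz sine at rated level, 20 Hz-20 kHz analysis bandwidth",
    stimulus=lambda: np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000),
    probe=thd_probe,
    unit="% THD",
)
```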

I do see one fundamental mistake I made, though: I posted as if there is always a tool to generate or trigger a specific response, whereas purely passive analysis/measurement is also quite possible, depending on the circumstances.
What I am suggesting, and have been, does fit in with various articles, including presentations by engineers at IEEE conferences and material from companies that make measurement tools, and to a much lesser extent with what John Atkinson or Paul Miller explain at times about testing audio equipment.

That is my take on it, and hopefully it helps other posters in general. With that, I feel it is best if I let this drop from now on, due to the possibility of a reiteration cycle that becomes ever more confusing as the original intent and context are lost.

Cheers
Orb
 
Orb

Not meant to be ironic, but what's your take on measurements then?
 
Orb

Not meant to be ironic, but what's your take on measurements then?

Hehehe, irony is cool so NP anyway.
I would say go back to the beginning of how the last zillion pages went; specifically post no 417 and the initial posts from me, Ethan and some other members from there, where it then expanded.
Edit:
Just to add something I have only touched on once: in my opinion time/interval resolution is also key in many cases. I briefly touched on the simple frequency response vs the spectral decay (waterfall) plot and how the two do not necessarily reflect the same information (take speakers): cabinet resonance and driver breakup can show up quite strongly, with specific behaviour, on a spectral decay waterfall plot while the speaker still measures low distortion at 90 dB @ 1 m.
Two such speakers can have exactly the same distortion figure and yet their waterfall plots can differ quite dramatically.
This is just a simple example, but it is applicable in many other instances, including the analysis of partials/harmonics of instruments and sounds, or of harmonic distortion patterns/trends that some may not have considered (I think Olsen did a study on harmonic distortion patterns as they occur in both the time and frequency domains for certain amp circuit designs).
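A minimal sketch of how a cumulative spectral decay (waterfall) view differs from a single frequency response, using a toy impulse response with a decaying 3 kHz resonance standing in for driver breakup; the windowing here is deliberately crude compared with what dedicated measurement packages do:

```python
import numpy as np

def cumulative_spectral_decay(impulse_response, fs, n_slices=30, slice_step=0.0002):
    """Crude CSD: re-run the FFT with the start of the impulse response progressively
    advanced, so each slice shows what is still ringing t seconds after the impulse."""
    n_fft = len(impulse_response)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    slices = []
    for i in range(n_slices):
        start = int(i * slice_step * fs)
        windowed = np.zeros(n_fft)
        windowed[:n_fft - start] = impulse_response[start:]
        slices.append(20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12))
    return freqs, np.array(slices)   # rows = time offsets, columns = frequency bins

# Toy impulse response: a fast main decay plus a slowly decaying 3 kHz resonance
fs = 48000
t = np.arange(2048) / fs
ir = np.exp(-t * 3000) + 0.1 * np.sin(2 * np.pi * 3000 * t) * np.exp(-t * 400)
freqs, csd = cumulative_spectral_decay(ir, fs)
```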

Cheers
Orb
 
Your 4 parameters are not necessarily the only ones involved with sound reproduction (we have both put our points across, so no need for me to reiterate them).

What else is there? I do not recall you or anyone else showing anything beyond those four. I'm sure I've asked at least half a dozen times so far in this thread alone.

By using 2 of your own parameters I have been able to argue the case, IMO (and you now seem to agree), that existing test procedures and their measurements may not be enough.

I never agreed that measuring the four standard parameters is not enough. Reading more of Nelson Pass's article now, I don't see anything that disagrees with what I've said, or that supports the notion that there are more than four parameters. Yes, when you measure IMD for two tones, the total distortion may be lower than with more than two tones. So what? The obvious solution is to over-spec a purchase and insist that a device have less IM distortion than you'd otherwise have considered. So instead of assuming 0.1 percent is acceptable, buy only amps having 0.01 percent or less. None of this changes the basic premise that everything affecting audio fidelity is known and can be measured using current knowledge.
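To illustrate the two-tone vs. many-tone point in isolation, here is a toy cubic nonlinearity fed first two tones and then a denser multitone; it is not a model of any real amplifier, and the only claim is that the number of intermodulation products grows sharply as the stimulus gets more complex:

```python
import numpy as np

def im_product_count(freqs_hz, fs=96000, k3=0.01, thresh_db=-120):
    """Pass equal-amplitude tones through a weak cubic nonlinearity and count the
    spectral lines it adds (everything above the threshold that is not an input tone)."""
    n = fs                                     # 1 s window -> 1 Hz bins, tones land exactly on bins
    t = np.arange(n) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs_hz) / len(freqs_hz)
    y = x + k3 * x ** 3                        # toy 3rd-order nonlinearity
    spec_db = 20 * np.log10(np.abs(np.fft.rfft(y)) / (n / 2) + 1e-15)
    return sum(1 for f in range(1, fs // 2)
               if spec_db[f] > thresh_db and f not in freqs_hz)

print(im_product_count([1000, 1100]))                        # two-tone IMD: a handful of products
print(im_product_count([100, 350, 900, 2100, 5200, 9100]))   # multitone: far more products
```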

if the test procedure's scope and definition are not enough, how do we prove with existing data that the only things affecting audio reproduction (those descriptions mentioned by Jeff and a select few more) are your 4 parameters?

I honestly don't get your point here. If you believe there are more than four parameters, just tell us what they are. Don't beat around the bush. But if you can't explain what more there might be, I have to ask why you're so adamant that there are more than just those four.

--Ethan
 
No one should be asked to prove a negative - but you are asked to prove a positive. You already suggested that you have experimental evidence of it, even based on listening tests, but you did not supply any verifiable details on it.

Please tell me specifically what "positive" you are asking for and I'll do my best to comply.

--Ethan
 
Ethan has set more than that - he asserted values for at least two of these parameters: 0.1 dB flatness from 20 Hz to 20 kHz, and distortion -80 dB down, for inaudibility. Unhappily we still do not have absolute values for them - these are relative values.

In this case, relative values are the correct way to express the data. It's correct for distortion and noise because of the masking effect, and it's correct for frequency response because, well, because that's how we hear! Absolute values are also available for the audibility of time-based errors such as wow and flutter. So we do in fact have absolute values for all of this, just that the "absolute" value for some things is relative to the signal.
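The arithmetic behind "relative to the signal" is simple enough to show in a couple of lines; the playback level at the end is illustrative, not a claim about any particular system:

```python
import math

def percent_to_db(pct):
    """Distortion given as a percentage of the signal, expressed in dB relative to it."""
    return 20 * math.log10(pct / 100)

print(percent_to_db(0.01))   # 0.01 % -> -80 dB, the "80 dB down" figure
print(percent_to_db(0.1))    # 0.1 %  -> -60 dB

# Because the spec is relative, the absolute level of the residual simply tracks the music:
# 80 dB below a 70 dB SPL passage puts the residual around -10 dB SPL.
print(70 - 80)
```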

Although not directly related, consider the effect of dither noise in digital recorders. It affects only the LSB in 16-bit audio, and it was shown to be audible.

Dither is directly related, but the way it's presented above is incorrect. Yes, it's possible to hear things at the 16th bit, but not if all 15 other bits are active! If you have a musical passage whose level is around -40 dBFS, and you turn the volume up unnaturally loud, so loud you'd blow out your speakers when a normal-level passage comes along, you might be able to hear the effect of dither. But you can't hear it on music that's near full scale when played at non-damaging volume levels.

If you believe you can hear the effect of dither on normal music, please read my Dither Report and tell us which files are dithered and which are truncated. Or email me your guesses and I'll post them here.
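For anyone who wants to see what the comparison involves, here is a minimal sketch of 16-bit requantization with and without TPDF dither, using a very quiet test tone; it only illustrates the mechanism and is not a recreation of the files in the Dither Report:

```python
import numpy as np

def to_16bit(x, dither=True, rng=np.random.default_rng(0)):
    """Requantize a float signal in the range -1..1 to 16 bits, optionally with TPDF dither."""
    q = 2 ** 15                                                        # 16-bit step scale
    d = (rng.random(len(x)) - rng.random(len(x))) if dither else 0.0   # TPDF dither, +/-1 LSB
    return np.round(x * q + d) / q

fs = 48000
t = np.arange(fs) / fs
quiet_tone = 10 ** (-60 / 20) * np.sin(2 * np.pi * 1000 * t)   # a -60 dBFS tone

undithered = to_16bit(quiet_tone, dither=False)   # quantization error correlated with the signal
dithered   = to_16bit(quiet_tone, dither=True)    # error randomized into a benign noise floor
```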

--Ethan
 
This is something I touched on in quite a lot of detail, but I appreciate my posts may suck, bah.
It is not that he has to deal with a 5th parameter, but rather that he has to validate his 4 parameters.
Validation requires taking the other perceptual descriptions (so far we have tone and bass explained by Ethan, and we already know how those relate to FR anyway); look back at Jeff's article/opinion that frequency response is not the only measurement, as it provides several other descriptions we use for what we hear in music.
So validation means taking his parameters, checking multiple products with the same and different measurements associated with those parameters, and correlating them to listener perception (with listeners trained to use the specific descriptive words).
That is a pig of a setup (due to the blind/DBT test protocol and definition; look at those done in other scientific studies, or at HK, for a feel).
As I also mentioned earlier, the alternative first step would be to use specific reviewers and correlate their words to the measurements associated with his 4 parameters.
This would have to involve products with a range of reviewer ratings and a range of measurements, so that a trend and then a model can be generated.
However, the sticking point is that subjective perception in any test, and even reviewers, are in Ethan's opinion flawed.
But without any correlation between those 4 parameters (beyond bass/tone) and what listeners describe hearing, and how that is affected (trend and behaviour) by different measurements (good and bad), we just have speculation.

I hope this helps; if not, looking back through the thread covers it in a bit more detail. I am just rushing/summarising now, as we are in a reiteration cycle of earlier posts.
Thanks
Orb

None of that is needed. All you have to do to make your point is list more parameter categories than the four I have stated.

--Ethan
 
If Ethan continues arguing without giving the specific answer we are waiting for, then IMHO he is only confirming that he does not have that answer at all, and there is nothing wrong with that: no one dies over it. Ethan has already questioned everyone, yet when you ask him one question he has so far decided not to give you a specific answer, the one I know we are all now waiting for. Ethan, do you have that answer?

If a person wants to disagree with Ethan's position of 4 parameters and offer up a 5th, then that now becomes a positive assertion and the burden of proof falls squarely upon the shoulders of the person making this positive assertion. Under no circumstances is it Ethan's burden to prove there is no 5th parameter - to state otherwise is to commit a logical fallacy.

I hope listing these two paragraphs adjacent shows the illogic in some of these posts. If you ask me something and I answer, please do not accuse me 20 posts later of not having answered. It's futile - and even a bit depressing - when basic logic is ignored. A thread that could have ended successfully many posts back with all in agreement is still going on, and I'm still answering the same questions, and explaining the same logical fallacies, repeatedly. :confused:

--Ethan
 
Hi Ron, it would help if you could mention what those subparts or subparameters are and how they fit into the context of the 4 parameters, because I and others seem to be missing this, or feel that what has been said historically does not fit that context.

I've already linked in this thread to my article and video explaining the four parameters and their subsets in detail:

Audiophoolery
AES Audio Myths Workshop


--Ethan
 
Please tell me specifically what "positive" you are asking for and I'll do my best to comply.

--Ethan

Quoting you:

There are four parameters that affect audio reproduction:

Frequency response
Distortion
Noise
Time-based errors

Of course, there are subsets, such as hum and buzz and LP crackles under noise.
(...)

The four parameters I listed above indeed tell everything needed about an amplifier circuit. Amplifiers are much simpler than loudspeakers! So for an amplifier all that's needed is frequency response, distortion, and noise. In this case I'll put ringing under frequency response.
(...)

NOW YOUR POSITIVE STATEMENT
Regardless, if two amplifiers have a response flat to within 0.1 dB from 20 Hz to 20 kHz, and the sum of all distortion is at least 80 dB down, then both amps will be audibly transparent and thus sound the same when auditioned properly (level-matched and blind).

COULD YOU PLEASE NOMINATE THE TWO AMPLIFIERS YOU USED TO CARRY OUT THIS EXPERIMENT?
 
