Creating an Audio Dartboard

@Steve

Why yes, but it is not as bad as one might imagine from a practical standpoint.

@Ron

Of course not. Nor does Dr. Olive dismiss the backgrounds of his test subjects. In fact, it IS extremely relevant when he mines his data.

Now suppose for a minute that you are a manufacturer, and that if you ARE targeting folks of a particular aesthetic bias, then that bias is a plus, not a minus. We forget these are businesses, not academic institutions. Their reason for being is to make us happy, not enlighten us. Leave that to the universities. It would be a mistake to expect an artisanal company trying to find a niche to keep its doors open to stop trying to zero in on its client base's hot buttons. Their specific "biases". Will the sound be absolutely neutral? Heck no. Will it be closer to the "truth"? Nope. Will it be closer to perfection? Well, for that biased subset at that point in time, YES. So maybe a microscopic step in the direction of unlocking the mysteries of reproduced music, but a heck of a step in making customers happy. Now that could be me or you, so I'm all for it. Bring on the bias! :)

Somewhere out there is a kid in a garage that will change how the rest of the world goes about their business someday because he is thinking way outside the box. A mindset of dogmatism and strict conformity stifles creativity and that, in my mind at least, hinders progress. Paradigms are busted from the outside not from within. We need both approaches as one validates or invalidates the findings of the other.

Now flame me for this if you like but I believe it and am sticking to my guns on this. Everything is biased because everyone is biased. The moment you ask the question "WHY?", THE shortest yet most profound question in the history of our species, a hypothesis follows. One that is to be proved or disproved. I refuse to believe that he who forms the hypothesis is not biased one way or the other. For some personal reason, he wants to prove it or disprove it. If he didn't care, then he wouldn't even have asked the question. To prove this point, extensive methodologies are developed precisely to SAFEGUARD results from the researcher's biases NOT to safeguard the researcher's sanity from unexpected results :) .

So I throw the question back. Given there is no perfect component out there, why did any of you choose what you have now? Did any of us blindfold ourselves in the dealer's shop? More importantly, should we? How important is actively participating in a DBT shoot-out to you when you select a component for your own use?

I'll give my answer now. It means Squat to me.
 
So I throw the question back. Given there is no perfect component out there, why did any of you choose what you have now? Did any of us blindfold ourselves in the dealer's shop? More importantly, should we? How important is actively participating in a DBT shoot-out to you when you select a component for your own use?
I am sure I am the rare exception :). But yes, I did select just about every component I care about using blind testing. I went as far as dragging my current gear to the dealer, had him switch things back and forth for me while I listened!

But you do make a good point. The thousands of people who treat DBT as gospel never do their own double-blind tests and have never participated in a single such test. Even when I present simple things for them to try, they always refuse.

I'll give my answer now. It means Squat to me.
It would mean something if, as a result of these discussions, someone, someplace decides to try the other guy's scheme :). That is the only way we can truly learn the other guy's argument.
 
Thing is Amir, I have. Speakers, CD players, cables even. There have been times under test when I could tell the difference, and others when the difference was night and day. Like I said, however, these are GROSS differences. I'm after nuances at this point. DBT is squat to me because too many times, after extended evaluations, I've been able to zero in on important nuances between products where, when barreling through blindly, I heard no gross differences, just "insignificant" ones perhaps. That going for an extensive evaluation saved me quite a bit of money is gravy.


Betting or wagering of any parts of your anatomy are strictly prohibited.;)

:eek::eek::eek::eek::eek::eek: It does show confidence though doesn't it? :D:D:D:D:D:D
 
Let me add something before someone jumps at me :). I own a ton of A/V gear that I also use which I did not critically select, but also don't rely on critically, if that makes sense :). The most critical gear I have is for analysis of audio performance, and that is where I went to the Nth level to be sure that the gear was reliable beyond my own personal bias.
 
Thanks for the thoughtful reply. If you're interested, let's keep the conversation going, and maybe Dr. Olive will join in, seeing as his integrity is being called into question.

@Ron

Of course not. Nor does Dr. Olive dismiss the backgrounds of his test subjects. In fact, it IS extremely relevant when he mines his data.

Perhaps I'm mistaken, but as I read your post you were not scrutinizing the bias of his test subjects but, instead, you were at a minimum implying Dr. Olive himself is biased and/or has a conflict of interest because he is employed by a manufacturer of certain audio products.

Putting that aside for the moment: if no one, trained or untrained, designer or end user, is immune from the numerous biases (and they are numerous), then I believe the next question is: how do we attempt to eliminate these biases?

It is late, but I thought I would address one other point before signing off for the eve:

Now suppose for a minute that you are a manufacturer, and that if you ARE targeting folks of a particular aesthetic bias, then that bias is a plus, not a minus. We forget these are businesses, not academic institutions. Their reason for being is to make us happy, not enlighten us. Leave that to the universities. It would be a mistake to expect an artisanal company trying to find a niche to keep its doors open to stop trying to zero in on its client base's hot buttons.

My first response is that the reason for being in business, first and foremost, is to make a profit. Beyond that, though, I see no reason why making a profit, making consumers happy, and enlightening us consumers necessarily are mutually exclusive goals. There certainly are countless examples of this.

Of course, it is entirely possible that a manufacturer could skew the design and implementation of a blind test, just as it is possible that a manufacturer could skew the design and implementation of a sighted test. Both principles hold equally true for university-run tests as well.

And as Amir has pointed out, blind testing in the hands of the wrong test designer/administrator is quite unreliable.

So do we throw the baby out with the bath water?

Floyd Toole, Sean Olive, Paul Barton, and numerous others, all currently or formerly employed by manufacturers, also present before the AES and are subject to peer review. If there is any flaw in the design or implementation of their tests, there is ample opportunity to expose it. As scientists, they may go back and conduct further studies. This is the very nature of the scientific method.

Finally, by definition sighted tests are unreliable, given that no one is immune from bias.

So, again, the question remains, how do we attempt to eliminate the numerous biases?
 
I think Jack makes a good point that the biggest challenge to double-blind testing typically is when the difference is exceedingly small. Our ears are good instruments, but they do have difficulty remembering things. Hard proponents of DBT declare such results a draw and conclude that no difference exists. I have learned to try to maximize the differences by finding the perfect material and the perfect setup, together with lots of training, to make it as easy as possible for the ear to detect differences. Still, I am not sure I have managed to get it perfect.
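The "draw" verdict mentioned here comes down to simple binomial arithmetic. As a rough sketch of how an ABX session is conventionally scored (the trial counts below are illustrative, not from any test discussed in this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the chance of scoring at least `correct`
    out of `trials` ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 correct: p ~ 0.038, conventionally "significant"
print(round(abx_p_value(12, 16), 3))
# 9/16 correct: p ~ 0.40, a "draw", but not proof that no difference exists
print(round(abx_p_value(9, 16), 3))
```

Note the asymmetry: with a subtle difference a listener may be only slightly better than chance, and a short session will usually come out a "draw". That is low statistical power, not evidence of inaudibility.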

And Ron states the issue with sighted tests, which is a much higher frequency of mis-voting due to bias. Despite my discipline, I have fallen victim to this a number of times, and embarrassingly so.
 
Thanks for the thoughtful reply. If you're interested, let's keep the conversation going, and maybe Dr. Olive will join in, seeing as his integrity is being called into question.



Perhaps I'm mistaken, but as I read your post you were not scrutinizing the bias of his test subjects but, instead, you were at a minimum implying Dr. Olive himself is biased and/or has a conflict of interest because he is employed by a manufacturer of certain audio products.

I made no such insinuation, Ron. I take exception to that. To answer the first part: I simply stated that the background of his test subjects is very important. He did select students from a music school in an example from another thread. He also groups respondents according to different criteria: age, sex, etc. This is normal, and if he DID NOT, then perhaps one could say he was not doing his job. What YOU seemed to be leading to was that a trained or screened panel was useless anyway since THEY have biases.

Like I said, don't we all? That is not meant to be disparaging or to call into question one's integrity. Again, like I said, the methodologies are designed to protect the researcher from his own biases. Sean included.

Besides, I'm not talking about purposely skewing test results. I'm talking about focusing tests on specific performance objectives. Those that focus on existing target market biases. There is a world of difference especially since the latter is in no way unethical. Hence my example of a RSS and an FGD.

As to removing the bias: in the case of DBTs, Dr. Olive removes whatever bias he has because he has set it up so he is NOT one of the respondents. He has passed it on to the selected sample. Thus we come to the crux of the matter and why it is squat to me as a consumer:

If I were to pass the selection of my equipment to a bunch of strangers under blind test who have absolutely no idea of what I like and what I don't like, what do you think the likelihood is that they will end up choosing something I like? I just don't get it. We thumb our noses at reviewer apostles because they buy on the strength of someone else's recommendations rather than their own trust in themselves, but you expect us to put these very personal decisions in the hands of others under the cloak of a scientific procedure? I am a consumer. It's my choice and no one else's. If I were a builder looking for a range of acceptable performance for as wide or narrow a base as possible then yes, I would use DBTs, especially in the product prototype assessment phase, but I am not trying to please the folks in any given range; I'm pleasing myself because it is I who will be using this by myself 80% of the time.

I do not question DBTs per se. I question its utility from a consumer stand point.

By the way, I answered your question as best I could. It seems you have forgotten to answer mine. Amir did. You don't have to, but it would help me see better where you are coming from.

"Given there is no perfect component out there, why did any of you choose what you have now? Did any of us blindfold ourselves in the dealer's shop? More importantly, should we? How important is actively participating in a DBT shoot-out to you when you select a component for your own use?"
 
I thought I'd toweled off but into the pool I go again!

DBTs are useful but I really believe they are rough tests. Very rough tests. They point out GROSS differences.

That is certainly not the case if the listeners are carefully selected and trained based on their hearing and ability to discriminate sound quality differences in a consistent fashion. We've developed listener training software to teach listeners how to identify and rate different types of distortions in audio, which gives us performance metrics on how discriminating and consistent they are. Each listener has an overall Listening IQ score that tells us who the best listeners are. We also statistically analyze and monitor their performance in actual product evaluations, and if they start to slip, we check their hearing and send them back for more training.
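The statistical monitoring described above can be sketched in miniature. The actual "Listening IQ" computation isn't given in this thread, so the metric below is a hypothetical stand-in: the test-retest correlation of a listener's ratings of the same stimuli across two runs.

```python
from statistics import mean, pstdev

def consistency_score(run1, run2):
    """Hypothetical listener-consistency metric: Pearson correlation
    between ratings of the same stimuli on two separate runs.
    Near 1.0 = highly repeatable; near 0 or below = essentially random."""
    m1, m2 = mean(run1), mean(run2)
    cov = mean((a - m1) * (b - m2) for a, b in zip(run1, run2))
    return cov / (pstdev(run1) * pstdev(run2))

# Illustrative numbers: a repeatable listener vs. an inconsistent one
print(round(consistency_score([8, 6, 3, 7, 5], [7, 6, 2, 8, 5]), 2))  # high
print(round(consistency_score([8, 6, 3, 7, 5], [4, 7, 6, 3, 8]), 2))  # low, near zero or negative
```

A panel manager could track a score like this per listener over time and, as described above, flag anyone whose repeatability starts to slip for a hearing check and retraining.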

There is room for both but in my opinion the importance of any ABX/DBT on the finalization of a product pales in comparison to that of evaluations done by a trained panel.

I agree, and that is exactly what we do. We use expert listeners to evaluate the prototypes. My experience so far indicates that if the experts approve of the product in terms of its sound quality, the untrained listeners will love it (check out the graph comparing loudspeaker preferences of trained versus untrained listeners).

Take Harman for example. What good would it do for, say, Kevin to do a no-holds-barred assault on loudspeaker design meant to cater to the most discerning Harman clients, then change the design based on the results of a DBT using folks off the street? It would be fine for an entry-level JBL, I'm sure, but a Halo product? I think not. I'll bet my left nut that in such a situation a panel had been assembled and employed, and that DBT respondents would be carefully selected.

You're exactly correct!
 
Does that introduce bias by selecting the type of panel assembled?

That is one of the concerns I've had in the past. Perhaps the training has biased the expert panel, or they have become biased towards the "neutral," "accurate" sound that perhaps certain market segments don't prefer.

So I've brought in different groups of listeners from outside Harman who aren't trained and run them through the same tests. Lo and behold, these untrained naive listeners pick the same loudspeakers as those preferred by the trained panel. The difference is that the ratings from the trained listeners tend to be higher overall on the preference scale, and there is more noise/inconsistency in the untrained listeners' ratings.

To achieve the same statistical confidence, you need about 100-300 consumers compared to 12 trained listeners. A 300-person panel of consumers can cost up to $100k. If you can extrapolate the ratings of the trained listeners to the market segments, you save a lot of time and money.
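The 12-versus-100-300 trade-off follows directly from how the standard error of a mean rating shrinks with panel size (sd / sqrt(n)). A minimal sketch with made-up noise figures (the standard deviations and precision target below are illustrative assumptions, not Harman's data):

```python
from math import ceil

def listeners_needed(rating_sd: float, target_sem: float) -> int:
    """Panel size needed so the standard error of the mean rating,
    rating_sd / sqrt(n), drops to target_sem."""
    return ceil((rating_sd / target_sem) ** 2)

target = 0.15  # desired precision of the mean preference rating
print(listeners_needed(0.5, target))  # trained panel (sd 0.5): 12
print(listeners_needed(2.5, target))  # consumers, 5x noisier: 278
```

Because the required panel size grows with the square of the rating noise, a panel that is five times noisier needs twenty-five times as many people for the same confidence, which is roughly the 12-versus-300 figure quoted above.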

Not all companies believe that trained listeners have the same preferences as untrained listeners, Bang & Olufsen being one of them. Borrowing techniques from the food/beverage industry, they have an expert panel that does only Quantitative Descriptive Analysis (QDA) on the product (not preference testing), determining what attributes define the sound of the product (how bright/dull, full/thin, distorted/undistorted, enveloping/unenveloping, etc.), and then rating each attribute's intensity on an absolute scale they've defined.

They then use a 300-500 sample of their targeted consumers to determine what their preferences in sound quality are. The expert panel then figures out what the intensities of those preferred attributes are, to ensure future products have that magic combination of sound ingredients.
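The two-stage approach described above can be caricatured in a few lines: the expert panel places each prototype on absolute attribute-intensity scales, consumers define a target profile, and prototypes are ranked by distance to that target. All attribute names and intensities below are hypothetical.

```python
def profile_distance(product: dict, target: dict) -> float:
    """Euclidean distance between a product's QDA attribute-intensity
    profile and a consumer-derived target profile (smaller = closer)."""
    return sum((product[k] - target[k]) ** 2 for k in target) ** 0.5

# Hypothetical target profile from a consumer preference study
target = {"bright": 6.0, "full": 7.5, "enveloping": 8.0}
proto_a = {"bright": 6.5, "full": 7.0, "enveloping": 7.5}
proto_b = {"bright": 3.0, "full": 9.0, "enveloping": 5.0}
# Prototype A sits much closer to the target "sound ingredients"
print(profile_distance(proto_a, target) < profile_distance(proto_b, target))  # True
```

The key difference from preference testing is that the expert panel never says which prototype it likes; it only measures where each one sits on the attribute scales.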

I don't believe QDA is necessary for the evaluation of audio components if your philosophy as an audio manufacturer is to reproduce the art (the recording or musical event) as accurately as possible. If consumers like boomy bass and tizzy highs, for example, and that wasn't in the original recording, you are then deviating from what the artist intended, and editorializing his or her art. That is something we do not subscribe to.

Moreover, my tests with naive listeners so far indicate they tend to prefer the most accurate loudspeakers, much like the trained listening panel.
 
So I've brought in different groups of listeners from outside Harman who aren't trained and run them through the same tests. Lo and behold, these untrained naive listeners pick the same loudspeakers as those preferred by the trained panel. The difference is that the ratings from the trained listeners tend to be higher overall on the preference scale, and there is more noise/inconsistency in the untrained listeners' ratings.

To achieve the same statistical confidence, you need about 100-300 consumers compared to 12 trained listeners. A 300-person panel of consumers can cost up to $100k. If you can extrapolate the ratings of the trained listeners to the market segments, you save a lot of time and money.
That was our experience at Microsoft also. Here is what we found: expert listeners managed to outdo most listeners, including the majority of audiophiles. They could more consistently pick the control (i.e. saying there was no difference when there wasn't) and better hear differences when there were any. Only a small percentage of the general testers (which again included audiophiles) could match them. I applauded the few people who were "gifted" enough to have the same level of hearing ability without any of the training of the experts.

Assembling larger groups also has another cost: recruiting and testing time is lengthened substantially and that can have a higher cost in a commercial application than the cost of the test itself!

Not all companies believe that trained listeners have the same preferences as untrained listeners, Bang & Olufsen being one of them...
I would add other mid-level Japanese companies to that list when building low-end products. They would, for example, test a boombox in their target market (college kids in a dorm) at large scale, and even had targeted sound curves for different markets.

I don't believe QDA is necessary for evaluation of audio components if your philosophy as an audio manufacturer is to accurately reproduce the art (the recording or musical event) as closely as possible...
Agree, and the only counterexample I have is for video. With VC-1, we all had a preference for sharper images over soft and blurry ones. Unfortunately, at lower bit rates with extremely high compression artifacts, a good number of people would take soft and blurry over a sharper picture with slightly more artifacts. So we lost a fair number of benchmarks against MPEG-4 AVC at Internet rates. Fortunately, that very factor helped us win a decisive victory in codecs for high-definition video (HD DVD and BD), where quality did matter :). In the end, we made the scheme adaptive but still favored sharpness, which cost us some business on the general Internet.

I wonder if the same is true to some extent in speaker design. Surely a slight bump in the mid-low frequencies gives a level of warmth that some people associate with good speakers, in smaller sizes at least. No?
 
Jack

In the absolute, we aren't sure of anything... We do, however, have a frame of reference. We use logic, and if we were not to agree on that... end of discussion, end of a lot of things. So it is our frame of reference.
Nowhere but in high-end audio have I seen this tendency toward questioning science for, after all, an endeavor which relies on science and technology as much as music reproduction does. Science is at the very basis of our audio systems, yet we would like to refute it the second it invalidates some of our observations.
I am often reminded of an anecdote about Einstein boiling his eggs; he most likely thought of it as the rest of us do: 10 minutes hard, 2 minutes soft... Same for us... I for one didn't choose any of my components in BT. I chose based on how they sounded to me... When a cable costs the same as an apartment in NYC (not yet, but we are getting there :) ) and the manufacturer claims "Quantum Tunneling", our perceptions and the claims must be tested, and thoroughly. We should open ourselves to the real notion that our senses are not objective, and that they can be fooled but also trained toward objectivity; a separate discussion, by the way...
So I construe your argument, no offense intended, as nihilist in its essence. We certainly don't know it all, but we are able to know more, to move forward through the judicious use of science and, where it is lacking, through the use of our properly trained senses, to arrive at better audio systems...

Frantz
 
Jack

In the absolute, we aren't sure of anything... We do, however, have a frame of reference. We use logic, and if we were not to agree on that... end of discussion, end of a lot of things. So it is our frame of reference.
Nowhere but in high-end audio have I seen this tendency toward questioning science for, after all, an endeavor which relies on science and technology as much as music reproduction does. Science is at the very basis of our audio systems, yet we would like to refute it the second it invalidates some of our observations.
I am often reminded of an anecdote about Einstein boiling his eggs; he most likely thought of it as the rest of us do: 10 minutes hard, 2 minutes soft... Same for us... I for one didn't choose any of my components in BT. I chose based on how they sounded to me... When a cable costs the same as an apartment in NYC (not yet, but we are getting there :) ) and the manufacturer claims "Quantum Tunneling", our perceptions and the claims must be tested, and thoroughly. We should open ourselves to the real notion that our senses are not objective, and that they can be fooled but also trained toward objectivity; a separate discussion, by the way...
So I construe your argument, no offense intended, as nihilist in its essence. We certainly don't know it all, but we are able to know more, to move forward through the judicious use of science and, where it is lacking, through the use of our properly trained senses, to arrive at better audio systems...

Frantz

Actually that's not totally correct. We constantly question the science of athletic training; our ultimate endpoint is performance, and I don't care if you are the most book-read coach: if your athletes don't win, then you're fired. If you go and read the Russian coaches' books and journals, one finds that science guided their athletes' training but wasn't the end-all; the Russians constantly questioned whether their science (and what they were measuring, say in this case biomechanics) improved their on-the-field performance. If it didn't, the Russian coaches went back to the drawing board and rethought their theories. That's what brought along the present-day training methodologies used with Olympic, professional, college and HS athletes.

Athletic training represents a careful blend of science and experience learned over the years, in large part like audio hearing, because of the interindividual differences. In fact, individuality, along with specificity, overload and accommodation, all factors relating to stressing the body, is one of the tenets of training.

Science has its limitations, no better characterized than by the fact that we can only measure what we know. Or our measurement capability is limited (especially until CERN), such as in the study of subatomic particles. Take for instance adipose cells. Until recently, they were viewed as a nuisance; today they've been reclassified as an essential part of our endocrine system, since they secrete at least 13 different hormones essential to our metabolism. Why did that happen? Because we developed new tools to measure and detect new hormones. Endocrinologists weren't satisfied with the status quo, since they were making observations that didn't match the outcomes. The same goes for detecting many polypeptides, since their half-lives are very short, making measurement extremely difficult. Or the finding of more and more neuropeptides, since their detection is very difficult.

The problem with many of the so-called "pseudo audio scientists" is that they believe we have all the measurements we need. OTOH, we have people like Keith Johnson constantly questioning why our hearing is still superior to measurements and thus trying to develop new tests that mimic how our ear and BRAIN perceive sound.

This field reminds me much of the muscle heads in the gym who read Muscle and Fiction and believe everything they read. Everything is about muscle (insert ear here). But in fact, everything, as the Russians taught us, begins with the CNS sending movement patterns to the muscle, telling the body what to do. The brain doesn't know a bicep from a quadricep. But what the brain knows is the movement patterns developed over the course of our lives, and it arranges the variables needed (the brain doesn't have the capacity to store the total movement pattern, only invariant parts like dynamics, etc.) to complete the movement. So that's why the more complex the task we ask our body to complete, the more "motor learning" is required. And to be honest, there's not a lot of difference between math, motor, art and music skills when it comes to learning. In fact, some people think cognition and movement are one and the same.

In audio, all we hear about is the ear. Very few talk about the brain (Woody Allen's second favorite organ), and not only how male and female brains are different as a result of different hormonal influences, but how all of our brains are different. And how does that affect our hearing, and our perception? And what about the two times in our lives (the third trimester and age 12) when the brain prunes the connections that aren't used and strengthens those that are (that's why, for example, it becomes harder to learn a language when we're older)? Or those studies a while back that indicated that perfect pitch was related to the age at which children were exposed to music (that's not any different from movement, where we have, say, ages 8-9 that are important for developing explosive power, ages 5-6 for movement patterns, etc.).

And do you think that Einstein sat back and was satisfied with the theory of general relativity? That was just a small part of his big plan, and I'm sure some scientists thought his theories were out there and wondered where his proof was.
 
Myles

Again we go around. We don't know it all. We are learning; we will be wrong at times and right at other times. Questioning science is the essence of knowledge. We must question science, and I have no problem with that. We must, however, remain consistent in our questioning. I have no doubt that science is not all-knowing and all-encompassing, but its current methods have proven their worth and their usefulness. We can't invoke science to fit an argument and reject it when it doesn't fit...
If the phenomenon is physical, it should be replicable. When replicability is suspect or not possible, and the only variable in an experiment is knowledge, then logic points toward it as the suspect: here, knowledge of the component or of something about the component... The removal of this knowledge is important to arrive at better and more objective conclusions. Note the adverbs: I don't claim absolute objectivity for the method... it is MORE objective. It is not that we know it all, but the scientific method has proven it allows us to progress, to know better and more, and it rests on objectivity and determinism.
Back to high-end audio:
I am by no means professing that all components sound the same. I would not even participate in this board if that were the case, and besides, the contrary is easily verifiable with what is decried by many of us audiophiles, namely blind testing. Under these decried and truly humbling conditions, some components, to use a colloquial expression, don't cut the mustard... cables in particular.
To conclude: whatever we hear is measurable. It can be that sometimes we are not measuring the right things, or don't even know what to measure. It doesn't mean we CANNOT measure it, or that we shouldn't persevere and find a way to measure it...

Frantz
 
Myles

Again we go around. The point is as simple as this: why, when some knowledge is removed, do our "perceptions" change? This bias needs to be removed, and that is scientific. Science doesn't profess to know everything, but it aspires to know continuously, and it will be wrong sometimes... adipose cells being a case in point.

Actually, science is probably about as correct as the weather forecasters :( One learns pretty quickly when researching a field just how many times theory has changed. Take cholesterol for instance. For years, MDs and science accepted that it was the end-all, until someone found that people with low cholesterol levels were still experiencing myocardial infarctions. Then it was triglycerides, then HDL/LDL, and now C-reactive protein.

Again, what separates the "real" scientists from the pseudo-scientists is asking the right questions, and as John Wooden said best, "It's what you learn after you know it all that matters the most."
 
Myles, reading your post it seems that you believe in some amount of science in your field of work when it comes to measuring results. Do you hold the same view in audio testing, that blind testing can generate useful results at least some of the time?
 
Myles

Again we go around. The point is as simple as this: why, when some knowledge is removed, do our "perceptions" change? This bias needs to be removed, and that is scientific. Science doesn't profess to know everything, but it aspires to know continuously, and it will be wrong sometimes... adipose cells being a case in point.

We are not infallible and science is not infallible.

Because they are perceptions and subject to interindividual differences. And how we perceive things is not cast in stone; perception is greatly affected by extrinsic and intrinsic factors.
 
Let me address these questions you raise.

To answer the first part: I simply stated that the background of his test subjects is very important. He did select students from a music school in an example from another thread. He also groups respondents according to different criteria: age, sex, etc. This is normal, and if he DID NOT, then perhaps one could say he was not doing his job. What YOU seemed to be leading to was that a trained or screened panel was useless anyway since THEY have biases.

We have that information about the high school students, but I didn't bother to include it in my blog posting. If you carefully read the slide show, it says the students completed a pre-survey. Those results will be included when I write this up as a proper scientific paper. You have to remember that this group of students was not recruited, but came as volunteers on a class field trip. If we were interested in the attitudes and responses of a particular market segment, then they would be carefully recruited and screened based on the salient demographic information.

Thus we come to the crux of the matter and why it is squat to me as a consumer:

If I were to pass the selection of my equipment to a bunch of strangers under blind test who have absolutely no idea of what I like and what I don't like, what do you think the likelihood is that they will end up choosing something I like? I just don't get it. We thumb our noses at reviewer apostles because they buy on the strength of someone else's recommendations rather than their own trust in themselves, but you expect us to put these very personal decisions in the hands of others under the cloak of a scientific procedure? I am a consumer. It's my choice and no one else's. If I were a builder looking for a range of acceptable performance for as wide or narrow a base as possible then yes, I would use DBTs, especially in the product prototype assessment phase, but I am not trying to please the folks in any given range; I'm pleasing myself because it is I who will be using this by myself 80% of the time.

We use double-blind tests mostly as a tool for product development. The acoustic design of a new product is already 80-90% along, based on objective measurements, before we do a DBT. I've developed a model that analyzes the loudspeaker measurements and can predict the outcome of the listening tests with about 86% accuracy.
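The idea behind such a measurement-to-preference model, fitting a regression from objective curve metrics to blind-test ratings, can be sketched with a toy one-predictor example. All numbers below are made up for illustration, and the actual model presumably combines several measurement-derived variables rather than a single one:

```python
def fit_line(x, y):
    """Least-squares intercept and slope for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Toy data: on-axis frequency-response deviation (dB) vs. mean blind-test
# preference rating; flatter response earns higher ratings.
deviation = [1.0, 1.5, 2.0, 3.0, 4.5]
preference = [7.8, 7.2, 6.9, 5.8, 4.4]
intercept, slope = fit_line(deviation, preference)
print(round(slope, 2))  # negative: more deviation, lower predicted preference
```

Once fitted on past DBT results, a model like this lets you score a new prototype from its measurements alone and reserve the expensive listening sessions for confirmation.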

We also use the tests as a competitive benchmarking tool, telling us whether the product sonically outperforms the targeted competition. We don't currently use that data very effectively for marketing/sales purposes, but that might change.

We don't try to tell audiophiles/consumers that our tests guarantee they will like our products: that decision is ultimately up to them. Hopefully, at least, they will have more confidence in our products when they compare our elaborate measurement facilities, testing methods, and scientific/engineering experts on staff to what the competitors have.

We already know that the audio reviewers are impressed when we show them our facilities and measurement capability (e.g. the room with the automated speaker shuffler). After seeing it they tell us they wish they could afford such elaborate tools for evaluating products.

So we know that we are doing some things right, at least.
 
Myles, reading your post it seems that you believe in some amount of science in your field of work when it comes to measuring results. Do you hold the same view in audio testing, that blind testing can generate useful results at least some of the time?

I guess what I'm saying is that we can't accept anything blindly (NPI). A healthy dose of skepticism goes a long way here. But we can do our best to optimize, say, how a cyclist fits their bike and get the most power out of their legs/ankles.

And when we're dealing with athletic performance, we still don't have any real tests that will predict how, say, a college athlete will perform at the pro level. We know that how many reps they can do in the bench press doesn't correlate with success. The 40-yd dash has little correlation since few sports are linear. Surprisingly, one measurement that does seem to correlate somewhat with athleticism is the vertical jump. Jumping is a very complex motor skill and it tells a lot about the athlete.

Then again, there are always some intangibles that we can't measure that relate to success.
 
Hi Sean,

As a long time Proceed and Mark Levinson owner, having gone through Series 2, 3, and 4 as well as owning Revel F30s in the past, I am very glad to get a chance to communicate with you.

I am doubly glad that I get to keep my left nut! :) I hope to learn even more from you in times to come!

Jack
 

