Audio Science: Does it explain everything about how something sounds?

Again, and I think I keep repeating myself so this is the last time :D, one can achieve a flat response with a natural paper-composite material that sounds natural.
And one that they use in the Revel Salon for example, something synthetic IIRC, still has a flat response but it doesn't sound natural, IMO.
In this case the measurements don't say ****.
Another thing: one can achieve bass output via multiple smaller drivers (they are faster and whatnot) or via big woofers. Although both might have acceptable bass output, big woofers excite the air more naturally.
 
How can that be the only explanation? Can I say that I want to take megadoses of vitamins to treat my cancer, and that the fact that science can't explain why that works is the fault of current science? Surely there is another possibility: that what you believe is wrong.


Again, if you are going to argue with people who believe in science, then what you just said is a non sequitur. Don't engage us if all you have to offer is opinions that you can't back other than "other people believe it too." Many people believe in treatments for cancer that science doesn't recognize. That doesn't make their opinion right.


You are claiming that if something other than the loudspeakers is changed, listener preference for a loudspeaker changes, and hence the outcome of the tests is not valid. That has nothing to do with what the loudspeaker "needs." You need to show that if you did the test with a different amplifier, the outcome would change. Do you have any such data, or are we just dealing with speculation with no foundation?

Health, vitamins, cancer and medicine analogies are improper tools in any audio debate. I expected better from you. I am now completely out of this thread.
 
Because it would not change the outcome of orange soda against coke. Those two are so different in taste that the glass they are served in doesn't change the outcome. At least it is not plausible that it would. Likewise, we are discussing preferences for loudspeakers in a test that I personally participated in. The argument is that should they have changed amplifiers or whatever, that outcome would change. I want to hear some plausible argument that such a change would make the outcome of loudspeakers that sound so different by themselves different. You said you are a man of your experiences. I just shared my direct experience. Tell me why I should not believe it and instead believe you with no experience with said test.


Sorry, no. The test is what everyone does day in and day out: listen to two loudspeakers and express which one they like better. The only difference is that factors which have nothing to do with the sound as such, namely the make, model and looks of the loudspeakers, are hidden. Tell me why I should not trust my personal experience in such a test. That I should prefer a speaker that sounds worse because it looks better? Is that what you are saying?

I am speaking for myself. I am not asking you to believe what I write. I am asking you to respect that others may not share your vision of audio science and that we have perfectly valid (not illogical) reasons for our opinions, based on our own individual experiences.

The Harman tests work on preference and as such this is primarily marketing science, not what I would call audio science. Audio science would involve comparisons to see if differences can be perceived; for example, in the case of speakers it would involve referencing live instruments vs. playback of those live instruments. (Hard to do.) This is much easier to do with some electronic comparisons, e.g. line-level preamplifiers or ADC-DAC digital loops, software format conversion of recordings of jangling keys, etc.

Training is not necessarily the answer, either. If there are two types of distortion, A and B, and brand X products have type A distortion and competitive brand Y products have type B distortion, then training listeners to easily recognize and identify type A distortion may be a good marketing strategy for the owners of brand Y.
 
Health, vitamins, cancer and medicine analogies are improper tools in any audio debate. I expected better from you. I am now completely out of this thread.

That would be a shame, microstrip. You make some of the best observations/points in the entire thread. I understand how frustrating some of these exchanges are. I have been frustrated myself, but I am also learning some things from members like you and others who question the validity of some of the assertions. And I have a better understanding of what audio science can and can not explain.

I am also learning that most of the posters, perhaps also the membership at large, are open minded and bring their own excellent experiences to the discussion. They are willing to learn and accept or discuss reasonable points of view. The dogmatic positions at the two extremes are not held by many of the posting members, and I presume not much of the membership at large.
 
??? I have taken the test twice, so what I shared was my experience, which was identical to what the formal listening tests consist of. You are not given any instructions on how to evaluate one loudspeaker against the other. It is up to you to do that however you choose. When you sit there, you are immediately presented with loudspeakers which tonally sound hugely different. It is that difference that you wind up judging...

You hear all the samples being tested, say three loudspeakers, and you give each a score from 1 to 10. No different than if you walked into a showroom and were choosing among them. There is no fourth loudspeaker that serves as the reference.

Discriminating between two variables to identify difference is vastly different from discriminating between two variables in which we are required to apply judgement (i.e. preference).

The former requires the removal of bias, while the latter is dependent upon it. Kahneman & Tversky; Keltner & Loewenstein; Loewenstein & Lerner; and Bechara, Damasio & Damasio have all published on the primacy of emotion in our ability to make decisions requiring judgement.

Discerning a difference does not require any emotional facility. In fact, as ABX testing has repeatedly shown, emotion is an inhibitor to discrimination, and the purpose of ABX testing is simply to establish a statistically significant level of confidence in discrimination - not, it should be noted, an empirically significant one in which the subject’s intentionality is factored in (i.e. they could be guessing/lying and pass).
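
As an aside, the "statistically significant level of confidence" in an ABX result usually comes from a one-sided binomial test against chance. A minimal sketch, with hypothetical trial counts, just to show the arithmetic:

```python
# Minimal sketch of the statistics behind an ABX claim: test the number of
# correct identifications against guessing (p = 0.5 per trial).
# The trial counts below are hypothetical.
from scipy.stats import binomtest

trials = 16    # hypothetical number of ABX trials
correct = 12   # hypothetical number of correct identifications

# One-sided test: how likely is a result at least this good from pure guessing?
result = binomtest(correct, trials, p=0.5, alternative='greater')
print(f"{correct}/{trials} correct, p = {result.pvalue:.3f}")
# p below 0.05 is the usual threshold for saying a difference was heard;
# note it says nothing about why the listener answered the way they did,
# which is exactly the intentionality caveat above.
```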

Bechara, Damasio & Damasio have shown that subjects with deficiencies in the prefrontal cortex (the area of the brain that processes emotion) have difficulty making decisions requiring judgement, even down to simple tasks such as what clothes to put on. (This was of particular interest to me, as I spent two years as the co-ordinator of a youth/young adult drug rehab, where delaying gratification and the ability to discern future consequences were imperative to the residents' chances of long-term recovery.)

What’s more, Bechara et al believe the amygdala (the source of memory processing, decision making and emotion) uses emotion to modulate memory and bias decisions, with no separation between the two functions.

Therefore, any test designed to suppress emotional response from the subject will lead not only to an impairment in judgement (biases) but an impairment in memory access/storage.

This is problematic in light of the fact that listening to music (live or prerecorded) should (hopefully) evoke an emotional response in the subject, whereby the bioregulatory process creates somatic markers associated with memory, and those markers are used by the brain for future decision making and consequence evaluation. If emotional response is suppressed through the testing protocol, in a test of components whose very function is to deliver emotion/intention recorded as sound, then a lesser notion is allowed to gain preference in the hierarchy of decision making.

Harman would like us to believe this is on-axis/off-axis frequency response curves. They take several speakers whose measurements, made with steady-state signals, are compiled into a spatial average. They then play music (amplitude and pitch over time) and ask for subjective preference ratings, where each speaker is rated out of ten. They combine the objective and subjective data, find a correlation between measurements and preference and - voila! Audio science tells us all we need to know about choosing our speakers.

But does it? No, it tells us that people will rate a speaker highly in a comparative environment where the speakers under evaluation vary wildly in their spatial averaging, creating differences to be detected rather than a judgement in which consequences must be considered. Well, duh. Rating differences is not the same as making decisions, because decisions involve emotion and consequence, and there is no emotional facility required in rating a speaker out of 10 based on comparative tonal differentiation. Just detection of difference and giving the degree of difference a number.
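
For what it's worth, the correlation step described above is simple bookkeeping once you have a measurement-derived score and a set of ratings. The sketch below uses invented response curves, invented preference ratings and a toy "flatness" metric; it is not Harman's published predictor, which weights several curve statistics.

```python
# Toy illustration of correlating a measurement-derived score with preference
# ratings. All data are invented; this is not Harman's actual model.
import numpy as np

rng = np.random.default_rng(0)
n_points = 200   # points in each hypothetical spatially averaged response

def flatness_score(response_db):
    """Toy metric: flatter (lower deviation) curves score higher."""
    return -np.std(response_db)

# Hypothetical spatially averaged responses (deviation from flat, in dB).
speakers = {
    "A": rng.normal(0, 0.8, n_points),   # fairly flat and smooth
    "B": rng.normal(0, 2.5, n_points),   # wobbly
    "C": rng.normal(0, 5.0, n_points),   # very uneven
}
preference = {"A": 7.8, "B": 6.1, "C": 3.9}   # invented 0-10 ratings

scores = np.array([flatness_score(r) for r in speakers.values()])
ratings = np.array([preference[name] for name in speakers])
r = np.corrcoef(scores, ratings)[0, 1]
print(f"correlation between toy flatness metric and preference: r = {r:.2f}")
```

Whether that correlation answers the judgement-versus-detection question raised above is, of course, the point in dispute.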

Is Harman taking three or four speakers that have the same or similar spatial averaging and asking for subjective evaluations? No. If it did, the subjects would be forced to evaluate the speakers on other, less measurable, potentially ephemeral and wholly subjective variables - for instance, how each one made them feel, which can be measured via skin conductance response, heart rate, respiratory frequency, skin temperature and facial EMG - and, shock, horror, possibly lead to competing manufacturers' products being rated more highly.

What then? I’d imagine we would see exactly what we see on this forum - the vast majority of us don’t have Harman speakers.

Did we buy them with our eyes, then? Our egos? Our lack of erectile functioning? Very possibly so. We can’t exclude those biases from our decision making processes.

But that’s not to say that Harman’s testing is without problems, or that further investigation from independent researchers wouldn't be valuable.
 
I am speaking for myself. I am not asking you to believe what I write. I am asking you to respect that others may not share your vision of audio science and that we have perfectly valid (not illogical) reasons for our opinions, based on our own individual experiences.

The Harman tests work on preference and as such this is primarily marketing science, not what I would call audio science. Audio science would involve comparisons to see if differences can be perceived; for example, in the case of speakers it would involve referencing live instruments vs. playback of those live instruments. (Hard to do.) This is much easier to do with some electronic comparisons, e.g. line-level preamplifiers or ADC-DAC digital loops, software format conversion of recordings of jangling keys, etc.

Training is not necessarily the answer, either. If there are two types of distortion, A and B, and brand X products have type A distortion and competitive brand Y products have type B distortion, then training listeners to easily recognize and identify type A distortion may be a good marketing strategy for the owners of brand Y.

Nice post. The highlighted sentence reminds me of a story I heard from the owner of my pair of Symdex Gamma speakers. I don't know if Kevin Voecks of Revel was involved with Symdex at the time or before, but this is what I was told: to judge the accuracy of the Symdex speaker, the designers used two nearly identical rooms. One room had a performer playing into a mic, and the other had a Symdex speaker (one or a pair) in the same location, fed directly from that mic. Listeners were asked to compare the sounds in the two rooms. The point is that they were trying to make as direct a comparison to live sound as they could in order to evaluate their speaker design against a live source. I wish I knew more of the details of that test.
 
You are arguing a very different point. The debate is whether what we hear at home can approach what one would have heard in the real venue. It is not whether elements "survive" the transformation into finished music. Of course elements do; a female singer doesn't become a male singer in the process :). The question is, if I magically did an A/B between you sitting in the live presentation and listening through a stereo system, would they be indistinguishable? The answer is that you can't remotely, in a million years, achieve that with a stereo system with music as it is produced today. No microphone captures the room. Music is mixed, EQed, transformed, etc. To say nothing of the fact that tonally you have no idea how close your system is to the one used in the production of the music, the one the talent heard.

This is the concept we need to understand so we can forever put behind us the #1 marketing technique used to sell us audio gear, dating back to the days of Edison claiming the same "live" music reproduction.

That is what we buy as consumers. But no, that does not mean in any way, shape or form that if our equipment achieves nirvana, it somehow remotely matches what was heard in a live session. We are hearing a completely different presentation of the original art.

I was showing the frequency response of two loudspeakers: one that is a top brand in professional music production circles (Genelec) and one in consumer audio (Wilson). When fed the identical signal, the two reproduce wildly different tonal responses. That tonal difference is a linear transformation that everyone, no matter how critical their hearing, can hear and differentiate. No way would the sound heard through the Genelec be the same if you replaced it in the same production room with the Wilson. Therefore we have already radically changed what we hear versus what the talent heard. So no way can we say that we are replicating the "live" experience, even if we forgive and forget the recording process that led to the final stereo mix.
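
To put a number on "wildly different tonal response": feed both speakers the identical signal and the audible difference is just the gap between their magnitude responses in dB. The two curves in this sketch are invented stand-ins, not measurements of any actual Genelec or Wilson model.

```python
# Sketch: tonal difference between two speakers fed the same signal, expressed
# as the difference of their magnitude responses in dB. Both curves are
# invented stand-ins for illustration only.
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 64)   # Hz

# Hypothetical on-axis responses, in dB relative to 1 kHz.
speaker_1 = 0.5 * np.sin(3.0 * np.log10(freqs))          # mildly wavy
speaker_2 = -3.0 * (np.log10(freqs) - 3.0)               # tilted downward

difference = speaker_1 - speaker_2                        # dB
worst = np.argmax(np.abs(difference))
print(f"largest tonal deviation: {difference[worst]:+.1f} dB "
      f"at {freqs[worst]:.0f} Hz")
# Broad differences of several dB are easily audible as a tonal shift,
# which is the point being made about the two speakers above.
```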

In other words, there are two major barriers to claiming that we are approximating the live experience. First, no one has attempted, nor is it possible, to capture a live session with 100% transparency into two loudspeakers. And second, the production and playback chains have no standards that would make the sound the same. Combined, these mean that no way, no how, no matter how good you think your system is, are you ever hearing anything like what was heard in the live recording session or in the final stereo production.

Sorry, missed this.

Er... Probably just best if I say, firstly, that an omni does a pretty good job of capturing the room. Two crossed figure-8s at 90 degrees do. M/S can. It depends on the mic and its polar pattern, what you're trying to record and in what room. Some of us use the fanciful term "reach" in describing how a mic captures a room, but that's dirty talk, I know.

And secondly, if you read my previous posts you'll discover I come down pretty firmly on the side of believing no hi-fi system approaches live in its totality. But as for aspects of live - musically meaningful and immeasurable things like presence, immediacy, vitality, touch, texture - I've heard some pretty convincing systems, mostly vinyl and often through tubes and horns. But that's dirty talk, I know.
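
For readers unfamiliar with the M/S technique mentioned a couple of posts up: the decode back to left/right is just sum-and-difference matrixing of the mid (forward-facing) and side (sideways figure-8) signals. A minimal sketch with a made-up buffer; gain conventions vary between implementations.

```python
# Minimal mid/side (M/S) decode: L = M + S, R = M - S.
# The input here is random noise, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
mid = rng.standard_normal(48000)          # hypothetical mid-mic signal, 1 s at 48 kHz
side = 0.5 * rng.standard_normal(48000)   # hypothetical side-mic signal

left = mid + side                          # sum-and-difference matrix
right = mid - side

# Scaling the side signal before the decode widens or narrows the captured
# "room", which is part of why the format is handy for recording engineers.
print(left[:3], right[:3])
```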
 
I don't know these recordings and can't speak to them, but if they are all you listen to, by all means buy speakers that roll off as these recordings rise, and achieve balance... Perhaps Mercury understands its audience and is aiming its product at systems with significant high-frequency roll-off, or at listeners with significant high-frequency hearing loss.

Tim

Dude, you're retired now. You should get out more.

P.S. OK, just 'cause I feel sorry for you: http://www.stereophile.com/content/fine-art-mercury-living-presence-recordings#RDciiILLL8EzUWJZ.97
 
This argument about spatial realism has been repeated here many times, and given what's available to us, I don't understand how it keeps getting traction. Yes, stereo can present a very pleasant sense of space, but no matter how good your system is, the odds are very high that you've very rarely even heard it attempt to create realistic spatial reproduction. I could be wrong, though; it happens. Help me with this. Would those of you making the argument for live listening as the reference please list for me the number of recordings in your collection that you know were recorded direct to tape or digital, from a single pair of microphones in a listening position in a performance venue?

Tim

I'm not making an argument for live listening but I think you'd love Chris Whitley's Dirt Floor. Recorded by Craig Street with one Speiden ribbon stereo mic in his dad's workshop. Still my favourite album of his.

Cowboy Junkies Trinity Sessions, but that used the Calrec Ambisonic Microphone, which is technically four capsules in one mic, and Margot Timmins actually sang into an SM58 through a PA to better compete with the rest of the band, but hey, who's counting?

All the Naim early releases were done by Ken Christianson through two AKG 414 EBs, though I no longer have any of those recordings.

Early BIS were all two mics into a Studer.

All of Tim Berne's Bloodcount recordings were two mics hung from the ceiling.

Water Lily Acoustics' Kavi Alexander used Pearl ELM-8 and C mics almost exclusively in Blumlein. If you've not heard A Meeting by the River by Ry Cooder & Pandit Vishwa Mohan Bhatt, I highly recommend you track it down.
 
Perhaps I should have said it's the only one we have. There are few standards for the recording/mastering/playback chain, and that is regrettable, resulting in quite a few bad recordings. But the recording is all we have; it is all the system sees. The system isn't aware of your concert listening experience, your taste in music, your preferences in tonality. It is blissfully unaware of your room. All it knows is the incoming signal, and it will reproduce it within its limitations. It can't do anything else... unless you add some sort of EQ or processing.



I don't know these recordings and can't speak to them, but if they are all you listen to, by all means buy speakers that roll off as these recordings rise, and achieve balance. If they are not all you listen to, I trust you understand that other recordings will sound dull and "rolled off" on this system. I'd suggest that ugly stepchild of audiophilia: equalization. Digital would be even better, as you could program a preset for these CDs and simply hit the bypass button when playing something that is better recorded. Do you play vinyl? How does that sound on a system optimized for these CDs?



I'm a musician, and I've done exactly what you're suggesting quite a few times. Record with a single pair of stereo mics, then play it back through a quality monitoring system in the same room from the same position and you do get a much better understanding of how accurate your recordings are. That doesn't make me a first class audio citizen, but it does make me very skeptical of those who talk about realism without accuracy. Move that monitoring system to a different room, and the recording will sound different. Master that recording with a "substantial high frequency roll-off" and it will, of course, sound substantially different, even in the same room and position.

Recordings are, more often than not, engineered for the market they'll be sold to. So many popular recordings are compressed because most listeners these days listen through headphones, iPod docks and car stereos. The closest thing they get to the kind of listening experience audiophiles have is when they are rolling down the road with a noise floor coming up through the floorboards that would make the worst home system sound quiet. Perhaps Mercury understands its audience and is aiming its product at systems with significant high-frequency roll-off, or at listeners with significant high-frequency hearing loss.

Tim

Given that you are an audiophile, I must assume that you aren't a classical music lover, because it would be inconceivable that a classical-music-loving audiophile would be unfamiliar with the Mercury Living Presence recordings. Back in the early 1960's, when I first got into hi-fi, my two heroes were Robert Fine (Mercury Records) and Lewis Layton (RCA). One could be assured that any recording (LP or prerecorded tape) made by these engineers would be excellent.

I guess it's a good thing you are a musician and not a mastering engineer. If you want to see a good discussion of what's involved in mastering recordings and getting the equalization right, I suggest you read Bob Katz's book "Mastering Audio: The Art and the Science". Take particular note of Chapter 6, Monitor Quality. There he suggests that high-frequency adjustment needs to be done by ear using a set of at least two dozen reference recordings. Indeed, that is what I did, and of those the Mercury Living Presence CDs were extreme outliers, being brighter than any other reference recording. With my house curve these recordings are still slightly bright, but not objectionably so. The best recordings have perfectly accurate tonality; e.g. the Ivan Fischer Mahler 9 has a near-perfect tonal balance, with all the instruments sounding much like one would hear from row 10 to row 20 in a great concert hall. If all I had were the Mercury Living Presence recordings, I would probably have turned down the high frequencies another half dB or so at 10 kHz.
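
As an aside, that kind of gentle treble trim is straightforward to sketch: a high-shelf biquad from the well-known RBJ audio-EQ cookbook, set to about -0.5 dB at 10 kHz. The sample rate and slope are illustrative choices, not a recommended house curve.

```python
# Sketch of a gentle treble trim: RBJ-cookbook high-shelf biquad, -0.5 dB
# around 10 kHz. Parameter choices are illustrative only.
import numpy as np
from scipy.signal import freqz

fs = 48000.0     # sample rate, Hz
f0 = 10000.0     # shelf corner frequency, Hz
gain_db = -0.5   # shelf gain, dB
S = 1.0          # shelf slope

A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
cosw = np.cos(w0)

# High-shelf coefficients from the RBJ audio-EQ cookbook.
b = np.array([
    A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
    -2 * A * ((A - 1) + (A + 1) * cosw),
    A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
])
a = np.array([
    (A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
    2 * ((A - 1) - (A + 1) * cosw),
    (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
])
b, a = b / a[0], a / a[0]

# Check the resulting response at a few frequencies.
w, h = freqz(b, a, worN=[1000, 10000, 20000], fs=fs)
for f, mag in zip(w, 20 * np.log10(np.abs(h))):
    print(f"{f:>7.0f} Hz: {mag:+.2f} dB")
# To use it, run the audio through scipy.signal.lfilter(b, a, samples); a
# digital preset plus a bypass switch, as suggested earlier in the thread,
# is just this filter toggled in and out of the playback chain.
```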

Boosting high frequencies is not a solution for hearing loss and it will give an inaccurate impression. The mind provides all the needed boost as the ears age, particularly if one continues to go to those recalibration clinics known as live acoustic concerts.
 
Discriminating between two variables to identify difference is vastly different from discriminating between two variables in which we are required to apply judgement (i.e. preference).

The former requires the removal of bias, while the latter is dependent upon it. Kahneman & Tversky; Keltner & Loewenstein; Loewenstein & Lerner; and Bechara, Damasio & Damasio have all published on the primacy of emotion in our ability to make decisions requiring judgement.

Discerning a difference does not require any emotional facility. In fact, as ABX testing has repeatedly shown, emotion is an inhibitor to discrimination, and the purpose of ABX testing is simply to establish a statistically significant level of confidence in discrimination - not, it should be noted, an empirically significant one in which the subject’s intentionality is factored in (i.e. they could be guessing/lying and pass).

Bechara, Damasio & Damasio have shown that subjects with deficiencies in the prefrontal cortex (the area of the brain that processes emotion) have difficulty making decisions requiring judgement, even down to simple tasks such as what clothes to put on. (This was of particular interest to me, as I spent two years as the co-ordinator of a youth/young adult drug rehab, where delaying gratification and the ability to discern future consequences were imperative to the residents' chances of long-term recovery.)

What’s more, Bechara et al believe the amygdala (the source of memory processing, decision making and emotion) uses emotion to modulate memory and bias decisions, with no separation between the two functions.

Therefore, any test designed to suppress emotional response from the subject will lead not only to an impairment in judgement (biases) but an impairment in memory access/storage.

This is problematic in light of the fact that listening to music (live or prerecorded) should (hopefully) evoke an emotional response in the subject, whereby the bioregulatory process creates somatic markers associated with memory, and those markers are used by the brain for future decision making and consequence evaluation. If emotional response is suppressed through the testing protocol, in a test of components whose very function is to deliver emotion/intention recorded as sound, then a lesser notion is allowed to gain preference in the hierarchy of decision making.

Harman would like us to believe this is on-axis/off-axis frequency response curves. They take several speakers whose measurements, made with steady-state signals, are compiled into a spatial average. They then play music (amplitude and pitch over time) and ask for subjective preference ratings, where each speaker is rated out of ten. They combine the objective and subjective data, find a correlation between measurements and preference and - voila! Audio science tells us all we need to know about choosing our speakers.

But does it? No, it tells us that people will rate a speaker highly in a comparative environment where the speakers under evaluation vary wildly in their spatial averaging, creating differences to be detected rather than a judgement in which consequences must be considered. Well, duh. Rating differences is not the same as making decisions, because decisions involve emotion and consequence, and there is no emotional facility required in rating a speaker out of 10 based on comparative tonal differentiation. Just detection of difference.

Is Harman taking three or four speakers that have the same or similar spatial averaging and asking for subjective evaluations? No. If it did, the subjects would be forced to evaluate the speakers on other, less measurable, potentially ephemeral and wholly subjective variables - for instance, how each one made them feel, which can be measured via skin conductance response, heart rate, respiratory frequency, skin temperature and facial EMG - and, shock, horror, possibly lead to competing manufacturers' products being rated more highly.

What then? I’d imagine we would see exactly what we see on this forum - the vast majority of us don’t have Harman speakers.

Did we buy them with our eyes, then? Our egos? Our lack of erectile functioning? Very possibly so. We can’t exclude those biases from our decision making processes.

But that’s not to say that Harman’s testing is without problems, or that further investigation from independent researchers wouldn't be valuable.

There is so much wrong with this I couldn't reply until I stopped laughing.

I'll just pick one thing. You say they don't compare speakers that are similar because it might lead to a competitor winning, when in fact they use these comparisons to make sure their own designs score better, not to falsely beat a competitor in the test. Winning a rigged test would only work if the results were widely publicized; instead, they have mostly been coy and reluctant to say what they test against. They then test their own designs to improve them, and they have publicized the work they do to show they have a different, and what they believe is a better, approach.

Finally, go listen to some of the resulting designs. Maybe you will find their approach is excellent. Maybe not.
 
They don't make them like that anymore.

What would be the closest to that approach today: the Reference Recordings and Channel Classics record labels?

* In reply to post number 721
 
There is so much wrong with this I couldn't reply until I stopped laughing.

I'll just pick one thing. You say they don't compare speakers that are similar because it might lead to a competitor winning, when in fact they use these comparisons to make sure their own designs score better, not to falsely beat a competitor in the test. Winning a rigged test would only work if the results were widely publicized; instead, they have mostly been coy and reluctant to say what they test against. They then test their own designs to improve them, and they have publicized the work they do to show they have a different, and what they believe is a better, approach.

Finally, go listen to some of the resulting designs. Maybe you will find their approach is excellent. Maybe not.

Pick all of them. I'm keen to learn. Edumacate me.
 
Harman would like us to believe this is on-axis/off-axis frequency response curves. They take several speakers whose measurements, made with steady-state signals, are compiled into a spatial average. They then play music (amplitude and pitch over time) and ask for subjective preference ratings, where each speaker is rated out of ten. They combine the objective and subjective data, find a correlation between measurements and preference and - voila! Audio science tells us all we need to know about choosing our speakers.

But does it? No, it tells us that people will rate a speaker highly in a comparative environment where the speakers under evaluation vary wildly in their spatial averaging, creating differences to be detected rather than a judgement in which consequences must be considered. Well, duh. Rating differences is not the same as making decisions, because decisions involve emotion and consequence, and there is no emotional facility required in rating a speaker out of 10 based on comparative tonal differentiation. Just detection of difference and giving the degree of difference a number.
That (tonality) is not what you are asked to score. You are asked to score your overall preference for one loudspeaker versus another. Again, this is what we all do day in and day out: we listen, compare and give an opinion of what we like better. You are free to use whatever metric you want in giving that final score - emotion, logic, analysis, whatever. You are not given any bounds, just as in a sighted evaluation.

Is Harman taking three or four speakers that have the same or similar spatial averaging and asking for subjective evaluations? No. If it did, the subjects would be forced to evaluate the speakers on other, less measurable, potentially ephemeral and wholly subjective variables - for instance, how each one made them feel, which can be measured via skin conductance response, heart rate, respiratory frequency, skin temperature and facial EMG - and, shock, horror, possibly lead to competing manufacturers' products being rated more highly.
You mean that if someone presents loudspeakers with the same or similar spatial averaging to me in a sighted evaluation, like we all do, I can't rely on my ears? That I would be relying only on other responses like skin conductance? Can you give an example of two different loudspeakers where my ear would say they sound the same but these other measures say they are different? And why can't I do the same evaluation in these tests?

What then? I’d imagine we would see exactly what we see on this forum - the vast majority of us don’t have Harman speakers.
Remember, the #1 factor in sales of loudspeakers is marketing. Nothing remotely approaches the power of that. So let's not use commercial success in this context. It belies the reality of how the market works.

Did we buy them with our eyes, then? Our egos? Our lack of erectile functioning? Very possibly so. We can’t exclude those biases from our decision making processes.
Of course. Those fall into the category of audio business and marketing, as I mentioned. They are powerful factors in their own right in influencing purchasing decisions. But the job of the NRC/Harman research is not to study those business strategies; it is to study sound. To the extent that we walk around saying we buy things based on the fidelity of how something sounds to our ears, you know, the "live" experience, we can't turn around and say let's throw that factor out and decide on other metrics.

But that’s not to say that Harman’s testing is without problems, or that further investigation from independent researchers wouldn't be valuable.
Many loudspeaker designers rely on this research in producing their loudspeakers. Its value is therefore established, the lack of awareness among high-end consumers notwithstanding.
 
They don't make them like that anymore.

What would be the closest to that approach today: the Reference Recordings and Channel Classics record labels?

* In reply to post number 721

I find that the Channel Classics recordings, especially those made with the Grimm ADC and played back in DSD without PCM down sampling, outclass the classic stereo recordings of the late 1950's and early 1960's when it comes to sonics. The classic early recordings were limited by the available magnetic recording technology.
 
I find that the Channel Classics recordings, especially those made with the Grimm ADC and played back in DSD without PCM down sampling, outclass the classic stereo recordings of the late 1950's and early 1960's when it comes to sonics. The classic early recordings were limited by the available magnetic recording technology.

Just curious, any idea how many records were direct to disc in the '50s and '60s?
I appreciate that it is not common; I am just curious how prevalent it has been over the years and whether it was acknowledged to provide a benefit. (I recently did a post about a double LP soon to be released where one LP is cut direct to disc and the other is cut traditionally from the master tape - both use the same performance with the feed split.)
Cheers
Orb
 
Remember, the #1 factor in sales of loudspeakers is marketing. Nothing remotely approaches the power of that. So let's not use commercial success in this context. It belies the reality of how the market works.

And who has more money to spend on marketing than Harman?
 