How good are today's microphones at capturing air pushed by the music?

I do more research for this site than I do for work.
 
The question of the thread was: are microphones good at capturing air pushed by the music? I guess it depends on how you define “air.” Normally when I think of air, I think of the sound of cymbals cutting through the air such that you actually hear the air being displaced, pushed, moved, etc. Using that definition, I would say the answer is yes. My system excels at reproducing “air,” and I think some of that is due to my speakers being bipolar and radiating high frequencies from the front and back. When I listen to almost any jazz recording, there is plenty of air.

Of course you can also take “capturing air pushed by the music” to include other things besides the air moved by cymbals. And Myles has a point too, and that is microphones aren’t capturing everything there is to be captured and that’s why that recorded music doesn’t sound exactly like live music. However, I have been thinking about this more and more lately and I have been wondering about other causes that lie in our recording chain.

Everyone loves to talk about the evils of compression in recordings, and it certainly can be evil. However, the way our ears hear sound live with no compression, and the resultant dynamic range we hear, seems to be different from the way we hear sounds that have been recorded. What I mean by that is that our ears have pretty incredible dynamic range and we can hear way below the ambient noise floor that exists in nature. I think that changes once sound has been recorded: now we are trying to hear dynamic range that is not only bumping up against the ambient noise floor of mother nature, but also the noise floor of the electronics that made the recording. I think sometimes that compression in recordings helps us hear the dynamic range of what is captured on the recording, assuming it's not overdone. I know that sounds like heresy, and it may well be.
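As a rough illustration of that trade-off, here is a minimal Python sketch with made-up passage levels and an assumed -65 dB noise floor for the recording chain (not numbers from any actual recording): a touch of downward compression plus make-up gain leaves the quiet passages sitting further above the floor.

```python
import numpy as np

# Hypothetical passage levels in dB, from a whisper-quiet verse up to the loudest peak,
# and an assumed combined noise floor for the recording chain.
passage_db = np.array([-60.0, -40.0, -20.0, -6.0])
noise_floor_db = -65.0

def compress(levels_db, threshold_db=-30.0, ratio=2.0):
    """Simple downward compressor: levels above the threshold are pulled toward it,
    then make-up gain restores the original peak level."""
    over = np.maximum(levels_db - threshold_db, 0.0)
    out = levels_db - over * (1.0 - 1.0 / ratio)
    return out + (levels_db.max() - out.max())  # make-up gain

print("headroom over noise floor, uncompressed:", passage_db - noise_floor_db)
print("headroom over noise floor, compressed:  ", compress(passage_db) - noise_floor_db)
```

With those invented numbers the quietest passage ends up about 17 dB above the floor instead of about 5 dB, at the cost of a smaller loud-to-soft span, which is essentially the effect being described.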

I want to give an example. I downloaded the hi-rez uncompressed file of Band on the Run. I was thinking at the time, who the hell would buy the compressed version when you could buy the uncompressed version? I listened to it and it sucked, and I don't think it sucked because it was digital. It sounded bad to me because it almost sounded like it had no dynamic range: the softest sounds and the loudest sounds were spaced too far apart in time, so most of the time it just sounded very quiet. If this were heard live, your ear would have discriminated much better and the effect would have been different.

Now this could be entirely different if we had a recording of someone playing live vs. a multi-track recording. In fact, I don't think any of the vaunted Sheffield Labs D2D recordings have any compression other than the gain riding that has to occur in order not to blow the lacquer when they are cutting it in real time, and that is a form of compression.

I’m starting to think that a small amount of compression is necessary in order to set the level of dynamic range we will hear in a given recording and make the dynamic range sound more natural to our ears. Whether that is the fault of the microphones or the recording gear/medium, I don’t know. It would be helpful if Bruce would chime in here and give us his $.02. I could be dead wrong in my thinking.
 
That's why I suggested reading the books I listed :)
Sorry, I'm not going to buy a book just to find a simple explanation for what you're talking about. There has to be a simple, layman's description somewhere on the net for this apparent mechanism. It's easy to find astonishing performance figures for mics: capable of handling continuous 140dB sound levels, dynamic range of 120dB, noise levels close to 0dB, substantially better than most people's ears, so talking of a loss of 25% is just ridiculous to me ...

Frank
 
Of course you can also take “capturing air pushed by the music” to include other things besides the air moved by cymbals. And Myles has a point too, and that is microphones aren’t capturing everything there is to be captured and that’s why that recorded music doesn’t sound exactly like live music. However, I have been thinking about this more and more lately and I have been wondering about other causes that lie in our recording chain.
Some fascinating points in there, Mark, plenty to respond to! Firstly, it HAS been captured by the mics, and, yes, frequently when played back it doesn't sound real. But it has absolutely nothing to do with the recording chain, and all to do with the playback chain!

The simple answer: most systems aren't capable of going loud without distorting badly. You can't get away from the fact that live sound is loud, and no matter how you fiddle with it, if you don't CORRECTLY reproduce that loudness you'll never get a system to sound real on playback. Yes, it's very easy on most setups to generate a bellowing cacophony of sound, all very heady and initially impressive, but it ain't the real deal! That's not what real music is like, and unless you aim to get a system working without hurling wads of distortion noise into the room it ain't gonna happen ...

Frank
 
By whom is it commonly accepted that microphones "lose" 20 - 25% of what information?

Tim
 
Run out and buy Pat Metheny's "What's It All About" album and you'll see just how good present-day mics can be. This recording was made using an AMT mic and it is just stunning.
 
Hey Myles - the mikes are good enough to pick up the sound and direction of trains traveling under the recording studio that was apparently missed by the recording engineers. :)
 
Hey Myles - the mikes are good enough to pick up the sound and direction of trains traveling under the recording studio that was apparently missed by the recording engineers. :)

Lol! Listen to the Living Stereo recording "Rhapsodies" with Stokowski; you can hear the subway trains underneath on the Tristan & Isolde track. It's probably clearer on the original than on the SACD version.
 
Frankie, Frankie, your definition of distortion is over-reaching, buddy.

Like loudspeakers, mics have frequency response curves and polar patterns. Like loudspeakers, if you want ruler flat, you use them within the flattest part of their operating range and of their EQ (if any). The polar pattern determines phase characteristics. Again, like a loudspeaker, just twisting them can make a big difference in sound quality. The rule of thumb is that dynamic microphones handle high SPL well. Large condensers (which are actually electrostatics; that's what the phantom power is for) capture midrange and HF nuances best but are easily overloaded. Ribbons capture gobs of air. Sound familiar?

So just as when you are assembling a loudspeaker, you choose the driver you think will do the job. If you were to use just one mic to record an event in mono, even if it is an "omni," coverage is never really truly spherical. Add to this the amplitude losses from distance, coupled with the frequency-related losses of that distance and of transmission through air, as well as the cancellations naturally occurring prior to pickup, and the 25% to 30% loss becomes understandable. Of course you won't have this at 1m but we're not talking about specs here, we're talking about actual use.
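To put rough numbers on the distance and directionality part, here is a minimal Python sketch using the textbook first-order polar patterns and free-field inverse-square loss; it deliberately ignores air absorption and room cancellations, which only add to the losses:

```python
import numpy as np

def pattern_gain(theta_deg, pattern="cardioid"):
    """Relative sensitivity of an ideal first-order mic at angle theta (0 = on-axis)."""
    theta = np.radians(theta_deg)
    if pattern == "omni":
        return 1.0
    if pattern == "cardioid":
        return 0.5 * (1.0 + np.cos(theta))
    if pattern == "figure8":
        return abs(np.cos(theta))
    raise ValueError(pattern)

def distance_level_db(d_m, d_ref_m=1.0):
    """Free-field (inverse-square) level relative to the reference distance:
    the level falls roughly 6 dB for every doubling of distance."""
    return -20.0 * np.log10(d_m / d_ref_m)

# Hypothetical example: a source 4 m away and 90 degrees off-axis of a cardioid.
print(f"level change from distance:      {distance_level_db(4.0):.1f} dB")            # about -12 dB
print(f"level change from polar pattern: {20 * np.log10(pattern_gain(90.0)):.1f} dB")  # about -6 dB
```

Roughly 12 dB lost to distance and another 6 dB lost off-axis before the sound ever reaches the diaphragm, which is the kind of pre-transduction loss being described.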

Lastly, I think every audio enthusiast owes it to himself to find a way to listen to raw microphone feeds. Particularly pre-mix feeds all playing at once out of a stereo bus. These represent each mic recorded at optimum level for S/N and placement rationale. You could say this is the point of lowest distortion in the recording chain. What you would also find however is that 99.99999999% of the time it will be a disjointed mess with every feed shouting over each other. Hit the solo and it sounds near perfect. Disengage solos and it's chaos.

Much as we love our gear and pray endlessly for "flat and accurate," we can't take the human factor out because we are human. Even in a system designed and performing as close to this platonic ideal as possible, look at the time-averaged FR of the tracks Bruce has posted. The plots of different songs do not resemble Fletcher-Munson (F-M) curves. They resemble the B&K curves for "preferred" frequency response: higher in the bass, with a gentle slope down to the treble.
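For anyone who wants to look at that kind of long-term curve on their own files, here is a minimal sketch; it assumes NumPy, SciPy and the soundfile package, uses a hypothetical file name, and simply prints the time-averaged spectrum at a few octave-spaced frequencies:

```python
import numpy as np
import soundfile as sf              # assumed dependency for reading the audio file
from scipy.signal import welch

# Hypothetical file name; any track works once folded to mono.
audio, fs = sf.read("track.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)      # mix to mono for a single long-term curve

# Long-term (time-averaged) spectrum of the whole track.
freqs, psd = welch(audio, fs=fs, nperseg=8192)
level_db = 10.0 * np.log10(psd + 1e-20)

# Sample the curve at a few octave-spaced frequencies.
for f in (63, 250, 1000, 4000, 16000):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:>6} Hz: {level_db[i]:6.1f} dB (relative)")
```

A "preferred"-style curve would show the low-frequency points sitting a handful of dB above the high-frequency ones, rather than a flat line.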

The recording is really what the recording team (artists, engineers, producers) committed to as what THEY thought sounded "right" at the time or what THEY thought was the best that could be done, if they actually cared of course. ;)
 
Frankie, Frankie, your definition of distortion is over-reaching, buddy.
Okay, Jack, I'm gonna go technical on ya: from an EE's point of view there is linear distortion, meaning FR and phase aberrations, and non-linear distortion, which is everything else, all that IM and nasty stuff that is a measure of frequencies that shouldn't be there at all in a particular place in the recording. A difference in amplitude of everything, all at once, in perfect "alignment," is NOT a distortion. So I am still at a loss to understand what this microphone "loss" is :)

Linear distortion is fine, no big deal; non-linear is the baddy, you want to get rid of every last trace of him to get good sound ...
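A minimal numerical sketch of that distinction, using an arbitrary 1 kHz test tone and clip level: a pure level change adds no new frequency components, while clipping creates harmonics that were never in the input.

```python
import numpy as np

fs, f0 = 48000, 1000.0
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * f0 * t)        # one second of a clean 1 kHz tone

linear = 0.5 * tone                      # pure level change: linear "distortion"
nonlinear = np.clip(tone, -0.7, 0.7)     # clipping: non-linear distortion

def components(x, floor_db=-60.0):
    """Frequencies present above an arbitrary -60 dB (re peak) analysis floor."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    spec_db = 20.0 * np.log10(spec / spec.max() + 1e-12)
    return np.fft.rfftfreq(len(x), 1.0 / fs)[spec_db > floor_db]

print(components(linear))     # only energy around 1 kHz: nothing new was added
print(components(nonlinear))  # 1 kHz plus odd harmonics (3, 5, 7 kHz ...): new content
```

The second list contains exactly the "frequencies that shouldn't be there at all."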

Particularly pre-mix feeds all playing at once out of a stereo bus. These represent each mic recorded at optimum level for S/N and placement rationale. You could say this is the point of lowest distortion in the recording chain. What you would also find however is that 99.99999999% of the time it will be a disjointed mess with every feed shouting over each other
If the feed is reduced so the amp and transducer, headphones say, are not overloading, then it should not sound a mess. It may sound unbalanced, like listening to a live jazz trio with your head right in the middle of all the drums, but it shouldn't sound a mess. If that were the case, why would a real drummer ever bother playing with other musos? It would sound a mess from his perspective; he might as well go home and play by himself ...

Frank
 
There is some interesting R&D that fits in perfectly with the title of the thread :)
I think it is a combined effort between an academic lab and a commercial venture that is investigating the use of gas and lasers to capture music as a mic; you cannot get closer to the title of the thread than that :)
Going back a few years now, they had a working prototype, but it still needed a lot of further research and development; thermal dynamics etc. are not a problem with the design. A very interesting take on doing a mic.
No idea if they will ever reach a level that would be deemed as high fidelity or usable by studios and others.

Cheers
Orb
 
You're looking only at the signal, Frank. Remember that mics are transducers, so we're looking at pre-transduction acoustic losses. Whatever it converts is whole ... the unadulterated signal, good or bad.

In the close mic'd scenario, precisely it WOULD be imbalanced. Even if the playback chain is on song, as you would put it, you would be getting each channel at a close mic'd perspective. That would definitely be a mess for the brain to work out because nobody has ever had his head from inches to mere feet from everything playing at once, not even band members. Even in purely acoustic music the physical arrangement of musicians is always a consideration, whether vis-a-vis a live audience or a microphone array. Close mic'd multi-track lessened this to a degree, but it also spawned the need for artificial effects to be added later on, like reverbs and delays, to make it sound natural, i.e. how it would sound in real life. In live amplified music, it DOES sound like a mess to the musicians. That's why they have monitors. In big concerts they even have dedicated mixing desks just for monitoring. They have to hear themselves play. :)
 
There is some interesting R&D that fits in perfectly with the title of the thread :)
I think it is a combined effort between an academic lab and a commercial venture that is investigating the use of gas and lasers to capture music as a mic; you cannot get closer to the title of the thread than that :)
Going back a few years now, they had a working prototype, but it still needed a lot of further research and development; thermal dynamics etc. are not a problem with the design. A very interesting take on doing a mic.
No idea if they will ever reach a level that would be deemed as high fidelity or usable by studios and others.

Cheers
Orb

If I'm not mistaken, Edison was able to record sound using light. The intelligence community has had laser microphones for decades. It would be interesting to see what the white coats and ventures will come out with. :)
 
Thanks, Jack, for the additional info - I never knew it was already in use from a practical standpoint, even if the quality is not ideal.
They (the ones I mention) can be used to record music, but the last I read about the R&D, it was rather crude in its quality.

Cheers
 
You're looking only at the signal, Frank. Remember that mics are transducers, so we're looking at pre-transduction acoustic losses. Whatever it converts is whole ... the unadulterated signal, good or bad.
We still haven't got a definition of "losses", I'm afraid -- it's too general a term. Let's replace a mic at a certain position with someone's ear facing the same direction: what's being "lost" now? Is there a difference in the nature of that "defect"?

In the close mic'd scenario, precisely it WOULD be imbalanced. Even if the playback chain is on song, as you would put it, you would be getting each channel at a close mic'd perspective. That would definitely be a mess for the brain to work out because nobody has ever had his head from inches to mere feet from everything playing at once, not even band members. Even in purely acoustic music the physical arrangement of musicians is always a consideration, whether vis-a-vis a live audience or a microphone array. Close mic'd multi-track lessened this to a degree, but it also spawned the need for artificial effects to be added later on, like reverbs and delays, to make it sound natural, i.e. how it would sound in real life.
Good point: the lack of a linking acoustic signature, or the normal enhancement by natural reflections would most likely make it hard for the ear/brain.

In live amplified music, it DOES sound like a mess to the musicians. That's why they have monitors. In big concerts they even have dedicated mixing desks just for monitoring. They have to hear themselves play. :)
Fair enough, but when I talk of listening to live music I never consider there to be a PA component, only the individual amps for instruments that need them to be functional, like electric guitars and pianos. Fascinating, isn't it, that no-one ever says that a musician's amp doesn't sound real, but as soon as another amp and speaker handle that same sound in playback it becomes a source of huge debate ...

Frank
 
We still haven't got a definition of "losses", I'm afraid -- it's too general a term. Let's replace a mic at a certain position with someone's ear facing the same direction: what's being "lost" now? Is there a difference in the nature of that "defect"?


Good point: the lack of a linking acoustic signature, or the normal enhancement by natural reflections would most likely make it hard for the ear/brain.


Fair enough, but when I talk of listening to live music I never consider there to be a PA component, only the individual amps for instruments that need them to be functional, like electric guitars and pianos. Fascinating, isn't it, that no-one ever says that a musician's amp doesn't sound real, but as soon as another amp and speaker handle that same sound in playback it becomes a source of huge debate ...

Frank

Same thing that's lost when you move away. 6dB for every doubling of distance. It isn't linear across frequencies either. You also lose the reflections blocked by your noggin coming from the other side and those warped by your face on one side and pinnae on the other. That kind of loss, Frank. That's what I was saying about the polar pattern not being truly spherical even in an omni. There's always a bit of directionality, which means there are hot spots and dead spots. The losses are the dead spots. All together these are summarized as the polar response and frequency response patterns. One can be a 3-wood and another a sand wedge. You just gotta use what the situation calls for in CREATING the signal. This is so because the diaphragm itself isn't spherical, and unless you could make it self-contained, equally sensitive at all frequencies at all degrees, and able to levitate, there'll always be something blocking it somewhere. :)

A guitar amp sounds real live because it is real. It is what it is. My speakers and system aren't Marshalls, so making them, or any system for that matter, sound convincingly like one would require quite a bit of work, dontcha think?
 
Same thing that's lost when you move away. 6dB for every doubling of distance. It isn't linear across frequencies either.
Okay, we're just talking about amplitude losses then, with straightforward linear distortion via FR variation. Something that the ear/brain has to handle all the time in real life, therefore there is no problem with the mics capturing the sound sufficiently for the mind to perfectly recreate the musical event, if the playback system is working correctly.

A guitar amp sounds real live because it is real. It is what it is. My speakers and system aren't Marshalls, so making them, or any system for that matter, sound convincingly like one would require quite a bit of work, dontcha think?
Jack, what comes out of a Marshall amp's speakers is sound, vibrations of the air, just like the human voice or a piano initiates, produced in fact by fairly crappy speaker drivers by the standards of high-end speaker systems. There is nothing magic about that sound (unless you're a Marshall freak, perhaps); it can be broken down by frequency analysis into its component parts and looked at like any other "noise". It has distinctive tonal qualities, and tends to be subjectively loud and intense, but that's all there is to it. If a high quality playback system can go loud cleanly it should have no trouble reproducing that sound completely accurately. Or do you believe in magic, perhaps? :)

Frank
 
Of course you won't have this at 1m but we're not talking about specs here, we're talking about actual use.

1m is about right for actual use of a microphone. I'm not questioning the notion that microphones aren't perfect, but this notion that it is widely accepted that they lose 25% of the information just sounds like something someone pulled out of their...hat. What information? Which mics? Widely accepted by whom? I've been using mics all my life and this one is new to me. They are all imperfect, though some are awfully good at capturing everything when used properly. But they are all pretty different too, have different applications, different imperfections. I'd really like to know where that common knowledge came from.

Tim
 
It was a rule of thumb in the days before close-miked multi-tracking, Tim.
 
