Sorry, I'm not going to buy a book just to find a simple explanation for what you're talking about. There has to be a simple, layman's description somewhere on the net for this apparent mechanism. It's easy to find astonishing performance figures for mics: capable of handling continuous 140dB sound levels, a dynamic range of 120dB, noise levels close to 0dB, substantially better than most people's ears manage, so talking of a loss of 25% is just ridiculous to me ...

That's why I suggested reading the books I listed.
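For what it's worth, those spec-sheet numbers hang together arithmetically: dynamic range is normally quoted as the maximum clean SPL minus the equivalent self-noise level. A trivial sketch with illustrative numbers (the 20 dB(A) self-noise figure is my assumption, not from the post):

```python
def dynamic_range_db(max_spl_db: float, self_noise_db: float) -> float:
    """Usable dynamic range of a mic: loudest clean level minus
    equivalent self-noise level, both in dB SPL."""
    return max_spl_db - self_noise_db

# e.g. a mic rated for 140 dB SPL max with ~20 dB(A) self-noise:
print(dynamic_range_db(140, 20))  # 120.0, matching the quoted figure
```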
Some fascinating points in there, Mark, plenty to respond to! Firstly, it HAS been captured by the mics, and, yes, frequently when played back it doesn't sound real. But that has absolutely nothing to do with the recording chain, and everything to do with the playback chain!

Of course you can also take “capturing air pushed by the music” to include other things besides the air moved by cymbals. And Myles has a point too: microphones aren’t capturing everything there is to be captured, and that’s why recorded music doesn’t sound exactly like live music. However, I have been thinking about this more and more lately, and I have been wondering about other causes that lie in our recording chain.
Hey Myles - the mics are good enough to pick up the sound and direction of trains traveling under the recording studio, something the recording engineers apparently missed.
Okay, Jack, I'm gonna go technical on ya: from an EE's point of view there is linear distortion, FR and phase aberrations, and non-linear distortion, which is everything else, all that IM and nasty stuff that is a measurement of frequencies that shouldn't be there at all, in a particular place in the recording. A difference in amplitude of everything, all at once, in perfect "alignment", is NOT a distortion. So I am still at a loss understanding what this microphone "loss" is?

Frankie, Frankie, your definition of distortion is overreaching, buddy.
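The EE distinction above can be shown numerically: a purely linear change leaves only the input frequencies in the spectrum, while a non-linearity manufactures new ones (harmonics and IM products) that "shouldn't be there at all". A minimal NumPy sketch, with tone frequencies and the cubic non-linearity as illustrative choices, not anything from the thread:

```python
import numpy as np

fs = 48000                      # 1 s of audio at 48 kHz -> 1 Hz per FFT bin
t = np.arange(fs) / fs
# Two-tone test signal: 1 kHz + 1.3 kHz
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 1300 * t)

def present_freqs(sig, thresh=1.0):
    """Frequencies (Hz) whose FFT magnitude exceeds thresh."""
    spec = np.abs(np.fft.rfft(sig))
    return sorted(int(i) for i in np.nonzero(spec > thresh)[0])

linear = 0.7 * x                # linear "distortion": amplitude change only
nonlinear = x + 0.2 * x ** 3    # soft odd-order non-linearity

print(present_freqs(linear))     # [1000, 1300]: nothing new appears
print(present_freqs(nonlinear))  # harmonics and IM products appear:
                                 # [700, 1000, 1300, 1600, 3000, 3300, 3600, 3900]
```

The linear case only changes how loud the existing frequencies are; the non-linearity puts energy at frequencies that were never in the input, which is exactly the "nasty stuff" the post describes.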
If the feed is reduced so the amp and transducer, headphones say, are not overloading, then it should not sound a mess. It may sound unbalanced, like listening to a live jazz trio with your head right in the middle of all the drums, but it shouldn't sound a mess. If that were the case, why would a real drummer ever bother playing with other musos? It would sound a mess from his perspective, so he might as well go home and play by himself ...

Particularly pre-mix feeds all playing at once out of a stereo bus. These represent each mic recorded at the optimum level for S/N and placement rationale. You could say this is the point of lowest distortion in the recording chain. What you would also find, however, is that 99.99999999% of the time it will be a disjointed mess, with every feed shouting over the others.
There is some interesting R&D that fits in perfectly with the title of the thread.
I think it is a combined effort between an academic lab and a commercial venture, investigating the use of gas and lasers for capturing music as a mic; you cannot get closer to the title of the thread than that.
Going back a few years now, they had a working prototype, but it still needed a lot of further research and development; thermal dynamics etc. are not a problem with the design. A very interesting take on doing a mic.
No idea if they will ever reach a level that would be deemed as high fidelity or usable by studios and others.
Cheers
Orb
We still haven't got a definition of "losses", I'm afraid -- it's too general a term. Let's replace a mic at a certain position with someone's ear facing the same direction: what's being "lost" now, and is there a difference in the nature of that "defect"?

You're looking only at the signal, Frank. Remember that mics are transducers, so we're looking at pre-transduction acoustic losses. Whatever it converts is whole ... the unadulterated signal, good or bad.
Good point: the lack of a linking acoustic signature, or of the normal enhancement by natural reflections, would most likely make it hard for the ear/brain.

In the close-mic'd scenario, precisely, it WOULD be imbalanced. Even if the playback chain is on song, as you would put it, you would be getting each channel at a close-mic'd perspective. That would definitely be a mess for the brain to work out, because nobody has ever had his head inches to mere feet from everything playing at once, not even band members. Even in purely acoustic music the physical arrangement of the musicians is always a consideration, whether vis-à-vis a live audience or a microphone array. Close-mic'd multi-tracking lessened this to a degree, but it also spawned the need for artificial effects to be added later on, like reverbs and delays, to make it sound natural, i.e. how it would sound in real life.
Fair enough, but when I talk of listening to live music I never consider there to be a PA component, only the individual amps for instruments that need them to be functional, like electric guitars and pianos. Fascinating, isn't it, that no one ever says a musician's amp doesn't sound real, but as soon as another amp and speaker handle that same sound in playback it becomes a source of huge debate ...

In live amplified music, it DOES sound like a mess to the musicians. That's why they have monitors. At big concerts they even have dedicated mixing desks just for monitoring. They have to hear themselves play.
Okay, we're just talking about amplitude losses then, with straightforward linear distortion via FR variation. Something the ear/brain has to handle all the time in real life, so there is no problem with the mics capturing the sound sufficiently for the mind to perfectly recreate the musical event, if the playback system is working correctly.

Same thing that's lost when you move away: 6dB for every doubling of distance. And it isn't linear across frequencies either.
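The 6 dB figure follows from the inverse-square law: sound pressure halves with each doubling of distance, and 20·log10(1/2) ≈ -6.02 dB. A minimal sketch in plain Python (free-field only; the frequency-dependent air absorption mentioned above, which hits high frequencies hardest, is deliberately left out):

```python
import math

def spl_drop_db(d_ref: float, d: float) -> float:
    """Free-field SPL change (dB) moving from distance d_ref to d.

    Inverse-square law: pressure falls as 1/d, so level changes by
    20*log10(d_ref/d). Air absorption is ignored here.
    """
    return 20 * math.log10(d_ref / d)

# Each doubling of distance costs about 6 dB:
print(round(spl_drop_db(1, 2), 2))   # -6.02
print(round(spl_drop_db(1, 4), 2))   # -12.04
print(round(spl_drop_db(1, 8), 2))   # -18.06
```

In a real room the drop is shallower once you pass the critical distance and reflections dominate, which is another reason a mic position and a listening position don't "lose" the same things.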
Jack, what comes out of a Marshall amp's speakers is sound, vibrations of the air, just like the human voice or a piano initiates, produced in fact by fairly crappy speaker drivers by the standards of high-end speaker systems. There is nothing magic about that sound (unless you're a Marshall freak, perhaps); it can be broken down by frequency analysis into its component parts and looked at like any other "noise". It has distinctive tonal qualities, and tends to be subjectively loud and intense, but that's all there is to it. If a high-quality playback system can go loud cleanly, it should have no trouble reproducing that sound completely accurately. Or do you believe in magic, perhaps?

A guitar amp sounds real live because it is real. It is what it is. My speakers and system aren't Marshalls, so making them, or any system for that matter, sound convincingly like one would require quite a bit of work, dontcha think?
Of course you won't have this at 1m, but we're not talking about specs here; we're talking about actual use.
It was a rule of thumb in the days before close-mic'd multi-tracking, Tim.