So, why insist on an extra box, if the reason is not input switching?
I have two hypotheses.
1. The benefits are due to the added isolation: an interim stage with its own power supply and an added degree of RF reduction.
2. It's an effects box and a good blend of various effects is musically beneficial.
Perhaps it is a combination of both, but with a bias towards 2.
Here are some observations.
Not many seem to like a good resistive attenuator, for example a ladder made from a Shallco switch and Z-foil resistors. This has practically no measurable distortion or microphony, and with lowish resistor values, very low noise.
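To put a rough number on that (the values here are assumed, not any particular product), here is a minimal sketch of a single -20 dB step of a 10k ladder: the Thevenin source impedance comes out under 1 kOhm and its thermal noise over the audio band is a fraction of a microvolt.

```python
import math

def divider(r_top_ohms, r_bottom_ohms):
    """Attenuation in dB and Thevenin output resistance of one divider step."""
    ratio = r_bottom_ohms / (r_top_ohms + r_bottom_ohms)
    atten_db = 20 * math.log10(ratio)
    r_out = (r_top_ohms * r_bottom_ohms) / (r_top_ohms + r_bottom_ohms)
    return atten_db, r_out

def johnson_noise_uv(r_ohms, bandwidth_hz=20_000, temp_k=300):
    """Thermal noise of a resistance in uV rms: sqrt(4*k*T*R*B)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k * temp_k * r_ohms * bandwidth_hz) * 1e6

# One -20 dB step of an assumed 10k ladder
atten_db, r_out = divider(9_000, 1_000)
print(f"attenuation {atten_db:.1f} dB, source impedance {r_out:.0f} ohms, "
      f"noise {johnson_noise_uv(r_out):.2f} uV rms over 20 kHz")
```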
What is universally liked is either an active gain stage, or a high-distortion passive attenuator: for example a magnetic one, which has all sorts of distortion mechanisms, especially at low frequencies, or one based on photoresistors, which of course have a rich harmonic spectrum.
As Einstein put it,
Make everything as simple as possible, but not simpler
Your hypotheses are incorrect. Here's what is going on (sorry for the long post):
A passive system can be too simple, and is highly system dependent. You have to be very careful about cables, the output impedances of sources, and the input impedances of amps. A common problem is that anything less than full volume results in a loss of impact, frequently more noticeable in the bass.
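To put rough numbers on that (the values below are assumed, typical ones, not any particular system): the pot's worst-case Thevenin impedance works against the cable capacitance at the top of the band, while the source's output coupling cap works against the pot's value at the bottom, so picking the pot's value just trades one problem for the other.

```python
import math

C_CABLE = 2.0 * 100e-12      # assumed 2 m of ~100 pF/m interconnect
C_COUPLING = 0.22e-6         # assumed source output cap, sized for a 47k+ load

for pot_ohms in (10_000, 100_000):
    r_out = pot_ohms / 4                                 # worst-case output impedance (mid rotation)
    f_treble = 1 / (2 * math.pi * r_out * C_CABLE)       # high-frequency -3 dB corner
    f_bass = 1 / (2 * math.pi * pot_ohms * C_COUPLING)   # low-frequency -3 dB corner
    print(f"{pot_ohms // 1000}k pot: bass corner {f_bass:5.1f} Hz, "
          f"treble corner {f_treble / 1000:6.1f} kHz")
```

With these assumptions a 10k pot pushes the bass corner up into the audible range, while a 100k pot brings the treble corner down toward the audio band; neither choice is free.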
A preamp thus has four functions, two of which a passive cannot provide:
1) provide input switching
2) provide volume and balance control
3) provide any needed gain (very handy if you are running a tape or LP system)
4) control the effects of the interconnect cables (if you've ever auditioned cables and heard a difference between them, you know what I'm talking about).
Of these, 4) is the least understood, even by preamp designers. And we have a multi-billion-dollar-a-year cable industry here in the US that thrives on this ignorance.
Personally I find it annoying to audition cables to get the 'right' one. The technology to prevent interconnect cable artifacts was laid down in the late 1940s, and it was only the need for cheap home audio equipment in the 1950s that kept it from being adopted in the home as well. I am talking about the balanced line system, without which the golden age of stereo would never have flowered. This is because running a 50 or 100 foot microphone cable to get the microphones perfectly placed requires an interconnect cable that won't color the sound. The transformers needed are why this system wasn't used in the home: transformers are expensive.
Now Robert Fulton is the guy who, back in the 1970s, started the whole high-end audio cable business. I knew him, and it really rankled that I could hear his interconnect cable actually sounding better than the average fare at the time.
But plain and simple, it's throwing money at the problem rather than expertise. Plus the recording companies, and prior to that the phone company, had already provided the expertise that eliminated interconnect cable problems; but as I said, it was 'too expensive' for the home.
These days it is not. High-end audio is all about how good it can get; it isn't driven by cost, it's driven by intent.
Of course the balanced line system, unlike single-ended RCA cables, has a connection standard. These days it's known as AES48. In addition to AES48, the balanced line system is also low impedance. In the old days, the 600 Ohm standard came from the spacing of the conductors in free air: those three wires you see on older telephone poles (these days more likely seen along back two-lane highways). Those three wires are the inverted and non-inverted connections plus ground. Although the 600 Ohm figure has gone by the wayside in studio gear, input impedances are still kept low to help swamp out cable artifacts (caused by capacitance and its dielectrics as well as inductance).
Most preamps I've seen that are balanced do not support AES48, and they certainly aren't low output impedance (I've seen some that can't drive less than 30 kOhms!). To support AES48 the source can't reference ground in any way. In the old days this was usually done with a line transformer with no center tap (it's a common myth that you need a center tap on the transformer to do balanced line). A center tap reduces the Common Mode Rejection Ratio and opens the equipment up to ground loops, the prevention of which is also one of the goals of the balanced line system. The transformer also allows for low impedance operation.
Because the balanced line standard is rarely supported in high-end audio to this day, you get this rather silly conversation about which is better, single-ended or balanced. That would never happen if the balanced connection were properly implemented.
Here are some tenets of the balanced standard:
1) the source does not reference ground (examples include phono cartridges and tape heads)
2) the non-inverting output on pin 2 is not generated with respect to ground; it's generated with respect to pin 3, which is the inverting output. This can be satisfied by the secondary winding of an output transformer that simply has its connections to pins 2 and 3 of the XLR output, with pin 1 (ground) being the chassis.
3) the signal travels in a twisted pair in a cable, with a shield (the latter being tied to pin 1).
4) the impedance to ground of either connection will be the same. This is why transformers can work so well, since a winding's impedance to ground is near infinity. But differential amplifiers can be used as inputs with impedances to ground that are considerably lower; what's important here is that the CMRR (Common Mode Rejection Ratio) be kept high (there's a rough numerical sketch of this after the list).
5) the source impedance and the input impedances are all kept low; at a bare minimum the source should be low impedance and should be able to drive 1 or 2 kOhms with no worries.
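Here is a rough numerical sketch of point 4 (the values are assumed, purely for illustration): noise that couples equally into both legs cancels at a differential input, and the cancellation falls apart quickly once the impedances to ground no longer match.

```python
import math

V_NOISE = 0.1        # assumed 100 mV of hum/RFI common to both conductors
Z_NOISE = 10_000     # assumed source impedance of that noise

def coupled(z_leg_to_ground):
    """Noise voltage reaching one leg through the divider Z_NOISE / z_leg."""
    return V_NOISE * z_leg_to_ground / (Z_NOISE + z_leg_to_ground)

# Matched impedances to ground: the difference (what the diff amp sees) is zero
matched = abs(coupled(20_000) - coupled(20_000))
# A 5% mismatch lets some of the common-mode noise through
mismatched = abs(coupled(20_000) - coupled(19_000))

print(f"matched legs: {matched * 1e3:.3f} mV of noise survives")
print(f"5% mismatch:  {mismatched * 1e3:.2f} mV survives "
      f"(~{20 * math.log10(V_NOISE / mismatched):.0f} dB rejection)")
```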
Common problems with balanced line equipment in high end audio:
1) the RCA outputs are connected to the XLR connections used as outputs. IOW the RCA connection is simply one of the XLR output pins. This means that the XLR output pins 2 and 3 are referencing ground. This opens the circuit up to ground loops, since ground currents are not ignored. It also means that the construction of the interconnect cable might be a lot more important (hence also more expensive). If such a connection is used, using both pin 2 and pin 3 together will yield 6 dB greater output, since the output voltage is doubled. In a proper balanced line connection this does not happen (another common myth debunked here)! The arithmetic is sketched after this list.
2) the output impedance of the source (for example, a preamp) is high. This means it can only drive high impedances as well. Remember that 600 Ohm thing I mentioned? If there is an output (male) XLR connection on the equipment, it really should be able to drive loads as low as 1 or 2 kOhms.
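To put numbers on both of those points (the impedances below are assumed, just for illustration):

```python
import math

# Taking pins 2 and 3 of a ground-referenced output together doubles the voltage:
print(f"doubled voltage = {20 * math.log10(2):.2f} dB more output")

# Level lost when a source output impedance drives a given input impedance:
for z_out in (100, 600, 2_000):        # assumed source output impedances, ohms
    for z_in in (2_000, 10_000):       # assumed amplifier/preamp input impedances, ohms
        loss_db = 20 * math.log10(z_in / (z_out + z_in))
        print(f"{z_out:>5} ohm source into {z_in:>6} ohm load: {loss_db:6.2f} dB")
```

With these assumptions a 100 ohm source barely notices a 2 kOhm load, while a 2 kOhm source loses a full 6 dB into it, which is why the standard expects low output impedances.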
Obviously, a passive control can't support the balanced standard. So you're stuck having to audition cables, often running shorter distances, and having to carefully watch impedances to make sure the source can drive the amplifier properly with the passive inserted. It's simply too simple.
I get the whole KISS thing! If you want to do a simple system, there are two ways around this problem. The first is, if you have a DAC, to make sure its active circuitry includes a good volume control system and input switching; since you need a bit of gain to make a DAC work, that gain could be used with other sources too, and then make sure that it supports AES48. The second way is to build the volume control into the amplifier. You'll need a remote if running monoblocks... Neither approach has won over anyone in the last 40 years of digital, so don't hold your breath.
So, as a result, active line stages still have a role to play.