Early Fisher amplifiers and other stereo amplifiers provided a switch that inverted the polarity of just one channel, allowing easy correction of out-of-phase speaker wiring without disconnecting a wire at one end. At that time absolute phase wasn't even a consideration.
Looking at how objects vibrate, instruments fall into two categories. For string and percussion instruments the term absolute phase has no meaning: different parts of the vibrating element that creates the sound are moving in opposite directions at the same time, so depending on where you stand relative to the instrument, the first sound you hear may be either a compression wave or a rarefaction wave. Try walking around a harpist plucking a single note again and again: can you tell by listening which side of the harp you are on? If you walk around a violin or cello while it is playing, with every note bowed in the same direction, can you tell which side of the instrument you are on? No, because if you could see the string in slow motion it would look like wiggling wet spaghetti, different parts of it moving in different directions at the same time. The same goes for struck instruments like a drum head. The membrane of a struck drum looks like the surface of the ocean, some parts moving up while others move down; this is how it creates its overtones. The mathematical description of the membrane's vibration modes is a Bessel function.
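The "wiggling spaghetti" point can be sketched numerically. The snippet below models an idealized fixed-fixed string as a sum of standing-wave modes (the mode amplitudes here are made-up illustrative values, not a real pluck spectrum) and shows two points on the same string moving in opposite directions at the same instant:

```python
import math

def string_velocity(x, t, mode_amps=((1, 0.3), (2, 1.0)), length=1.0, c=1.0):
    """Transverse velocity of an idealized fixed-fixed string.

    mode_amps is a sequence of (mode number n, amplitude) pairs.
    Each mode is a standing wave y_n = a_n * sin(n*pi*x/L) * cos(w_n*t),
    so its velocity contribution is -a_n * w_n * sin(n*pi*x/L) * sin(w_n*t).
    Amplitudes are illustrative, not a real pluck spectrum.
    """
    v = 0.0
    for n, a in mode_amps:
        k = n * math.pi / length  # spatial wavenumber of mode n
        w = c * k                 # angular frequency of mode n
        v += -a * w * math.sin(k * x) * math.sin(w * t)
    return v

# At the same instant, two points on the string move in opposite
# directions -- there is no single "pressure direction" to preserve.
t = 0.1
print(string_velocity(0.25, t))  # negative: this point moves one way
print(string_velocity(0.75, t))  # positive: that point moves the other
```

Because the listener's ear sums the radiation from all these oppositely-moving regions, which region dominates depends on where the listener stands.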
However, for spoken word, singing, or horns (brass and reed instruments), the first-arriving wave is always a compression wave, because these sounds are made by exhaling, never inhaling. Some people are sensitive to this difference; some aren't.
For older recordings made before this became a concern, you have roughly a 50-50 chance of "getting it right." With multi-miking, you may find some instruments on the same recording in phase while others are out of phase. In the recording/mastering/record-cutting process, the signal passes through many amplifier gain stages, and in the two most widely used configurations, the common-cathode tube circuit and the common-emitter transistor circuit, each stage inverts the signal between its input and its output. The net polarity therefore depends on how many such stages were in the circuit and how the microphone was wired. With typically up to 24 channels, where modules could be bypassed or inserted into the circuit, it's the luck of the dice.
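The bookkeeping here is just parity: an odd number of inverting stages flips polarity, an even number restores it. A minimal sketch (the function name and boolean-per-stage representation are my own, purely illustrative):

```python
def net_polarity(stage_inverts):
    """Return +1 if the signal chain preserves polarity, -1 if it inverts.

    stage_inverts: sequence of booleans, one per gain stage in the
    signal path (True = that stage inverts, as a common-cathode tube
    or common-emitter transistor stage does).
    """
    inversions = sum(1 for inv in stage_inverts if inv)
    return -1 if inversions % 2 else +1

# Three inverting stages: odd count, so the chain inverts overall.
print(net_polarity([True, True, True]))        # -1
# One more inverting stage makes the count even; polarity is restored.
print(net_polarity([True, True, True, True]))  # +1
```

This is why different channels on the same multitrack recording can end up with different polarities: bypassing or inserting one module changes the parity for that channel alone.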
Phase and polarity are directly related: when a speaker is wired with one polarity, a positive-going voltage moves the cone forward, creating a compression wave; with the wiring reversed, it moves backward, away from you, creating a rarefaction wave. The overall result is the net total of ALL the gain-stage inversions from the microphone output to your speaker input. Does it matter? If you are exquisitely sensitive to it, I suppose it could, though for most people this is a relatively insignificant distortion, if they can hear it at all. Personally, I can't hear the difference.
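The audible consequence for breath-driven sounds can be sketched in a few lines. The sample values and helper below are hypothetical, just to show that one extra inversion anywhere in the chain turns a leading compression into a leading rarefaction:

```python
def leading_edge(samples):
    """Sign of the first nonzero sample: +1 means a compression arrives
    first, -1 a rarefaction (assuming the speaker is wired so that a
    positive sample pushes the cone toward the listener)."""
    for s in samples:
        if s != 0:
            return 1 if s > 0 else -1
    return 0  # all-zero signal

breath_note = [0.0, 0.1, 0.3, 0.2, -0.1, -0.2]  # made-up horn-like onset
inverted = [-s for s in breath_note]            # one extra inversion

print(leading_edge(breath_note))  # 1: compression arrives first
print(leading_edge(inverted))     # -1: now rarefaction arrives first
```

For a string or drum recording, by contrast, the leading edge already differs with microphone position, so this flip has no well-defined "correct" direction.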
Thus an inability to hear a given thing may invalidate some arguments about perception: the capacity to reflect on something you cannot perceive is, in the most basic sense, missing. As mentioned elsewhere, the ear/brain system is plastic, not set in stone. If it were set in stone, we humans would be irrevocably hardwired, unchanging boxes of wiring that fell off a cliff long ago. We do carry some of that hardwired aspect, which is tied to some of humanity's issues, as it were, but overall the system is plastic and mutable, with the capacity to change and learn. That means you can teach yourself to hear and recognize absolute phase, if you desire to do so.