The real topic of this thread is timbre: what makes sounds similar or different in ways besides pitch and loudness. What lets us decode timbre is the steady part of the tone, the way the spectrum changes over time, and the initial attack. We can even recognize timbre from a tone that has had the fundamental filtered out, thanks to the ear's ability to pattern-match the harmonic series. So I would say Redbook does pretty well at it, at least for guys who can't hear past 10 or 12 kHz. For those who can hear out to 20 kHz, and where the timbre actually contains those frequencies, then yes, we all know Redbook is close but not fully there, and that guy might notice the timbre has changed some due to the cutoff at around 20 kHz in Redbook.
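If anyone wants to see that missing-fundamental pattern-recognition effect in numbers, here's a quick toy sketch (my own made-up tone, a 200 Hz fundamental that is entirely absent from the signal): a simple autocorrelation, one stand-in for a pattern-matching pitch model, still lands on the fundamental period.

```python
import numpy as np

fs = 8000          # sample rate in Hz, arbitrary for the demo
f0 = 200           # the "missing" fundamental in Hz
t = np.arange(fs) / fs  # one second of time points

# Harmonics 2 through 5 only -- there is zero energy at 200 Hz itself
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# Autocorrelation: the strongest non-zero-lag peak falls at the
# fundamental period, which is how a pattern-matching model can
# "hear" 200 Hz even though the fundamental was filtered out
ac = np.correlate(x, x, mode='full')[len(x) - 1:]
lag = np.argmax(ac[20:fs // 2]) + 20   # skip the trivial zero-lag peak
print(fs / lag)   # -> 200.0, the period of the absent fundamental
```

The peak lands at a lag of 40 samples, i.e. exactly 1/200 s, even though a spectrum analyzer would show nothing at 200 Hz.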
Yes, if we 'believe' in the theory of sampled digital audio (with dither), then it should be pretty straightforward: we lose everything above 20 kHz, so dogs and a small proportion of young children may spot a change, but surely they will notice much bigger timbre changes arising from the phase distortion of uncorrected speakers. Again, if we 'believe', then the only other artefact is a tiny, tiny amount of random noise.
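For anyone who wants to poke at the 'belief' directly: the theory says a 44.1 kHz sample stream captures everything below Nyquist (22.05 kHz) exactly, not approximately. A little numpy sketch (my own toy numbers) reconstructing an 18 kHz tone between the sample points via Whittaker-Shannon sinc interpolation shows the error is down at truncation-level, nothing to do with the tone being "near the top" of the band:

```python
import numpy as np

fs = 44100.0
f = 18000.0  # tone well inside the audio band, but close to the top
n = np.arange(0, 2000)
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon reconstruction at off-grid times:
# x(t) = sum_n x[n] * sinc(fs*t - n), with numpy's normalized sinc
t = (np.arange(400, 600) + 0.37) / fs  # arbitrary points between samples
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
exact = np.sin(2 * np.pi * f * t)
err = np.max(np.abs(recon - exact))
print(err)  # tiny, limited only by the finite summation window
```

The residual shrinks further the more samples you include; the "stairstep" picture of digital audio simply isn't what the theory describes.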
Personally, I do 'believe' the theory. For the believers among us, discussions about high res versus CD format are moot - the only difference is whether the amount of random noise is very, very tiny or very, very, very tiny. I believe so strongly in the theory that if I were to hear a difference between CD and high res, I would 'know' that it was either my ears/mood/expectation bias, or that one of the systems was defective in some area unrelated to bit depth or sample rate. Life is pretty simple when you accept the science!
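To put rough numbers on "very very tiny" versus "very very very tiny": the textbook SNR of an ideal dithered quantizer with a full-scale sine is about 6.02 x bits + 1.76 dB (a standard approximation, not anything specific to one format):

```python
# Theoretical full-scale SNR of an ideal dithered quantizer, in dB
def snr_db(bits):
    return 6.02 * bits + 1.76

print(snr_db(16))  # about 98 dB: the CD-format noise floor
print(snr_db(24))  # about 146 dB: far below any playback chain's analog noise
```

Either figure sits well under the noise of real rooms and real electronics, which is the whole point: the bit-depth difference is an argument about two noise floors nobody can hear.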