Okay, best I can tell, the guy took music in a 96/24 format, converted it to 44/16, and then back to 96/24. He then subtracted the converted result from the original 96/24 file to see what differences remained, i.e., to determine what distortion 44/16 introduced compared to the original. The differences he shows as an FFT are not what that procedure would actually produce. They might look like that if his conversion software altered the volume level by a fraction of a dB (which a good many do). An FFT of such a difference isn't distortion; it's simply a mistake in procedure. Done properly, this exercise will show nothing below 20 kHz except very low-level noise from the dithering done during the conversions, plus whatever high-frequency info in the original was filtered out at the 44.1 kHz stage. So, a fail at step one.
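To make the pitfall concrete, here's a minimal sketch of a proper null test in Python, assuming the soundfile and scipy packages; the file name, the 16-bit rounding without dither, and the least-squares gain match are my illustrative assumptions, not his actual method. The key line is the gain match before subtracting, which is the step he appears to have skipped:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

orig, fs = sf.read("original_96_24.wav")        # hypothetical 96 kHz / 24-bit source
mono = orig if orig.ndim == 1 else orig[:, 0]   # one channel keeps the sketch simple

# Round trip: 96 kHz -> 44.1 kHz -> 96 kHz (44100/96000 reduces to 147/320),
# with a plain 16-bit quantization in the middle (no dither in this sketch)
down = resample_poly(mono, 147, 320)
q16 = np.round(down * 32767.0) / 32767.0
up = resample_poly(q16, 320, 147)

# Trim to a common length, then match levels before subtracting; a
# fraction-of-a-dB gain error here swamps the real residual
n = min(len(mono), len(up))
a, b = mono[:n], up[:n]
gain = np.dot(b, a) / np.dot(b, b)              # least-squares gain match
residual = a - gain * b

print(f"gain correction: {20*np.log10(gain):+.3f} dB")
print(f"residual RMS:    {20*np.log10(np.std(residual) + 1e-12):.1f} dBFS")
```

With the gain match in place, the residual is just dither noise plus the filtered-out ultrasonics; without it, you get a scaled copy of the music itself, which is what his FFT looks like.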
He then uses, as best I understand him, this FFT info to create a graph of distortion by frequency for CD. Of course that isn't what the data represents, for one thing, and it isn't comparable to an FFT of a totally different signal either.
He then does a conventional harmonic-distortion measurement of a sine wave played from an LP, and creates a distortion graph for that, which he compares to his graph of "distortion" for CD, even though the CD chart isn't really a distortion graph in the first place.
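For contrast, this is roughly what a conventional measurement like his LP test actually computes: the ratio of harmonic energy to fundamental energy in an FFT of the recorded sine. A rough sketch follows; the 1 kHz tone frequency, harmonic count, window, and bin-search tolerance are all illustrative assumptions on my part:

```python
import numpy as np

def thd_percent(x, fs, f0=1000.0, n_harmonics=5):
    """THD of a recorded sine x at sample rate fs, as a percentage."""
    win = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * win))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

    def peak(f):
        # Peak magnitude near frequency f, tolerating slight speed/pitch error
        idx = np.argmin(np.abs(freqs - f))
        return spec[max(idx - 2, 0): idx + 3].max()

    fund = peak(f0)
    harm = np.sqrt(sum(peak(f0 * k) ** 2 for k in range(2, n_harmonics + 2)))
    return 100.0 * harm / fund
```

That's a single number per test tone, built from the harmonics of one known signal. Comparing that against an FFT of a botched file subtraction is apples to oranges.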
On top of that, his descriptions and graphs look garbled or are missing info about what he did and what they represent. Regardless of any of that, his first step, the 96-to-44.1-and-back conversion, appears to be botched, which invalidates everything thereafter.
Thanks for that explanation, very helpful.