Amir explained my position quite well (thanks, Amir!). However, we're talking about at least two different things here, and this is going to get deeper than most might appreciate (though barely entering the engineering realm)...
1. Impedance mismatches cause reflections that move the edges and zero-crossings used to extract the clock from the data stream. This is what cables can impact: the length of the cable determines when the reflections from those mismatches arrive back at the source or load. If a "bounce" lands in the middle of a "1" or "0", and assuming it is not too large, the impact is negligible. If the bounce hits during a transition (an edge, a change in state), it changes the effective bit period and thus the recovered clock frequency. This is deterministic jitter.
You can adjust where the bounce hits by adjusting the cable length for a given data rate. Thus these "magic" cable lengths. The catch is that it's not just the length that matters; there are other important parameters even within the cable, and of course we (consumers) do not know the actual source and load impedances, nor the length of the internal wiring from the connectors to the internal buffers, let alone the detailed characteristics of the waveforms, so in reality I find such claims somewhat dubious. How's that for waffling?
That means cable length might have an effect, but the "best" length is likely to be a little different for every system.
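To put rough numbers on that, here is a little Python sketch. It is a toy, single-bounce model, and every value in it is an assumption I picked for illustration (44.1 kHz S/PDIF cell timing, a 0.7 velocity-factor cable, a 10 ns edge, a 5% reflection), not a measurement of any real system. The point is just the mechanism: the length sets when the bounce comes back, a bounce that lands mid-cell does almost nothing, and a bounce that lands on the transitions moves the zero-crossings around with the data pattern.

[code]
import numpy as np

FS_AUDIO = 44_100                 # assumed S/PDIF running at 44.1 kHz
CELL = 1.0 / (128 * FS_AUDIO)     # shortest biphase-mark cell, ~177 ns
VF, C = 0.7, 3.0e8                # assumed cable velocity factor; light speed
RISE = 10e-9                      # assumed edge time scale
RHO = 0.05                        # assumed strength of the single bounce

rng = np.random.default_rng(0)
gaps = rng.integers(1, 3, 150) * CELL     # transitions 1 or 2 cells apart
tk = 5 * CELL + np.cumsum(gaps)           # transition instants
sgn = 1 - 2 * (np.arange(tk.size) % 2)    # edges alternate rising/falling

t = np.arange(0.0, tk[-1] + 5 * CELL, 0.5e-9)

def line(tt):
    """Idealized biphase-mark-like waveform: sum of alternating tanh edges."""
    v = np.full_like(tt, -1.0)
    for tc, s in zip(tk, sgn):
        v += s * (np.tanh((tt - tc) / RISE) + 1.0)
    return v

def edge_jitter_pp(length_m):
    d = 2.0 * length_m / (VF * C)         # round trip to the far end and back
    v = line(t) + RHO * line(t - d)       # incident wave plus one reflection
    devs = []
    for tc, s in zip(tk[4:-4], sgn[4:-4]):
        w = (t > tc - CELL / 2) & (t < tc + CELL / 2)
        tt, vv = t[w], s * v[w]           # flip falling edges to look rising
        i = np.where(np.diff(np.sign(vv)) > 0)[0][0]
        t_meas = tt[i] - vv[i] * (tt[i+1] - tt[i]) / (vv[i+1] - vv[i])
        devs.append(t_meas - tc)          # zero-crossing error vs. nominal
    devs = np.asarray(devs)
    return 1e12 * (devs.max() - devs.min())

# 18.6 m makes the round trip about one cell, so the bounce lands on edges
for L in (1.0, 1.5, 5.0, 18.6, 25.0):
    print(f"{L:5.1f} m: {edge_jitter_pp(L):7.1f} ps p-p edge movement")
[/code]

In this toy model the short cables show essentially no pattern-dependent movement and the lengths that put the bounce on the transitions show plenty. A real link has multiple bounces, mismatches at both ends, and band-limiting, so do not read the specific lengths as gospel; it only shows why "best" length depends on edge rate, impedances, and all the rest.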
2. The clock derived from the data stream is fed to the DAC as its conversion clock. Note that I may have caused some confusion: as a designer of ADC and DAC chips, I do not consider the clock recovery circuit to be in my purview. When I say "DAC" I am talking about the actual DAC core: not the clocking circuits, not the input data buffers, nor even the output buffer circuit. I will generally (always, hopefully) state explicitly when I include those circuits. Critical as they are, they are not, to me, "the DAC". Sorry about that!

Now, there is some sort of clock recovery circuit, and most often that includes a phase-locked loop (PLL) of some sort. As the name implies, it seeks to match an oscillator it controls to an external input signal by aligning their phase. In practice, that means looking at the edges. The data bits can be taken off one part of the PLL, and the clock from another. The input has very wide bandwidth, but the output (control) bandwidth to the oscillator is very low.
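If it helps to see that idea move, here is a toy discrete-time model of such a loop: a phase detector, a proportional-plus-integral loop filter, and a controlled oscillator. All the gains and frequencies are values I made up for illustration; a real clock/data recovery PLL is far more involved.

[code]
import numpy as np

fs = 1.0e6                  # model update rate, Hz (arbitrary)
n = 200_000
t = np.arange(n) / fs

# Input phase: slow wander the loop should track, plus fast jitter it
# should ignore (amplitudes and frequencies are made up for the demo).
phi_in = 0.5*np.sin(2*np.pi*50*t) + 0.05*np.sin(2*np.pi*50_000*t)

kp, ki = 0.02, 1e-5         # assumed loop-filter gains (~3 kHz bandwidth)
phi_out = np.zeros(n)       # phase of the recovered clock
integ = 0.0
for i in range(1, n):
    err = phi_in[i-1] - phi_out[i-1]             # phase detector
    integ += ki * err                            # integral path trims frequency
    phi_out[i] = phi_out[i-1] + kp*err + integ   # oscillator phase update

# Amplitude of each tone in the recovered clock's phase (skip the
# acquisition transient by analyzing only the second half of the record).
seg, tseg = phi_out[n//2:], t[n//2:]
for f, a_in in ((50, 0.5), (50_000, 0.05)):
    a_out = 2 * abs(np.dot(seg, np.exp(-2j*np.pi*f*tseg))) / len(seg)
    print(f"{f:6.0f} Hz: in {a_in:.3f} rad -> out {a_out:.4f} rad")
[/code]

The recovered clock follows the slow 50 Hz wander almost exactly but lets through only a few percent of the 50 kHz jitter: the wide-input, narrow-control behavior in action.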
PLL design for clock and data recovery is a compromise among fast signal capture (wide bandwidth), low sensitivity to incoming noise (low bandwidth), and all the internal parameters (myriads of them). Broadband noise will impact the phase detector at the front end; very low frequency noise will fall within the PLL's bandwidth and cause low-frequency pumping (this is the 100 Hz that was mentioned earlier). The main problem, IMO after ten seconds' thought, is not random jitter. That the PLL will reject pretty durn well, and besides, it just raises the noise floor a little. We will not notice that (at least most of us won't, at reasonable random jitter levels).
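For the curious, the textbook second-order loop makes that compromise explicit. Its jitter transfer is H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2): near unity inside the loop bandwidth, so incoming wander (like that 100 Hz) passes straight through to the recovered clock, and rolling off above it, so fast incoming jitter is rejected. The 1 kHz bandwidth and 0.7 damping below are assumptions just to show the shape:

[code]
import numpy as np

fn, zeta = 1_000.0, 0.7          # assumed natural frequency and damping
wn = 2 * np.pi * fn

def H(f):
    """Classic second-order PLL jitter transfer, evaluated at offset f."""
    s = 2j * np.pi * f
    return (2*zeta*wn*s + wn**2) / (s**2 + 2*zeta*wn*s + wn**2)

for f in (10, 100, 1_000, 10_000, 100_000):
    mag = abs(H(f))
    print(f"{f:7.0f} Hz offset: |H| = {mag:.4f}  ({20*np.log10(mag):6.1f} dB)")
[/code]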
The real problem is deterministic jitter. That type is related (correlated) to the signal and/or clock, and that puts "beat patterns" into the clock that the PLL will not reject. Any transmission system with finite bandwidth (that means anything we can make) will have some amount of deterministic jitter and inter-symbol interference (ISI). The latter is when one bit affects the next, and the one after that, because in the real world there is a little memory in every system. That moves the clock around a little with the signal, and that causes spurs (distortion signals) to pop up out of the noise. This is shown in those jitter threads I keep referring to, off in the technical forum here on WBF.
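A quick way to see the difference: sample a pure tone with a clock whose timing wobbles sinusoidally, a stand-in for jitter correlated with the signal, and discrete sidebands appear at the tone frequency plus and minus the wobble frequency. Every number below is made up, and the 10 ns wobble is exaggerated so the spurs are easy to spot:

[code]
import numpy as np

fs, n = 48_000, 1 << 16
f0, fj, tj = 1_000.0, 300.0, 10e-9   # 1 kHz tone; 300 Hz, 10 ns clock wobble
k = np.arange(n)
t = k/fs + tj * np.sin(2*np.pi*fj*k/fs)   # ideal instants + deterministic wobble
x = np.sin(2 * np.pi * f0 * t)

win = np.hanning(n)
spec = 20 * np.log10(np.abs(np.fft.rfft(x * win)) + 1e-20)
spec -= spec.max()                        # normalize to the carrier (dBc)
freqs = np.fft.rfftfreq(n, 1/fs)

for f in (f0 - fj, f0 + fj):
    i = np.argmin(np.abs(freqs - f))
    peak = spec[max(0, i-3):i+4].max()    # allow for window spreading
    print(f"spur near {f:7.1f} Hz: {peak:6.1f} dBc")
[/code]

Random jitter of similar size would smear that energy into the noise floor instead; deterministic jitter concentrates it into spurs like these, which is what pops up out of the noise in those plots.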
Sorry for going off the deep end, hope it helps - Don