Conclusive "Proof" that higher resolution audio sounds different

What it says is that mastering engineers (sometimes) used more 'audiophile' practices for these 'new formats'*.
No kidding. Why not spell out that 1+1 equals two. That is not and was not the discussion. Here is what you said:

One of several things that strikes me as funny about the 'Meyer&Moran stink' crowd is that before M&M, few were making a fuss about the plain fact that many praised-to-the-skies 'high rez' releases were just rereleases of analog (tape) recordings*. Reviewers would still go ga-ga over the new improved 'high rez' sound -- and attribute it to the format. But when M&M used those (as well as 'pure' high rez) recordings in their tests, suddenly they'd committed an invalidating fail.

And this is the quote from the report again:

Though our tests failed to substantiate the claimed advantages of high-resolution encoding for two-channel audio, one trend became obvious very quickly and held up throughout our testing: virtually all of the SACD and DVD-A recordings sounded better than most CDs—sometimes much better. Had we not “degraded” the sound to CD quality and blind-tested for audible differences, we would have been tempted to ascribe this sonic superiority to the recording processes used to make them.

[...] These recordings seem to have been made with great care and manifest affection, by engineers trying to please themselves and their peers. They sound like it, label after label. High-resolution audio discs do not have the overwhelming majority of the program material crammed into the top 20 (or even 10) dB of the available dynamic range, as so many CDs today do.


Meyer and Moran clearly confirm that the praise for the fidelity of new releases had merit. And they further sanction making that determination using sighted, subjective evaluation. So the excitement among audiophiles had merit according to them, contrary to what you say in your post above.

The rest of the paper provides evidence for the idea (which is thoroughly supported by the actual technical capacities of Redbook vs SACD and DVD-A) that those 'audiophile' practices could be used on CD too, with virtually identical perceptual result.
That is your theory. In this thread we showed that simple resampling of audio can be detected in double blind tests using ABX methodology. The only way to avoid that degradation is to not touch the bits.

The relevance of the paper and your arguments have come and gone anyway. There is no new physical format to talk about. Labels are releasing high-res audio and not waiting for you to sanction them. Nor have you or anyone else ever made a case for why we should get reduced-resolution bits.

Your posts remind me of lone soldiers stuck on islands thinking some war is still going on years after the ceasefire. It is hard for them to accept that their role in the battle has become irrelevant. So we get to relive such matters over and over again.
 
If Redbook mastering 'doesn't work well' in your hands, maybe you're doing it wrong?

...

I can only comment on my own personal experience. I took tapes (reel to reel and cassette) and digitized them and then did various operations to "improve" their sound, or at least what I thought improved their sound. The end result was a file at 96/24 or 88/24 that sounded better than the original analog tape. (In those cases where the original analog tape was of high quality I did a straight digital transfer.) Then when I was quite familiar with this sound, I used the best available software for converting hi-res down to the 44/16 format (iZotope 64 bit SRC and mbit+ dither as in the RX product). The downsampling results were seldom transparent. I tried all kinds of parameter settings as to filters, dither algorithms, etc., but the results were usually unsatisfying. In the end, I realized that by spending lots of time I might make a particular downsampling that improved slightly on my usual parameter settings. But then I realized that the time spent doing this optimization was wasted, as none of the end customers for these releases were audiophiles and would have cared in the slightest for the differences that I was worrying about. I kept my hires master files in each case and these are what I listen to for my own personal use. If any of the customers were to complain about sound quality of the 44/16 "masters" my first response would be to give them a copy of the hi-res master from which the 44/16 version was produced.
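For what it's worth, the two operations being compared here (2:1 sample-rate conversion, then word-length reduction with dither) can be sketched in a few lines of NumPy. This is a generic illustration only: iZotope's SRC filters and MBIT+ noise-shaped dither are proprietary and far more sophisticated than this windowed-sinc half-band filter with plain TPDF dither.

```python
import numpy as np

def downsample_88k2_to_cd(x, rng=None):
    """2:1 decimate 88.2 kHz float audio to 44.1 kHz, then reduce the word
    length to 16 bits with 1-LSB TPDF dither. A crude sketch, not a model
    of any commercial SRC."""
    rng = rng or np.random.default_rng(0)
    # Step 1: low-pass at the new Nyquist (22.05 kHz), keep every other sample
    n = np.arange(-127, 128)
    h = 0.5 * np.sinc(0.5 * n) * np.hamming(255)   # half-band windowed sinc
    y = np.convolve(x, h, mode="same")[::2]
    # Step 2: word-length reduction: triangular-PDF dither, then rounding
    lsb = 1.0 / 32768.0                            # one 16-bit LSB (FS = +/-1)
    tpdf = (rng.random(y.size) - rng.random(y.size)) * lsb
    return np.round((y + tpdf) / lsb) * lsb
```

A real mastering SRC differs mainly in the filter (length, slope, linear vs. minimum phase) and in noise-shaping the dither, which are exactly the parameters discussed above.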

Another way of putting this is that I consider the 44/16 files that I produce to be lower than studio quality. They are a degraded version, in that regard not dissimilar to an MP3. Amir has previously explained that hi-res masters are the norm and that there is no economic reason to waste the time and effort to degrade quality down to a lower standard. This has been obvious to me for at least the past five years. In addition to doing digital transfers and remastering, I am a webmaster for a website that sells downloads (44/16 lossless and MP3). I also run a personal web site that makes available downloads of piano performances by my late wife. As a result, I am aware of the economics associated with the distribution of hi-res music. There is simply no economic reason to degrade sound quality to save bits with today's technology, even if only a fraction of recordings actually benefit from higher resolution (and that has not been my experience at all: 50% of the cassette tape recordings I have worked on benefit from higher digital resolution).
 
those 'audiophile' practices could be used on CD too, with virtually identical perceptual result. Which of course is true - 'audiophile' practices HAVE been used on CDs, to give wide dynamic range and broad bandwidth, sourced from good digital or original master analog recordings.

This is true, but it is oftentimes much more the exception than the rule.
 
It certainly has bearing, since sighted listening is 'much lauded' as a valid way of "proving" the audibility of anything, by Stereophile, and indeed the bulk of the 'audiophile' community, and the hi-end industry itself. It, and not DBT, is, in fact, the standard operating procedure. Mike Lavigne even made that same point in this thread, though he seems to find it a point of pride.

The inherent unreliability of sighted listening is the ten-ton gorilla in the room, always. And you are one of its caretakers.

From the people here who spend so much energy rooting out and denigrating 'bad' DBTs, I'd like to know this:

how much evidentiary value, if any, do you place on the vastly more common method: sighted listening?
This is what you wanted us to respond to? If I am using expert listeners whose job is to evaluate audio, I put a lot of weight on their sighted opinion. Oh, don't be shocked. You know how many times JJ took me to our listening room at Microsoft and played samples for me sighted asking what I thought? Expert listeners can be wrong but they are far less swayed by typical biases of others.

In general, 99% of the listening tests in the industry are sighted. Double-blind tests are time consuming and are only used to double-check the results. We still would not have MP3 if we had sat there performing double-blind tests every time an encoder change was made.

Maybe you are asking how much weight I put on the average audiophile's sighted evaluation. In that case, I pay attention to it. If it violates what I know I can prove otherwise, then I put no weight on it. If I cannot make such proof, then I consider the possibilities. We had a thread here where people said LP demag would improve the sound. I thought it was a silly argument and showed every which way that there was no way it could make a difference. Then Gary made a recording before and after, and my jaw dropped when I heard the clear difference. So I investigated the cause and had Gary run more tests for us, leading to the conclusion that the difference we were hearing was actually generational: the second-generation playback of a new LP actually sounded better than the first. We all learned something, which is a lot more than I can say for your kind of back and forth.

Watch JJ's video that Arny posted earlier. Pay attention to the advice he gives at the end to not ignore anecdotal evidence. That is how we learn. Not by having a closed mind, believing some hobby audio test and stuff you read online. Here it is again:


My question to you is why you think you are so right when this is not your professional field? Can you answer that? Can I learn your science by just wasting time on forums? Why do you think this is such a trivial field as to be summarized by a single audio test?
 
I must admit the last few pages of this oftentimes very yawn-inducing thread has been most entertaining. Keep it up. :)
 
We're actually still waiting for DBTs that demonstrate when and how 'normal levels of jitter' are audible, contra Ashihara. You got 'em? IIRC Amir's been asked but he won't tell.
(And actually, Ashihara et al. is not the only piece of published artillery brandished in the jitter wars. You're doing the brave fighters an injustice to imply that.)
Since you're in a strangely combative mood for some reason, I'll bite. Ashihara et al. made a simple but serious mistake in their experiment - do you know what it is? It's so simple, but I've not seen anyone spot it. The damage it has done has set HiFi back years, with people arguing that high levels of jitter are inaudible.

Those that trust their ears have known better all along. :D

BTW, I gave an example of a reliable sighted listening test in my last post, one which confounded enormous listener bias. It's all documented for posterity.

Your posts remind me of lone soldiers stuck on islands thinking some war is still going on years after the ceasefire. It is hard for them to accept that their role in the battle has become irrelevant. So we get to relive such matters over and over again.
That sums it up beautifully.

Keep smiling, Nick
 
We're actually still waiting for DBTs that demonstrate when and how 'normal levels of jitter' are audible, contra Ashihara. You got 'em? IIRC Amir's been asked but he won't tell.
If that is your question, then you need to go back and study what jitter is. There is no such thing as "normal levels of jitter." It is like asking, "what are the normal levels of bacteria which can be harmful in food?" There is no one bacteria. Nor is there one kind of "jitter" to evaluate. People who assume there is are folks like Ashihara, who thought random jitter was the type of jitter to test. Never mind that a rudimentary understanding of signal processing says random jitter just adds random noise to a file. It doesn't reproduce the jitter of any real product.
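The distinction is easy to show numerically: phase-modulating a sampled tone with sinusoidal (correlated) jitter produces discrete sidebands around the tone, while the same amount of random jitter just raises the noise floor. The jitter magnitudes below are arbitrary illustration values, not measurements of any product.

```python
import numpy as np

fs, f0, N = 48000, 10000, 1 << 14
t = np.arange(N) / fs

# ~2 ns of 1 kHz sinusoidal (correlated) jitter vs ~2 ns RMS random jitter
j_sine = 2e-9 * np.sin(2 * np.pi * 1000 * t)
j_rand = 2e-9 * np.random.default_rng(0).standard_normal(N)

def spectrum_db(jitter):
    """Spectrum of a full-scale tone sampled with the given timing error."""
    x = np.sin(2 * np.pi * f0 * (t + jitter))   # jitter acts as phase modulation
    w = np.hanning(N)
    s = 2 * np.abs(np.fft.rfft(x * w)) / np.sum(w)
    return 20 * np.log10(np.maximum(s, 1e-12))

db_sine, db_rand = spectrum_db(j_sine), spectrum_db(j_rand)
```

With the sinusoidal jitter, distinct sidebands appear at 10 kHz ± 1 kHz; with the random jitter, the same error energy is smeared into a broadband floor. Same "amount" of jitter, completely different spectral signature.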

This is one kind of jitter from a DAC:

[image: jitter spectrum of a DAC]


Showed that to Arny and he says that is audible. You think it is not and needs a listening test?

This is the spectrum of an AVR:

[image: jitter spectrum of an AVR]


And if I showed you another AVR, it would be yet again different. So just like bacteria, there is no one thing called jitter.

Is it my job to go and show these profiles of jitter are audible? No. I am retired from the industry. I write about topics that I like. I like signal processing, audio theory, engineering and research. You are a fan of listening tests but I have yet to see you show the results of a single test you have run. Why don't you go and run the tests? Why not run the tests in this thread and report back how you did?

Our fellow Aussie member and staunch objectivist, whose presence I miss, had this to say to Arny in my first discussion of jitter on AVS Forum:

well, in that case let me thank you [Amir] for your contributions. I KNOW I could not have kept my patience as you have, let alone maintained a sense of humour! It's funny how hard *we* can go to maintain our rightness, and how quickly that line is crossed where we no longer wish to learn (despite our objections to the contrary) where we fight tooth and nail...usually because we know our position is so tenuous that the slightest 'loss' means the whole game is over.

FFS, Amir has sat here page after page and SHOWN how, and under what possible conditions, jitter may be audible. Hey, if it were a cable debate, and we showed with maths and sims that there could not possibly be a difference, well that would have proved it, no? So why the **** in an 'argument' where the shoe is on the other foot does it suddenly become irrelevant what the science says??

My take on what the fear might be is the worry of what might happen if we concede a point of argument. The 'other side' will drive a frickin lorry thru the door if we do. I mean, there only has to be ONE person who hears a power cord (for sake of illustration) in what seems to be a proper test and the whole frickin lot of the rest of them will claim it as proof that they too can hear it.

No they can't, 'one in a million' means just that. But we KNOW every single one of them thinks they can hear it, using that person as proof, and even less urge to test the truth properly. After all it has been shown. So, we had better clamp down HARD on the one ever coming out, if only to keep the lid on the rest.

So, move on to something far less controversial than PCs, but as long as it falls into audiofool territory we had better clamp down on that too. It is just safer that way, keep each and every genie in the bottle. So the need to put amir in his place, and keep the lid hammered on tight. Because the ramifications of this little argument go waaaay past its tiny borders.

"Oh, but amir has not given any evidence of audibility" (apart from the science, you mean? The science that would be perfectly acceptable in a different argument than the one we are talking about???).


Be totally honest here. If he told you that he had found, to his satisfaction, that turning the front panel on and off on his thingamabob had an audible difference, would you accept that? What then his findings of jitter? We know you would not accept his results, the genie is too terrifying to contemplate.

So don't come back at me with 'amir has yet to show audibility' ok? It is a definitional thing you know. Some things, by definition, are inaudible.

Bit like cancer, it cannot be cured hence any cure of cancer is untrue (why we are always then exhorted to donate to cancer research is beyond me). All of you could be right, it may be completely inaudible. But you sure as hell have not shown it by your arguments. Unless 'nanah nanah nah' counts as an argument.


So now all that matters are flawed listening tests run by people who didn't understand the fundamental theory of what they were testing, and how they should have performed the listening tests. Their conclusions may very well be right, but we would lose any "objectivity" credentials we might have by waving their flag and dismissing measurements and explanation of the science as immaterial.
 
Which is another funny thing about the hi rez cheerleaders. Their claims range from 'CD just sounds awful' to 'some CDs sound extremely good...*very close to* hi rez'. Though that's as close as they'll get to admitting that maybe, just maybe, sometimes, CDs can sound *just as good as* their 'hi rez' counterparts. It's like the vinyl crowd, really.

Given a system of reasonable transparency, it's quite easy to determine, in a "sighted" environment, that some RBCDs are equal to and/or sonically superior to some SACDs. So what?

As most people know, the final sonic quality of a CD or record is totally dependent on all the recording chain / mastering / production "details".
 
I used the best available software for converting hi-res down to the 44/16 format (iZotope 64 bit SRC and mbit+ dither as in the RX product). The downsampling results were seldom transparent.
Have you tried the iZotope SRC only, without reducing the word length to 16 bits? I've found iZotope SRC to be transparent below 21 kHz. Dither is another story.
 
Have you tried the iZotope SRC only, without reducing the word length to 16 bits? I've found iZotope SRC to be transparent below 21 kHz. Dither is another story.

I do the conversions in two steps. It is possible to hear degradation after the first step (88/24 to 44/24) and further degradation after the second step (44/24 to 44/16). Various choices (filter slope, linear phase / minimum phase, filter offset, dither type, dither amount, noise shaping amount) can all affect the final sound quality, but generally in unique ways, with some interaction between parameter settings. It's been some time since I last wasted much time exploring this space. 44/16 is dead to me as a quality medium and is relevant only in terms of existing recordings which are available solely in this format.
 
It is possible to hear degradation after the first step (88/24 to 44/24) and further degradation after the second step (44/24 to 44/16).
The interesting thing is that iZotope SRC artifacts are below -150dBFS in the (<21kHz) passband. This should not be audible, unless the monitoring adds audible distortion, or unless you can hear above 21 kHz.
 
The interesting thing is that iZotope SRC artifacts are below -150dBFS in the (<21kHz) passband. This should not be audible, unless the monitoring adds audible distortion, or unless you can hear above 21 kHz.

I can't hear sine waves above about 13 kHz. (Fifty years ago I could hear "21 kc".) When I did the last tests I was using RX2, which wouldn't null below about -70 dBfs because of sub sample delays. When I get a chance I will try null tests and listening tests with RX4 Advanced.

As far as I am concerned, it is an open question whether I was hearing artifacts or not. When I did listening tests, I converted from 88.2/24 to 44/24 and back to 88.2/24 and auditioned the two 88.2/24 files. That way the only effects were created by the two sample rate conversions. However, other people have reported that they could hear differences in two 24 bit files that were identical except that 1 lsb was added to each sample, no clipping involved. This was a minute DC offset that would be eliminated by the time it got very far, certainly through the air. I suspect that there were DAC artifacts that created any audible differences.

By the way, SRC artifacts at -150 dBfs are very different depending on how they are measured. If they are measured in a typical spectrum plot this is only about 18 bits of resolution. If they are measured with peak amplitude then this is an impressively low difference that is unlikely to be audible since it is below the thermal air motion at one's eardrums. If they are measured RMS then it is possible that the error might be audible if there are artifacts that have higher peaks and are averaged out.
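That point is easy to illustrate with made-up numbers: the same hypothetical error signal (a noise floor around -150 dBFS RMS with a few sparse, larger spikes) reads very differently depending on whether you quote RMS or peak.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical error signal: -150 dBFS RMS noise floor plus sparse
# spikes around -110 dBFS (illustration values, not real SRC data)
err = rng.standard_normal(1 << 16) * 10 ** (-150 / 20)
err[::4096] += 10 ** (-110 / 20)

rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))   # the optimistic view
peak_db = 20 * np.log10(np.max(np.abs(err)))         # the pessimistic view
```

The RMS figure lands in the mid -140s dBFS while the peak figure is around -110 dBFS: a gap of more than 30 dB from the very same data, which is exactly why the measurement method has to be stated alongside the number.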
 
I must admit the last few pages of this oftentimes very yawn-inducing thread has been most entertaining. Keep it up. :)

Must agree, very interesting recent posts.
I'm also recently discovering indications that the level of noise stability within DACs is far more important psycho-acoustically than has generally been acknowledged. I have not seen any studies published in this area but I may be wrong.

What I'm experiencing may be related to the “spectral compensation effect” i.e. the ability to listen through a fixed colouration of the sound (from speakers or room effects) through to the changing spectrum of the music. However, if this is no longer a fixed colouration (i.e. modulated noise) we can no longer do this - the modulation becomes part of the changing spectrum (I believe Opus11 had much to say on this - a member who I personally miss from this forum).

This may have relevance to the observations in the above posts on SRC? It also has relevance to the jitter post, where fixed noise is the general outcome of such random jitter, whereas it is generally agreed that correlated jitter at much lower levels is more noticeable.

I see a paper at the upcoming AES may have relevance to this "P8-2 Auditory Compensation for Spectral Coloration" & look forward to reading it. I suspect that it is a furtherance of this 2013 AES paper "Auditory adaptation to loudspeaker and listening room acoustics"

Edit: Also Tony brings up a good point about measurements & their interpretation. I would also like to add that FFTs are often misinterpreted, i.e. it's not a noise floor that we see in an FFT (it's bin energy) - to derive the correct noise floor, a correction factor that depends on the FFT bin width and the window used needs to be applied. So it's really not surprising that noise in audio is perhaps an overlooked area.
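The bin-energy point can be demonstrated in a few lines: for white noise, the apparent per-bin "floor" of an FFT plot drops about 6 dB every time the FFT length is quadrupled, while the properly corrected noise density (using the bin width and the window's equivalent noise bandwidth) stays put. A rough NumPy sketch:

```python
import numpy as np

fs = 48000.0
x = 0.01 * np.random.default_rng(0).standard_normal(1 << 18)  # white noise

def bin_and_density_db(x, nfft):
    """Median per-bin FFT level (what the plot's 'floor' shows) and the
    same data corrected to a 1 Hz noise density via the window's ENBW."""
    w = np.hanning(nfft)
    segs = x[: (len(x) // nfft) * nfft].reshape(-1, nfft) * w
    p = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    p_bin = 2 * p / np.sum(w) ** 2            # per-bin power, tone-calibrated
    enbw_hz = fs * np.sum(w ** 2) / np.sum(w) ** 2   # equivalent noise bandwidth
    return (10 * np.log10(np.median(p_bin)),
            10 * np.log10(np.median(p_bin) / enbw_hz))

b1, d1 = bin_and_density_db(x, 1024)
b2, d2 = bin_and_density_db(x, 4096)
```

The per-bin "floor" changes with FFT length even though the noise itself has not changed; only the density figure is a property of the signal rather than of the analysis settings.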
 
Must agree, very interesting recent posts.
I'm also recently discovering indications that the level of noise stability within DACs is far more important psycho-acoustically than has generally been acknowledged. I have not seen any studies published in this area but I may be wrong.

What I'm experiencing may be related to the “spectral compensation effect” i.e. the ability to listen through a fixed colouration of the sound (from speakers or room effects) through to the changing spectrum of the music. However, if this is no longer a fixed colouration (i.e. modulated noise) we can no longer do this - the modulation becomes part of the changing spectrum (I believe Opus11 had much to say on this - a member who I personally miss from this forum).

This may have relevance to the observations in the above posts on SRC? It also has relevance to the jitter post, where fixed noise is the general outcome of such random jitter, whereas it is generally agreed that correlated jitter at much lower levels is more noticeable.

I see a paper at the upcoming AES may have relevance to this "P8-2 Auditory Compensation for Spectral Coloration" & look forward to reading it.

Edit: Also Tony brings up a good point about measurements & their interpretation. I would also like to add that FFTs are often misinterpreted, i.e. it's not a noise floor that we see in an FFT (it's bin energy) - to derive the correct noise floor, a correction factor that depends on the FFT bin width and the window used needs to be applied. So it's really not surprising that noise in audio is perhaps an overlooked area.

I'm not sure exactly what you mean by "noise stability" so I would appreciate clarification.

I'm 100% with you on the question of noise modulation. I consider uncorrelated noise to be benign, provided it is at a sufficiently low level. However, correlated noise is distortion and the type of correlated noise created by DACs is potentially ugly. If you look at DAC chip data sheets and published DAC measurements it is often possible to see certain types of noise modulation by comparing graphs. There are DAC architecture techniques that convert correlated noise into uncorrelated noise, at the cost of extra hardware and possibly a higher total noise floor. I suspect chip designers often make tradeoffs to make their product "measure" better at the expense of potentially sounding worse. (DAC designers of high end products marketed to subjective listeners don't commit this particular design "sin" since they eschew published measurements.)
 
I'm not sure exactly what you mean by "noise stability" so I would appreciate clarification.
Sorry, I mean reduction in noise modulation & probably also level of noise floor.

I'm 100% with you on the question of noise modulation. I consider uncorrelated noise to be benign, provided it is at a sufficiently low level. However, correlated noise is distortion and the type of correlated noise created by DACs is potentially ugly. If you look at DAC chip data sheets and published DAC measurements it is often possible to see certain types of noise modulation by comparing graphs.
Indeed, the ESS presentation by Mallinson was an interesting demonstration of the noise modulation seemingly inherent in the common delta-sigma DAC architecture - meaning 99% of all current audio DAC chips.
There are DAC architecture techniques that convert correlated noise into uncorrelated noise, at the cost of extra hardware and possibly a higher total noise floor.
Are you talking about de-glitching and/or other techniques?
I suspect chip designers often make tradeoffs to make their product "measure" better at the expense of potentially sounding worse.
Agree 100%
(DAC designers of high end products marketed to subjective listeners don't commit this particular design "sin" since they eschew published measurements.)
Indeed, but of course that is perfect fodder for the measurists' groups, & cries of snake-oil salesmen ensue.
 
Are you talking about de-glitching and/or other techniques?

One technique to decorrelate errors from the signal is to use multiple DAC chips fed with signals that sum to the desired output, but where each DAC gets what is individually nearly a pseudo-random signal. In the absence of any errors there will be no added noise, but the signal input needs to be reduced to allow for headroom; consequently the signal to noise ratio will be degraded. I believe this technique was used with R2R DACs. I first learned of this at a talk by Barry Blesser, back in the 1980's as I recall. There are other techniques used to convert errors into noise, such as the dynamic element matching used in the SABRE chip multi-bit DAC. This algorithm also uses deglitching; I suspect the algorithm has noise/distortion tradeoffs. (See the white paper and the patent.)
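A toy model of that splitting idea (not Blesser's actual scheme, and with a deliberately crude quantizer standing in for a DAC): feed one converter a pseudo-random signal r and the other x - r; the sum still equals x, but the combined quantization error loses its harmonic structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
x = 0.25 * np.sin(2 * np.pi * 997 * np.arange(n) / n)  # coherent test tone

q = lambda v: np.round(v * 8) / 8     # deliberately crude "DAC" quantizer

# One converter: the error is a deterministic function of the signal,
# so its spectrum is a set of discrete harmonic spurs.
e_single = q(x) - x

# Two converters: one gets a pseudo-random signal r, the other x - r.
# The (analog) sum still equals x, but each converter sees a noise-like
# input, so the combined error loses its harmonic structure. The price
# is headroom and roughly double the error power.
r = rng.uniform(-0.5, 0.5, n)
e_split = (q(r) + q(x - r)) - x

p_single = np.abs(np.fft.rfft(e_single)) ** 2
p_split = np.abs(np.fft.rfft(e_split)) ** 2
```

The split version trades correlated distortion for a slightly higher but benign noise floor, which is exactly the tradeoff described above.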

Last time I looked at a SABRE chip spec sheet, I still saw variations in the noise floor as a function of signal. In other words, ESS has reduced the level of noise modulation, but not eliminated it. Furthermore, even if the noise floor remained constant, there could still be signal-dependent artifacts that wouldn't show up on a plot but which could be obtained by doing sophisticated correlation measurements, something the spooks involved with TEMPEST consider when trying to exfiltrate crypto keys.

Rather than using averaging techniques that inevitably obscure subtle differences that are not observable using unsophisticated measurements, a better test would be waveform accuracy. One evaluates a DAC as if it were a component of a modem. One puts bits in, gets an analog signal out, feeds that to an ADC, and compares the difference between input samples and output samples. This may not be practical at the state of the art because ADCs are harder to build than DACs. However, if one is evaluating the digital portions of a DAC, e.g. a sigma-delta modulator, one can do all the processing digitally to any desired level of accuracy. If one does this one may be shocked at the poor performance. For example, one published 5th-order sigma-delta modulator didn't come anywhere close to providing 16-bit accuracy, even when reproducing a constant waveform. (I ran this test for a few seconds' worth of "audio" and took the maximum absolute error as my metric, i.e. the L-infinity norm. It was necessary to chop off the beginning and end of the error stream because it contained artifacts from the filter that I used to band-limit the output signal.)
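As a sketch of that measurement procedure (using a simple first-order error-feedback modulator of my own, not the published 5th-order design): modulate an oversampled tone, reconstruct with a band-limiting FIR, trim the filter edge artifacts, and take the maximum absolute error.

```python
import numpy as np

def sdm1(x):
    """First-order error-feedback sigma-delta modulator, 1-bit output."""
    e, y = 0.0, np.empty(len(x))
    for i, v in enumerate(x):
        u = v + e                      # feed back previous quantization error
        y[i] = 1.0 if u >= 0 else -1.0
        e = u - y[i]                   # error to be shaped out of band
    return y

# Oversampled tone in, 1-bit stream out, band-limit, then take the
# worst-case (L-infinity) error over the trimmed interior.
n = np.arange(1 << 15)
x = 0.5 * np.sin(2 * np.pi * 10 * n / n.size)
bits = sdm1(x)

fc = 0.01                              # reconstruction cutoff, cycles/sample
k = np.arange(-511, 512)
h = 2 * fc * np.sinc(2 * fc * k) * np.kaiser(1023, 12.0)
rec = np.convolve(bits, h, mode="same")
linf = np.abs(rec - x)[2000:-2000].max()  # chop filter edge artifacts
```

Even this stable, correctly functioning modulator leaves a worst-case error orders of magnitude above one 16-bit LSB at this oversampling ratio, which illustrates why the L-infinity view is so much harsher than an averaged SNR figure.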
 
Indeed, but of course that is perfect fodder for the measurists' groups, & cries of snake-oil salesmen ensue.

Only for those who are so inclined and believe measurements, DBT, ABX, etc. are the final arbiters of good sound.
 
Rather than using averaging techniques that inevitably obscure subtle differences that are not observable using unsophisticated measurements, a better test would be waveform accuracy. One evaluates a DAC as if it were a component of a modem. One puts bits in, gets an analog signal out, feeds that to an ADC, and compares the difference between input samples and output samples. This may not be practical at the state of the art because ADCs are harder to build than DACs. However, if one is evaluating the digital portions of a DAC, e.g. a sigma-delta modulator, one can do all the processing digitally to any desired level of accuracy. If one does this one may be shocked at the poor performance. For example, one published 5th-order sigma-delta modulator didn't come anywhere close to providing 16-bit accuracy, even when reproducing a constant waveform. (I ran this test for a few seconds' worth of "audio" and took the maximum absolute error as my metric, i.e. the L-infinity norm. It was necessary to chop off the beginning and end of the error stream because it contained artifacts from the filter that I used to band-limit the output signal.)

Will taking the L-infinity norm fully correct for timing shift between sample points from going DAC to ADC?
 
Will taking the L-infinity norm fully correct for timing shift between sample points from going DAC to ADC?

No, if there are timing errors you will see them in all the norms. For any differences to matter you will have to correct timing down to a tiny fraction of a sample time unless your input signal is very low frequency or DC. This is unlikely to work in the analog case unless the DAC and ADC clocks can be kept synchronized.

If you use an editor like Soundforge you can get the time series of the sample differences by mixing the two files out of phase. At this point you can run the statistics function, which will show the average value (L-1 norm) the RMS value (L-2 norm) and largest absolute value (L-infinity norm). Actually, Soundforge shows two values, the maximum positive value and the minimum negative value rather than a single number for the L-infinity norm.

Keep in mind that the L-infinity norm gives a pessimistic appraisal of differences, whereas the more commonly used RMS norm gives an optimistic appraisal. One can choose one's norms according to one's agenda if one is trying to lie with statistics. :)
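A minimal version of those statistics (function and field names are mine, not Soundforge's):

```python
import numpy as np

def difference_stats(a, b):
    """Statistics of the difference signal from 'mixing out of phase':
    mean absolute value (L-1 flavor), RMS (L-2), the signed extremes
    (as Soundforge reports them), and their magnitude, the L-infinity
    norm. Names here are illustrative, not Soundforge's."""
    d = np.asarray(a) - np.asarray(b)
    return {
        "mean_abs": float(np.mean(np.abs(d))),
        "rms": float(np.sqrt(np.mean(d ** 2))),
        "max_pos": float(d.max()),
        "min_neg": float(d.min()),
        "linf": float(np.max(np.abs(d))),
    }
```

For any difference signal, mean absolute <= RMS <= L-infinity, which is why a single isolated glitch can look harmless on RMS yet dominate the L-infinity figure.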
 
No, if there are timing errors you will see them in all the norms. For any differences to matter you will have to correct timing down to a tiny fraction of a sample time unless your input signal is very low frequency or DC. This is unlikely to work in the analog case unless the DAC and ADC clocks can be kept synchronized.

If you use an editor like Soundforge you can get the time series of the sample differences by mixing the two files out of phase. At this point you can run the statistics function, which will show the average value (L-1 norm) the RMS value (L-2 norm) and largest absolute value (L-infinity norm). Actually, Soundforge shows two values, the maximum positive value and the minimum negative value rather than a single number for the L-infinity norm.

Keep in mind that the L-infinity norm gives a pessimistic appraisal of differences, whereas the more commonly used RMS norm gives an optimistic appraisal. One can choose one's norms according to one's agenda if one is trying to lie with statistics. :)

Well, I was wondering if I was missing something there. Even if you compare bits in to bits out via a loop through analog, with the clocks locked together, you can get enough timing error to start corrupting results. A meter of cable connecting the two causes something over 3 nanoseconds of time difference, which will affect higher frequencies enough to lift them out of the noise floor in the residuals.
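A back-of-envelope check of that claim: the residual left when a tone is subtracted from a delayed copy of itself peaks at 2*sin(pi*f*tau), so at 20 kHz even ~3 ns of skew leaves a residual in the high -60s dB, well above a 16-bit noise floor near -96 dBFS. (The 3.3 ns below is an assumed cable delay for illustration.)

```python
import math

def residual_db(f_hz, delay_s):
    """Peak level, in dB relative to the tone, of the residual left when
    a sine at f_hz is subtracted from a copy delayed by delay_s seconds.
    x(t) - x(t - tau) has peak amplitude 2*sin(pi*f*tau)."""
    return 20 * math.log10(2 * abs(math.sin(math.pi * f_hz * delay_s)))

# ~1 m of cable is a few nanoseconds of propagation delay; at 20 kHz
# that already swamps a 16-bit noise floor (around -96 dBFS)
print(round(residual_db(20_000, 3.3e-9), 1))
```

This is why null tests need sub-sample (here, sub-nanosecond) time alignment before the residual says anything about the converters themselves.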
 
