Introducing Olympus & Olympus I/O - A new perspective on modern music playback


For those who have just started reading up on Olympus, Olympus I/O, and XDMI, please note that all the information in this thread has been summarized in a single PDF document that can be downloaded from the Taiko website.

https://taikoaudio.com/taiko-2020/taiko-audio-downloads

The document is frequently updated.

Scroll down to the 'XDMI, Olympus Music Server, Olympus I/O' section and click 'XDMI, Olympus, Olympus I/O Product Introduction & FAQ' to download the latest version.

Good morning WBF!


We are introducing the culmination of close to four years of research and development. As a bona fide IT/tech nerd with a passion for music, I have always been intrigued by the potential of leveraging the most modern technologies to create a better music playback experience. This, among other things, led to the creation of our popular, perhaps even revolutionary, Extreme music server five years ago, which we have been steadily improving and updating with new technologies throughout its life cycle. Today I feel we can safely claim it is holding its ground against the onslaught of new server releases from other companies, and we are committed to improving it for years to come.

We are introducing a new server model called the Olympus. Hierarchically, it positions itself above the Extreme. It provides quite a different musical experience from the Extreme, or any other server I have heard, for that matter. Conventional audiophile descriptions such as soundstaging, dynamics, and color palette fall short of describing this difference. It sounds neither digital nor analog; I would be inclined to describe it as coming closer to the intended (or unintended) performance of the recording engineer.

Committed to keeping the Extreme as current as possible, we are also introducing a second product called the Olympus I/O. This is an external upgrade to the Extreme containing a significant part of the Olympus technology, allowing it to come close to, though not entirely reach, Olympus performance levels. The Olympus I/O can even be added to the Olympus itself to elevate its performance even further, though the uplift is not as dramatic as when adding it to the Extreme. Consider it the proverbial "cherry on top".
 
Hi @austinpop - those are very valid questions. My primary role in the creation of the podcast is to choose the subject, and curate the relevant data sources to be used by the generative AI. Data sources and links used have been included in the “Sources and Credits” section of the video’s “Description” section. The general intent is to provide some kind of value to the audiophile community.

Once the data sources are provided, the AI entirely generates the content from these data sources, through some kind of proprietary meta-analysis and aggregation, for which the actual process, the script used for speech generation, and the output cannot be natively accessed or modified. I give general direction to the generative AI, e.g., asking for emphasis on certain matters or a choice of tone, which is typically a pretty iterative process. I also perform general quality control of the output.

It does appear the language models use carefully chosen language. If the phrases, discussions or references actually point to a specific individual in the data sources (something I wasn’t aware of), I will of course add this individual to the credits, with proper consent. @ray-dude
 
Hello Hifickips,

I find the process fascinating. I’ve used AI in only minimal ways. The first time I used it (for an audio query of all things!) I found the response to be well below an expert human level. I then tried other model versions and was impressed with the responses to other more technical queries.

You can quibble with anything, but I was quite impressed with the results you obtained with what appears to be significantly less effort than a traditional podcast production would have required.

It was informative, and presented the subject in a way that was easy to understand. And frankly, it was better than a number of other audio podcasts out there.

Thank you for posting it.

What AI software did you use? And how long did it take you to produce the result you posted?

Thanks
 
It is fascinating; what would be more fascinating is if it presented something that we didn't already know...
Hi John T,

When I used the qualifier fascinating, I meant the process of using AI for creating the podcast. Not necessarily the content.

Clearly, owners of the Olympus and participants in this thread are already very familiar with the Olympus and its capabilities. And I agree with you that the podcast does not add significantly to a subject many of us have been following intently for some time.

But our Taiko world is relatively small. And while it may not be the purpose of the video, the podcast can possibly serve to increase awareness among others who know very little about XDMI and the significant advances that the Taiko team has accomplished.
 
I wasn't attempting to sound like a smart ass; I completely agree with you regarding the creation aspect, which is genuinely fascinating. It was helpful. I seriously believe AI will advance to the point of elaborating much more deeply (kind of scary) on what it is fed. That is what I was alluding to...
 
I would be interested in how this YouTube video was created, because it is truly scarily real.
Your local AI expert chimes in here to say this is a Google product called NotebookLM. It's quite impressive, but watch out for gotchas, as it can make stuff up (these are called hallucinations in the world of LLMs, or large language models). You can feed it any document and it creates this chatty duet. If I were still teaching, I would use it to enliven my lecture notes!
 
That's too wild: "hallucinations" in the world of LLMs, meaning making stuff up! WOW! I take it the chatty duet is the standard way it presents itself in a podcast format?
 
Hi godofwealth,

Thank you for the NotebookLM reference.

Very cool program. I did a quick search and found a number of helpful links, including this tutorial, in case anyone is interested.

 
Olympus XDMI and Sennheiser HE-1

As I mentioned earlier, I lent my Olympus XDMI to @onlychild for several days while I took a short trip to the Smoky Mountains (where, by the way, a black bear decided to join us on the cabin porch for dinner one night). You may be wondering why there hasn't been any feedback on the Olympus XDMI yet. Let me explain.

@onlychild has the best headphone system I’ve ever seen or heard - Taiko Extreme with a dedicated router and switch, Sennheiser HE-1, all powered off-grid by a Stromtank S-5000, grounded by a Tripoint grounding system, HRS rack, Shunyata Omega cables, and more. It’s an incredibly resolving system.

As I was driving to Tennessee, I expected a call or text raving about the Olympus. But... nothing. No feedback on the first day, and none on the second day either. Strange! On the third day, I finally received a long message explaining that, while the Olympus was maybe 5-10% better than the Extreme, it wasn’t enough to justify the upgrade cost.

We had a long conversation discussing what he might have been hearing and why. One of the theories was that, since his entire system was already powered by the Stromtank and off-grid, he might not be experiencing the full benefits of the Olympus. But I didn’t buy that. The Stromtank’s batteries don’t sound as good as the Olympus’s. The Olympus BMS and GaN regulation are state-of-the-art, while the Stromtank uses an inverter to step up low-voltage battery output to 120V, only for components to step it down again later. So, technically, the Stromtank + Taiko Extreme combo has several disadvantages compared to the Olympus.
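
As a back-of-the-envelope illustration of that point, here is a toy Python sketch comparing the conversion stages in the two power paths described above. The stage labels are my own shorthand, not Taiko's or Stromtank's terminology:

# Rough sketch of the two power paths; labels are illustrative only.
POWER_PATHS = {
    "Stromtank S-5000 + Extreme": [
        "battery", "inverter up to 120V AC", "component PSU back down to DC",
    ],
    "Olympus internal supply": [
        "battery (BMS)", "GaN DC regulation",
    ],
}

for name, stages in POWER_PATHS.items():
    print(f"{name}: {len(stages)} stages: {' -> '.join(stages)}")

The Stromtank route adds a DC-to-AC-to-DC round trip that the Olympus's direct battery supply avoids.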

During our conversation, @onlychild mentioned something that piqued my interest: he thought the Olympus USB was more dynamic than the Olympus XDMI analog, which he described as softer. Bingo! That was the key. XDMI has extreme dynamics, crazy transients, and everything you’d want from your digital source. The only way to kill that is through additional digital conversion. I was now certain that the Sennheiser HE-1 was applying some kind of digital manipulation. A quick search revealed that the HE-1’s analog inputs are indeed converted to digital to apply DSP. Well, that explained a lot.

So, here’s what’s happening:
  1. Taiko Extreme USB → DSP → Sennheiser HE-1’s built-in DAC → analog;
  2. Taiko Olympus USB → DSP → Sennheiser HE-1’s built-in DAC → analog;
  3. Taiko Olympus XDMI → analog input on the Sennheiser HE-1 → analog converted to digital → DSP → Sennheiser HE-1’s built-in DAC → analog.
In comparison, #2 is clearly better than #1, but not by much. However, #3 sounded softer and less dynamic, which makes sense since the digital conversion inside the HE-1 was negating the benefits of the XDMI.

To clarify what's going on: XDMI is engineered to minimize digital conversions, eliminate USB, and offer extremely low latency. Meanwhile, the DSP inside the HE-1 is doing the opposite - converting the analog back to digital, applying DSP, and then converting it back to analog. So, the final DAC isn’t the XDMI analog - it’s the DAC inside the HE-1. Not only does this add more digital conversions, but it also introduces a significant amount of latency (though this is probably moot since we’re dealing with another DAC anyway).
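
To make the three chains concrete, here is a toy Python sketch that simply counts the D/A and A/D conversion stages in each path. The stage names are hypothetical labels of my own, not anything from Taiko or Sennheiser:

# Toy model of the three playback chains above; stage names are illustrative.
CHAINS = {
    "1. Extreme USB -> HE-1": ["USB", "DSP", "HE-1 DAC"],
    "2. Olympus USB -> HE-1": ["USB", "DSP", "HE-1 DAC"],
    "3. Olympus XDMI analog -> HE-1": ["XDMI DAC", "HE-1 ADC", "DSP", "HE-1 DAC"],
}

CONVERSIONS = {"XDMI DAC", "HE-1 ADC", "HE-1 DAC"}  # D/A or A/D steps

for name, stages in CHAINS.items():
    n_conv = sum(stage in CONVERSIONS for stage in stages)
    print(f"{name}: {len(stages)} stages, {n_conv} conversions")

Path #3 ends up with three conversions instead of one, which is consistent with the softer, less dynamic result described above.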

In short, if you own a Sennheiser HE-1, you’re probably better off avoiding the XDMI analog. Look for the best digital source. Olympus USB will be an upgrade over Taiko Extreme USB, but you’re entering the realm of diminishing returns.

EDIT: For some reason, I cannot locate the source I had that confirmed the HE-1 uses DSP. I plan to reach out to Sennheiser to confirm directly with the manufacturer when I have a chance. Until then, I am not sure everything above is entirely correct.
If you have more info on the HE-1, please let me know...

The text below was one of the references during my research, but I just can't find it anymore:
"The on-board DAC handles conversion of digital inputs and processes both analog and digital signals at 32-bit/384 kHz using Sennheiser’s internal DSP, optimizing the sound across all input sources before it reaches the amplification stage."
The HE-1 does not have digital processing. It is a system without DSP alteration.
 
Large language models are trained on a huge corpus of text (and other data such as images or video), and at this point the largest ones take hundreds of millions of dollars to train on massive AI supercomputers. Almost the entire corpus of textual data generated by humanity since the invention of writing 5,000 years ago in Sumer (ancient Iraq) has been gobbled up by these models: hundreds of trillions of tokens.

What they do is quite simple: given a context (say, a passage of text or a conversation you're having with a friend), they learn to predict the next token. The largest ones can deal with contexts of hundreds of megabytes, so you can feed one an entire novel and have it write a new chapter. Perhaps a new play by Shakespeare, or a new piece of music like a symphony that Beethoven didn't write.
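
To make "predict the next token" concrete, here is a minimal Python sketch: a toy bigram counter that picks the most likely next word from a tiny corpus. Real LLMs learn the same kind of conditional distribution, just with deep neural networks trained on vastly more data:

from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then pick the most frequent continuation for a given context.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

context = "the"
prediction = follows[context].most_common(1)[0][0]
print(f"after '{context}', the most likely next token is '{prediction}'")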

It all sounds idyllic and scary, but these models are prone to making errors that are only caught by experts. What they generate is so convincing that you miss the errors. There is an enormous amount of research going on in this area, as the stakes could not be higher. Google announced last week that Search will completely change in 2025, from a list of served-up links to AI output generated by an LLM, their Gemini model. Meta has its Llama model, Musk has his Grok model, Anthropic has its Claude model, etc.

Roughly half a trillion dollars is being spent each year in this high-stakes battle to see who will control the next generation of AI-generated content. The power required to train these models is so high that most large companies are bankrolling their own nuclear power plants. It's the AI version of Game of Thrones (AI-GOT).
 
While this is all very interesting perhaps you all would be so kind as to move the AI discussion to its own thread for those interested in it.

Steve Z
 
Agreed! Here's my explanation for why some Olympus owners who have used the analog DAC output prefer it to an external DAC. Using the internal DAC means it runs on the same clock as the rest of the media server. Using an external DAC requires clock recovery, which is known to introduce subtle timing errors. While clock recovery is clearly solvable with high precision, perhaps there's room for improvement. I assume the I/O module has clocking signals that let it synchronize with the XDMI board on the Horizon. Any comments?
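
As a toy illustration of the shared-clock argument (an assumption-laden sketch, not a model of any actual Taiko or DAC hardware), here is a small Python simulation that adds a random recovery error to each sample edge of a recovered clock and reports its worst-case deviation from an ideal shared clock:

import random

# Toy jitter model: an ideal shared clock ticks exactly once per sample
# period, while a recovered clock adds a small Gaussian timing error to
# every edge. The jitter magnitude below is an arbitrary assumption.
SAMPLE_PERIOD_NS = 1e9 / 44100      # 44.1 kHz sample period, in nanoseconds
RECOVERY_JITTER_RMS_NS = 0.05       # assumed RMS error of clock recovery

def edge_times(n, jitter_ns):
    return [i * SAMPLE_PERIOD_NS + random.gauss(0, jitter_ns)
            for i in range(n)]

shared = edge_times(1000, 0.0)                        # DAC on the server's clock
recovered = edge_times(1000, RECOVERY_JITTER_RMS_NS)  # externally recovered clock

worst = max(abs(a - b) for a, b in zip(shared, recovered))
print(f"worst-case edge deviation over 1000 samples: {worst:.3f} ns")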
 
Very interesting. Can you share your source? Send me a DM if you prefer. I am very interested in this topic.

I haven't been able to confirm or deny this so far. ChatGPT was able to find an article explaining the DSP processing in the HE-1, but that entire section was then removed (and the rest was preserved). The Wayback Machine did not have a snapshot of it either. From a distance, it looked as if Sennheiser did not want to disclose some details. But I might be wrong, of course; that's just how I interpreted it.
Questions to Sennheiser remain unanswered. Even a friend who recently visited the factory where the HE-1 is produced and spent several days there could not get anyone there to say YES or NO to DSP.

But the way the HE-1 sounds with the Olympus is exactly how additional digital processing sounds. In other words, most of the good work XDMI is doing seems to be reverted somehow.
 
Hi nenon,

You had written back in September about comparing the Olympus XDMI analog with an MSB Select 2 DAC (without the XDMI MSB ProISL daughter card). Has that comparison been conducted yet with the XDMI MSB ProISL daughter card?

Thanks.
 

No, I don’t have an XDMI MSB ProISL daughter card yet (though I’ll get one with my second demo Olympus). Even when I do, arranging another long out-of-state road trip will require advance planning, so it might not be a top priority.

With the release of the MSB Cascade DAC, I'm currently more interested in hearing the Cascade with and without the XDMI MSB ProISL, and then the XDMI analog.

Generally speaking, I'd like to personally experience as many of these comparisons as possible; it's both fascinating and educational. It would be very cool to do the same with the dCS Varese, Wadax, totalDAC, and other top-shelf DACs out there. If any readers here can help facilitate that, please feel free to reach out.
 
Come visit to hear the Lampi digital. I'm still hoping Emile will be able to find time after he has caught up with all orders, as that is his top priority right now.
 

Thank you for adding the attribution.

Perhaps I am more sensitive as a content creator (reviewer) myself. We are all going to have to decide how to publish content in a way that leverages the benefits of AI without cannibalizing the creator. After all, without “our sources” and “the research,” the AI has no content to use.

Anyway, this is OT for the Olympus forum, so I’ll leave it there.
 

Thank you for the invite. I am indeed planning to join when Emile visits.

Olympus XDMI to Horizon 360 vs. Olympus USB to Horizon 360 is of course interesting, but I have received plenty of feedback on that.

XDMI analog vs. XDMI to Horizon 360 is of course the test I am interested in, but that would be difficult to do without preparation (for example, a second I/O with XDMI analog, all properly burned in, for relatively quick A/B switching). If we can't switch quickly enough between the two sources and have to do all the work of swapping out daughter cards, it will be quite challenging for me to pin down the differences, especially in a system I am not familiar with. I am sure both will be excellent, with two different flavors, and in the end it will come down to personal preference.
 
