Is the Future of Music a Chip in Your Brain? And how will Audiophilia be affected?

caesar

Interesting article from the WSJ by Stephen Witt, author of the award-winning book How Music Got Free.
(Great book, for the readers on this site: http://www.amazon.com/How-Music-Got-Free-Industry/dp/0525426612)

THE YEAR IS 2040, and as you wait for a drone to deliver your pizza, you decide to throw on some tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked via spoken-word commands. In response to a simple “play,” an algorithmic DJ opens a blended set of songs, incorporating information about your location, your recent activities and your historical preferences—complemented by biofeedback from your implanted SmartChip. A calming set of lo-fi indie hits streams forth, while the algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you haven’t eaten for six hours.

The rise of such dynamically generated music is the story of the age. The album, that relic of the 20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance. Your life is scored like a movie, with swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar for the erotic. The DJ’s ability to read your emotional state approaches clairvoyance. But the developers discourage the name “artificial intelligence” to describe such technology. They prefer the term “mood-affiliated procedural remixing.”

Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding you. With its assistance—and the collaboration of your DJ—you’ve come up with a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve lost six pounds. Although you sometimes worry that the machines are running your life, it’s not exactly a dystopian experience—the other day, after a fast-paced dubstep remix spurred you to a personal best on your daily run through the park, you burst into tears of joy.

Cultural production was long thought to be an impregnable stronghold of human intelligence, the one thing the machines could never do better than humans. But a few maverick researchers persisted, and—aided by startling, asymptotic advances in other areas of machine learning—suddenly, one day, they could. To be a musician now is to be an arranger. To be a songwriter is to code. Atlanta, the birthplace of “trap” music, is now a locus of brogrammer culture. Nashville is a leading technology incubator. The Capitol Records tower was converted to condos after the label uploaded its executive suite to the cloud.

Acoustical engineering has made progress too—it now permits the digital resurrection of long-dead voices from the past. Frank Sinatra has had several recent hits, and Whitney Houston is topping the charts with new material. (Her “lyrics” are aggregated from the social-media feeds of teenage girls, and include the world’s first “vocalized emoji.”)

Limited versions of this technology have been made available for home production. Your own contribution was a novelty remix of “...Baby One More Time,” as sung by Etta James. It was a minor sensation that garnered more than 1,000 likes on BrainShare. The liner notes for this kitschy collaboration were drafted automatically, and listed 247 separate individuals, including prioritized rights holders who were awarded the royalties from advertising sold against the song. You got paid nothing, and were credited last, as “curator.”

The recording industry, as it anachronistically continues to call itself, was nearly bankrupted by digital piracy at the beginning of this millennium. That prompted a shift in thinking. Noting the growth of the adjacent videogame industry, forward-thinking executives began to adopt its business model, moving away from unit sales and toward the drip-feed revenue model of continuous updates organized around verified purchases. A resurgence of profitability followed, although a significant portion of the spoils went to the software developers, who have begun to exhibit their own offbeat version of classic music-business decadence. (A recent profile of the 23-year-old Harvard graduate behind a key hook-selection algorithm revealed the little punk just spent $100 million to buy 11 acres of undeveloped land in the Pacific Palisades, on which he planned to site his 250-square-foot portable micro-home.)

There’s pushback, of course. A viral meme of “The Terminator” in a DJ booth briefly trended on social media before being quarantined to 4chan by the Centers for Disease Control and Prevention. A collective of music critics signed an open letter to the industry accusing it of putting commerce ahead of art. (Many later found work as advertising copywriters.) Most surprising of all, a burgeoning community of upscale youngsters are returning to older modes of production. These “new musicians” insist on playing their actual instruments into studio microphones, then mastering “finished” versions of their individual songs. The throwback tunes are then distributed, in charming antiquarian fashion, through a reconstructed version of the Napster file-sharing service, which one accesses through—get this—a computer terminal.

The doorbell announces the arrival of your food. Literally: “Your dinner’s here,” it says. About time; it’s been seven minutes since you ordered. As you begin to eat, the DJ lowers the volume and recalibrates to an ambient register. The food, like the music, is a little bland—your orders are routed through your refrigerator, which is monitoring your sodium intake—but the meal is adequately filling. Once finished, you decide it’s time to go off-grid.

You utter the command “manual,” and your digital servants come to a stop. The effect is not entirely unlike an electricity blackout, and you panic, for a moment, at the prospect of making an unassisted choice. The first thing that comes to mind is an oldie: “Anaconda,” by Nicki Minaj. (The rapper’s oeuvre is experiencing an ironic resurgence in popularity following her election to the presidency.) As the song begins to play, you permit yourself a nostalgic indulgence.

The 2010s. With some embarrassment, you recall the regrettable years you spent pawing at your cellphone—back then, people still conceived of the Internet as somehow separate from “real life.” The roots of the transformation in music can be traced to that decade, although the technology was clumsy in its infancy. Seeking to differentiate their products in the streaming wars, Google and Apple (and Facebook, after it bought Spotify) spent hundreds of millions of dollars acquiring and developing rudimentary song-selection technology that was, for the longest time, a colossal bust. You never got the song you wanted, voice commands barely worked, and the term “DJ” referred to some French guy in a mask who wasn’t even monitoring your serotonin levels.

But then the convergence happened, and, as the old album title had it, “Nothing Was the Same.” Life without procedural remixing is tedious—two minutes into “Anaconda” and you’re already bored of the song. Your SmartChip registers the signature biochemicals of disappointment, but you refuse to revert to automatic mode. Listening to static music made purely by humans is an inferior experience, sure, but you feel it’s important to unplug from the AI every few days. You’re in charge here, right?
 
The return of cyberpunk sci-fi. I like it :)
 
:D

We audiophiles will find a way to declare one chip superior to the other because it uses cryo-treated platinum rather than simple gold.. You will also see special tin foil hats for protection against EMI and RFI, and there will be a slew of companies making better "tendrils"-type cables for attachment to the cortex.. There will be discussion of double-blind experiments from some brave souls with chips of unknown origin and brand placed in their heads ... And of course, based on the grade of silicon, there will be a larger soundstage ...
 
Chip rolling!
 
At the very least, pictures on Google of wind-up gramophones, factories making valves, and gigantic horns sprouting out of the roofs of houses from the 1930s will fulfil our nostalgic needs.
 

;)

And it will never sound as musical as vinyl.

Tim
 
Why one chip when you can use six for three-way active?
 
Someone asked me years ago on a forum about the future of movies 100 years from now. I gave them the same answer as this article:

"Acoustical engineering has made progress too—it now permits the digital resurrection of long-dead voices from the past. Frank Sinatra has had several recent hits, and Whitney Houston is topping the charts with new material. (Her “lyrics” are aggregated from the social-media feeds of teenage girls, and include the world’s first “vocalized emoji.”)"

He was saddened by my answer :). Our ability to recreate virtual images of talent and voices will be perfected by then, if not much earlier. We can then create art with full realism, as opposed to what Pixar does with cartoon characters. If anyone wants to act for real, they still can, much like we can still produce black-and-white movies. But I think that is the future: creative talent will be freed from having to extract a certain performance out of the talent. They can wish it and have it happen.

As to the chip, it won't go in for this purpose. It may, however, go in to treat all manner of mental disorders. I am unclear whether, certainly by 2040, we will have uncovered enough about the brain's operation to give it a true sensation of music.
 
I don't know about a chip, but I tried the new Samsung 6 phone plus VR glasses this weekend, and that thing is really cool. I experienced the roller coaster and a circus act. Wow, really cool: while you sit on a swivel chair you can swing around and have the feeling you are fully immersed in virtual reality. That really is the future, IMO. Imagine a VR concert-hall experience that could be integrated with a high-end system. The graphics could have been a bit better, but that'll be solved in the future as well, I think.

The circus act was based on a real performance; they seem to be able to make a VR recording of a real event.
 

I'm saddened by that answer. Movies will be less without great actors.

Tim
 

Are we there yet?

On the 9th of March, 2010, a sold-out concert was held in Japan. Instead of a real singer, the lead performer was a computer-generated hologram, and her voice was synthesized using a program called Vocaloid. Japan continues to move forward into the future, whether or not the rest of the world is on board.

 
That's pretty neat, Ray. The movements are remarkably real. It will take a few years, but we will get to photo-realistic versions of the same.

We can see the positives this brings: being able to have a character dance with perfect movements, as created by the talent behind the scenes, no longer limited by what someone can learn or do on stage.
 
What!!?! No more shark fin soup....!!!?!
 

I suppose there is a limit to the fascination. A contortionist will eventually be more impressive than a real T-1000, that is, if the T-1000 becomes commonplace and is not out to kill you!
 
The Terminator scenario is here and now. I go to YouTube and search for something. The search algorithms at Google decide what choices are offered to me, and which video plays after the one I am watching.

Likewise, when I search for some text, it is a computer that directs me to a website, and another computer interfaces with me to buy something.

We are essentially steered by Google's computers for a lot of what we do. So "Skynet" is here, but it's called Google :).
 
"Genysis is Skynet" ........ but the movie flopped. Yeah, looked like a big swipe at either Google and Apple depending on what side of the fence one's on.
 
The opportunity to spam is stunning.... subliminal messages direct to the brain and a bit of hormone tweaking .. and all of a sardine you have a craving for a 1/4 pounder.
 
