Can You Believe This - The Government Wants Us to Go EV, but in So Doing They Will Impose a Gas Surcharge

Dave, I don't have the answer to predict the outcomes, but that's the whole issue with our tradition of acting without precaution. Our history is littered with evidence that not everything we can do is something we should do. The solutions to our future aren't just technological.

There is no evidence that AI will have the capacity for either wisdom or compassion that could enable it to evolve into something that is deeply aware and non-destructive. If we can't make ourselves this, what hope do we have of making something else become this? I believe the test is whether we can first achieve wisdom and compassion ourselves and define that as part of a model of consciousness.

I believe we have the means to solve all we need to evolve as a species into our potential, and I believe even to survive the current challenges. Making beyond-risky decisions chasing an easy fix isn't something we should do if we don't have reasonable control over the outcomes. This might just be the test of our wisdom.
 
Dave, I don't have the answer to predict the outcomes, but that's the whole issue with our tradition of acting without precaution. Our history is littered with evidence that not everything we can do is something we should do. The solutions to our future aren't just technological.

There is no evidence that AI will have the capacity for either wisdom or compassion that could enable it to evolve into something that is deeply aware and non-destructive. If we can't make ourselves this, what hope do we have of making something else become this? I believe the test is whether we can first achieve wisdom and compassion ourselves and define that as part of a model of consciousness.

I believe we have the means to solve all we need to evolve as a species into our potential, and I believe even to survive the current challenges. Making beyond-risky decisions chasing an easy fix isn't something we should do if we don't have reasonable control over the outcomes. This might just be the test of our wisdom.


All we can do is have good intent; we never have control of the outcome. If you want to wait for society to become enlightened before allowing AIs to exist, the time may never come.

Also, IDK about AI being an "easy fix"; that seems like a strange view. I also never said solutions have to be completely technological, so I'm not sure why you assume that. I have clearly said in past posts that the solution lies in changing how we live on a societal level. Technology is only part of that.

IMO, handicapping our ability to develop technology because of fear is simply wrong. Limitations based on logic and wisdom, sure... but a general fear of AI is not enough.
 
Don't read it. However, it is getting interesting. Personally, I don't think you can control AI. Once it starts thinking on its own, it is on its own.
 
I'm not an AI expert, but I can still say this has gone straight to sci-fi fantasy land.
 
I'm not an AI expert, but I can still say this has gone straight to sci-fi fantasy land.

Not really. This is just basic computer science. Knowledge growth is exponential. We have turned the curve and are now on the upslope.
 
All we can do is have good intent; we never have control of the outcome. If you want to wait for society to become enlightened before allowing AIs to exist, the time may never come.

Also, IDK about AI being an "easy fix"; that seems like a strange view. I also never said solutions have to be completely technological, so I'm not sure why you assume that. I have clearly said in past posts that the solution lies in changing how we live on a societal level. Technology is only part of that.

IMO, handicapping our ability to develop technology because of fear is simply wrong. Limitations based on logic and wisdom, sure... but a general fear of AI is not enough.
We often see very similarly here on much to do with the need for change, but I also believe we are at a crossroads in civilisation, and the need for a more transformational inner social change may even be the core to finding lasting, longer-term solutions. I'm not suggesting that anything is impossible, and we have to evaluate everything, but I do believe the precautionary principle is one of the essential keys to ecologically sustainable development.
 
This thread is really blooming and has spurred great interest.
Without calling people idiots or degrading the thread, let's please keep it interesting without any name-calling.

TBH, everything I have read seems to suggest that by 2050 "bots" (AI) will outnumber humans, and I do agree that once AI thinks on its own, that is when mankind will hopefully benefit.

I'm hoping we can get this thread back on topic.

As I have said, I was surprised that in the past four days in NJ and NYC I saw but a handful of Teslas.
 
Don't read it. However, it is getting interesting. Personally, I don't think you can control AI. Once it starts thinking on its own, it is on its own.

There is no reason to believe that AI will ever think on its own, let alone gain consciousness. AI is just computer algorithms, which have no insight. Yet thinking and insight are intrinsically linked. A computer simply doesn't know and understand what it's doing; the understanding of a computer's computations is up to its human users.

And the other way around, claiming that our minds are nothing more than complex computers is silly. Again, a computer has no insight. Yes, it can make 'decisions', but those are not based on any insight, but on algorithms. Sure, a computer can even learn to play a game and find sophisticated solutions, but so far all that computers have done in that respect is arrive at such solutions not by insight, but by stumbling upon them in the course of countless (millions of) trials -- exactly as you would expect from algorithms that do not think for themselves.
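
For readers curious what "learning by countless trials" looks like in practice, here is a minimal, hypothetical sketch (my own illustration, not anyone's post or a real game engine): an epsilon-greedy bandit in Python that finds the best of three slot machines purely by pulling them many times, with no insight into why one pays out more than the others.

```python
import random

# Hidden payout probabilities; the "learner" never sees these directly.
PAYOUT_PROBS = [0.3, 0.5, 0.8]

def pull(arm):
    """Simulate one trial: reward of 1 with the arm's hidden probability, else 0."""
    return 1 if random.random() < PAYOUT_PROBS[arm] else 0

def learn(trials=100_000, epsilon=0.1):
    """Estimate each arm's value purely by repeated trial and error."""
    counts = [0, 0, 0]
    rewards = [0, 0, 0]
    for _ in range(trials):
        if random.random() < epsilon:
            arm = random.randrange(3)                       # explore: a blind trial
        else:
            estimates = [rewards[i] / counts[i] if counts[i] else 0.0
                         for i in range(3)]
            arm = estimates.index(max(estimates))           # exploit the best estimate so far
        rewards[arm] += pull(arm)
        counts[arm] += 1
    return [rewards[i] / counts[i] if counts[i] else 0.0 for i in range(3)]

if __name__ == "__main__":
    # After enough trials the estimates approach the hidden probabilities,
    # even though the program never "understands" what a slot machine is.
    print(learn())
```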
 
Or perhaps to put it differently, a computer is able to mask its stupidity by its incredible speed. Prime example: chess computers.
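
To make the "speed, not insight" point concrete, here is a minimal, hypothetical sketch of brute-force game search, the basic idea behind early chess engines, using the trivial game of Nim instead of chess (real chess code would run to thousands of lines). The program has no understanding of the game; it simply tries every continuation.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """One-heap Nim: take 1-3 stones, taking the last stone wins.
    Returns (move, True) if the player to move can force a win, else (1, False)."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True                     # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True                     # leaves the opponent in a losing position
    return 1, False                               # every continuation loses

if __name__ == "__main__":
    for n in range(1, 11):
        move, wins = best_move(n)
        print(f"{n} stones: take {move} ({'winning' if wins else 'losing'} position)")
```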
 
Sometimes the cup is half full and sometimes it is only half empty. Take your pick, but there are arguments to be made on both sides.

It just seems to me that what we read as science fiction in the 1970s is rapidly becoming science fact in the 21st century.

To say there is no reason to believe that AI will think on its own is what I would have thought in the 1970s, but given where mankind is now... not at all.

I remember seeing 2001 for the first time and being amazed. Now, however, there is no reason to think that it won't happen.

Our favorite roving reporter "Bob" a few days ago posted a YouTube link where it is predicted that by 2050 mankind and bots will get it on sexually.

I tend to be on the side that feels this is not only going to happen, but will happen sooner rather than later.

So yes, folsom, for now it is science fiction, but that doesn't mean it won't become science fact. But heck, I have no crystal ball, and my opinion is no more predictive than any other.
 
Well, electric cars are the future today; they are like computers, run by computers, upgradeable, ...Artificial Intelligence.

I like this romantic movie starring Joaquin Phoenix ...

Highly recommended for those who have not seen it.
Fifty years into the future, we (the majority of us, by 2069) are going to miss a hell of a ride ...

* Steve made a reference to this: https://www.whatsbestforum.com/threads/mind-twister.29104/#post-600866
 
Or perhaps to put it differently, a computer is able to mask its stupidity by its incredible speed. Prime example: chess computers.
I think the speed at which we are expected to respond to things today sets us up for failure at times. I have empathy for that poor chess computer.

Also, if we imbued them with AI, there's perhaps the challenge of being too stuck in the mind, like Marvin the depressed android from The Hitchhiker's Guide to the Galaxy...
 
Sometimes the cup is half full and sometimes it is only half empty. Take your pick, but there are arguments to be made on both sides.

It just seems to me that what we read as science fiction in the 1970s is rapidly becoming science fact in the 21st century.

To say there is no reason to believe that AI will think on its own is what I would have thought in the 1970s, but given where mankind is now... not at all.

I remember seeing 2001 for the first time and being amazed. Now, however, there is no reason to think that it won't happen.

Our favorite roving reporter "Bob" a few days ago posted a YouTube link where it is predicted that by 2050 mankind and bots will get it on sexually.

I tend to be on the side that feels this is not only going to happen, but will happen sooner rather than later.

So yes, folsom, for now it is science fiction, but that doesn't mean it won't become science fact. But heck, I have no crystal ball, and my opinion is no more predictive than any other.

You may think there is fundamental progress, but I think the term "intelligence" in "artificial intelligence" is just an illusion. I don't see *in principle* how mere algorithms can ever rise to the level of true insight, which is essential to intelligence and actual thinking.

Yes, progress in science and knowledge has often been fast, but there are some things that are *in principle* not possible, regardless of the progress of science. For example, it is in principle impossible to create a perpetuum mobile, a perpetual motion machine, and it is in principle impossible to directly observe another universe, which would be required to prove any so-called multiverse. The former would require breaking the laws of thermodynamics (no laws of nature have ever been broken by science); the latter would require that we somehow find a way to have information travel to us much faster than the speed of light, which is physically impossible (of course you can start talking about 'wormholes', but that is science fiction that I don't take seriously).
 
There is no reason to believe that AI will ever think on its own, let alone gain consciousness. AI is just computer algorithms, which have no insight. Yet thinking and insight are intrinsically linked. A computer simply doesn't know and understand what it's doing; the understanding of a computer's computations is up to its human users.

And the other way around, claiming that our minds are nothing more than complex computers is silly. Again, a computer has no insight. Yes, it can make 'decisions', but those are not based on any insight, but on algorithms. Sure, a computer can even learn to play a game and find sophisticated solutions, but so far all that computers have done in that respect is arrive at such solutions not by insight, but by stumbling upon them in the course of countless (millions of) trials -- exactly as you would expect from algorithms that do not think for themselves.

Sure there is. If you are a computer scientist, you know that silicon lifeforms or similar are our future. If they know how to recharge themselves and they know how to replicate themselves and they care about self-preservation, that is all that is needed for a new lifeform.
 
Deleted by admin

Keep politics out of it, Marc. You constantly find a way to do that.
 
If they know how to recharge themselves and they know how to replicate themselves and they care about self-preservation, that is all that is needed for a new lifeform.

"Replicate themselves", "care about self-preservation" is exactly the kind of dopy mumbo jumbo that is so popular to talk about these days, but is just crazed unhinged science fiction.

For those people who are afraid that AI will one day take over the world and destroy us, I have a simple three-word answer: Pull The Plug.
 
Google's DeepMind AI Beats Humans Again---This Time By Deciphering Ancient Greek Text

https://www.ancient-origins.net/news-history-archaeology/greek-texts-0012762

[Image: greek-text.jpg]


I'm just doing my job...audiophile "roving reporter" :)
 
AI will eventually confirm Buddhist philosophy as correct, IMO... QM certainly has a lot of parallels with the Buddhist view of impermanence and how phenomena arise.

This is part of the reason I'm not so concerned about AI killing us all. I think Buddhist philosophy seems obvious and logical; if the machines concur, then their main goal will be the same as the Buddha's goal: the enlightenment of all sentient beings.

I also think AI will quickly pass the threshold where it can be considered sentient; we have to remember its ability to process information will grow exponentially, and I'm not sure we can imagine exactly how that will work out. The human brain certainly has its limitations, yet we think we're sentient. We just don't act like it! ;)

I think we need AI to solve complex problems and eventually we'll be able to control climate and weather to a degree.
 
