Can you believe this? The government wants us to go EV, but in so doing they will impose a gas surcharge.

Google's DeepMind AI Beats Humans Again, This Time by Deciphering Ancient Greek Text

https://www.ancient-origins.net/news-history-archaeology/greek-texts-0012762

greek-text.jpg


I'm just doing my job...audiophile "roving reporter" :)

Great stuff, thanks.

However, the article gives away the essential point: the computer only gives suggestions -- of course, it's still just an unthinking algorithm. The human historian still needs to select the right answer, based on his experience and knowledge -- of course, he is the one who actually has insight, who thinks.

That the computer is better at the task in some ways is due to its incredible speed and storage capacity -- the latter of which, unlike human memory, makes no mistakes. Yet as I said before, the computer is able to hide its inherent stupidity behind these capabilities. They don't magically make it "intelligent".

The article again makes the fundamental mistake of claiming that the computer "understands" better. This is gobbledygook. A computer doesn't understand anything, no matter how fast it is and how many facts it can store.
 
Sure there is. If you are a computer scientist, you know that silicon lifeforms or something similar are our future. If they know how to recharge themselves, know how to replicate themselves, and care about self-preservation, that is all that is needed for a new lifeform.

A computer scientist (a bioinformatician) at the company I work for agrees with me that computers are dumb as a brick. He always complains that it takes so much time to program into a computer something that is so easy for humans to see in an instant -- by raw insight. You can only laboriously mimic human insight with computer algorithms, but the mimicry just isn't the real thing.
 


As I said yesterday, "is the glass half full or is it half empty?" Sure, it is science fiction; however, I lean towards DaveC's comment above.

2050 is when I read
 
I read this a few months ago. A fascinating read by Dan Brown, one of my favorite authors. This is a story about "where we came from and where we are heading." I couldn't put it down.

 
Thanks for the segue on Origin, Steve. It gets back to the notion of the greater timeframes. My concern in all this isn’t just about the development of AI as one of the potential steps to increase our capability to run the planet. Rather, the bigger question is: what is our philosophy or concept of how the planet should actually run?

If we consider that the development from hominins through to Homo sapiens spans a timeline of some seven million years, then what is, for us, a reasonable timeline for a framework for modelling any significantly different future development?

The kinds of longer-term planning we put in place now typically cover a timeframe of some 20 to 30 years. I’d suggest that if we want a realistically survivable plan at this point, we need to start planning and projecting over much, much longer periods.

We are at the Earth’s limits. We are clearly at a stepping-off point. Big-picture questions abound. Do we move towards creating technology to maintain a liveable climate, and accept a future that may always rely on an increasingly synthetic, component-based and manufactured approach to managing nature? This may be exactly what we need to do at this point. But is it what we should aim for in the longer term? Or, over a greater period, do we essentially change the nature of human activity and get back to a point (perhaps over a couple of hundred years) where the greenhouse inventory is down to sustainable levels, where human activity fits within the natural patterns of the Earth system, and where control of the operation of the climate is given back to the much slower natural processes and systems of the Earth? Essentially, through time, giving ourselves back over to nature.

So our current understanding of the more monumental choices in our essential approach to terrestrial management needs to be reviewed, along with the questions of what significantly new technology we develop and what older technologies we let go of. Possibly things like widespread private transport just won’t survive even the next 30 to 50 years.

Given that we have maxed out what the Earth can bear, I believe we should always be considering well beyond just a generation... even 50 to 100 years is really next to nothing in the timeframe of the future development of the planet, and even of our human species.

So do we develop AI to eventually help us make those kinds of projections? Perhaps an AI with 100 or 200 years of (self-)development may be well beyond our imagining, and be exactly that kind of salvation... Or perhaps the outcome won’t even be about our line. If hominins die out, do we want to leave behind any remnant technology that could, over the greater time periods, develop into some kind of sentience, in the form of a self-sustaining and ongoing AI, if we now set it in motion to develop and ascend in consciousness? Should we set limits before we do things? I just believe these are the questions we should be asking ourselves before we implement things.

Perhaps we should consider being only an interim Earth manager, and ultimately aim to give the Earth a chance to right itself over the millennia and become what is its true nature: a natural system. We can’t know exactly where we will be and what our technology will become in a hundred years, but perhaps we can at least say what our intentions are as a species.

I’m not suggesting we can have the answers, or that we should just arbitrarily fear things and rule out potential solutions. But I do believe some increased precaution at this point means at least having open discussion up through all levels of society and across all governments: discussing longer-term planning and risk assessment, how to navigate forwards, and whether we continue increasing our human-centred control of the planet or aim to apply more and more natural processes to design and problem-solving, perhaps becoming more true to our true nature. All our origins are in nature, and perhaps that is where we need to return to find our place in the future.
 

The problem is human nature. The only thing that seems to overcome greed, short-sightedness and self-interest is organized government, and particularly uncorrupt government by intelligent scholars based on Constitutions that benefit all of the people. Only this can counter the propensity for self-destruction IMO. There is less and less of this evident on the planet. A worrisome trend.
 
The problem is human nature. The only thing that seems to overcome greed, short-sightedness and self-interest is organized government, and particularly uncorrupt government by intelligent scholars based on Constitutions that benefit all of the people. Only this can counter the propensity for self-destruction IMO. There is less and less of this evident on the planet. A worrisome trend.
I suppose the point becomes that Earth timeframes are massive and feedback to change is non-linear, so if we need to manage the Earth, what level of not just capacity but also maturity does that require of us? In an unspoilt and infinite garden we could be children and play and experiment without end, but now that we have hit the Earth’s limits we have to start a new phase of hominin activity, one that is more wise and adult. As our species moves into a later senescence and retirement, we perhaps also need to act as mentors for the Earth’s future and not just our own. So yes, we do need to grow up rather quickly.

Dave’s point about us needing to become more Buddhist in our approach is very sage. We do need better, more cohesive philosophical and moral frameworks for being... I’d extend that notion: there is much to learn from all of philosophy and all of history. We need to get a concept together of who we need to become in order to survive.
 

If only 70% of humans were this smart. Maybe it will take mass die-offs of the less fortunate and less intelligent to get there. I feel bad for the less fortunate who will be caught up in this folly.
 
If only 70% of humans were this smart. Maybe it will take mass die-offs of the less fortunate and less intelligent to get there. I feel bad for the less fortunate who will be caught up in this folly.
Perhaps individual intelligence won’t of itself be the governing factor in any survival. Maybe group consciousness will be the decider. At this point all are vulnerable, and the most vulnerable could just as easily have a kind of intelligence but lack the other qualities necessary for survival. Emotional maturity is also essential for survival.

Decreasing what is above to benefit what is below is a Taoist approach. Being in the centre and in our nature is the safest place for all.
 
To be honest, I am amazed at how much progress we have made in any area when you consider how apparently ignorant the majority of the population is.
 
To be honest, I am amazed at how much progress we have made in any area when you consider how apparently ignorant the majority of the population is.
Hi Bud,
I wasn’t suggesting that people are at all ignorant, but rather that it’s more about people seeing the need to let go of the differences, self-interest, bickering and in-fighting, and to get on with working together across boundaries so that we can move ahead and collaboratively plan for the global outcome we need.

I’m not sure that as a species we need any more intelligence at all; rather, we need to develop the emotional maturity as a civilisation to see that, for the sake of survival, we need to collaborate on finding solutions that give us a better chance of going forwards as a civilisation. There’s no shortage of intelligence or educated people on the planet, but we now need to be more wise than ever, and not just very clever.

I do believe that as a species we are more stressed, and live more in our heads, than at any time before, but our instinct for survival is not yet kicking in strongly enough to challenge us to act cooperatively and create a viable outcome.

Activism and conflict are other possibilities for us, but the track record of conflict in history is that it is ridiculously resource-intensive, invariably creates a lot of destruction (and associated carbon), and usually doesn’t create any lasting, sustainable solutions. So we may just be stuck with growing upwards as a collective species and finding lasting collaborative solutions.
 
Gee, I am old enough to remember computers in the '80s, the first mobile phones, and the mid-'90s internet. Look at where we have come in a rather short time. The mind boggles at what will be possible by 2030, let alone 2050. Sentient AI is not only possible but likely. As for just pulling the plug, maybe computers will be able to figure out how to power themselves. Zero-point energy comes to mind: pulling unlimited amounts of energy out of the quantum field.
 
Great stuff, thanks.

However, the article gives away the essential point: the computer only gives suggestions -- of course, it's still just an unthinking algorithm. The human historian still needs to select the right answer, based on his experience and knowledge -- of course, he is the one who actually has insight, who thinks.

That the computer is better at the task in some ways is due to its incredible speed and storage capacity -- the latter of which, unlike human memory, makes no mistakes. Yet as I said before, the computer is able to hide its inherent stupidity behind these capabilities. They don't magically make it "intelligent".

The article again makes the fundamental mistake of claiming that the computer "understands" better. This is gobbledygook. A computer doesn't understand anything, no matter how fast it is and how many facts it can store.


What do you think of recent studies that claim human free will is an illusion, that we act automatically according to our ingrained belief systems with little to no deviation? At what point do we consider activity "thinking"?
 
Going electric is the best way forward, and underground electric power lines are coming up more and more... the climate dictates it.

We have underground lines here, but the long feeds are above ground, so there are occasional outages. Unfortunately, the high-tension wires have to be elevated. The problem in CA is old infrastructure that was not maintained or upgraded.
 
We have underground lines here, but the long feeds are above ground, so there are occasional outages. Unfortunately, the high-tension wires have to be elevated. The problem in CA is old infrastructure that was not maintained or upgraded.

Good Sunday Steve,

It reminds me of the 1998 ice storm in Quebec and upstate New York...the high tension elevated power lines came crashing down from the weight of the ice. That was an exceptional winter.

Just yesterday I was reading about the winds, the Northern California fires and those near Los Angeles, the power cut off to many customers, and how some fires were started by power lines... again. A few structures and homes went up in smoke, along with acres of brush and forest.

The cost in human lives and property, and to insurance companies and electric companies, keeps climbing every year. There's an urgent need to change things. My best guess is that in some areas underground power lines (expensive as they are) are one of the best solutions.
...Also, land management.
 
Just yesterday I was reading about the winds, the Northern California fires and those near Los Angeles, the power cut off to many customers, and how some fires were started by power lines... again. A few structures and homes went up in smoke, along with acres of brush and forest.

Really bad for PG&E. The Governor of CA is already pissed with PG&E for not making the investments, and now you seem to have a high-tension failure caught on video starting a fire.
 
As I said yesterday, "is the glass half full or is it half empty?" Sure, it is science fiction; however, I lean towards DaveC's comment above.

2050 is when I read

Hi Steve,

I suggest reading about philosopher John Searle's "Chinese Room" argument. It brilliantly and devastatingly shows (and has never been convincingly refuted) that, and why, a computer does not understand anything. It shows that the emperor of computer "intelligence" really has no clothes.

Here is a good link. It is maybe not an easy read, but well worth it; it also features nice animations:

Searle and the Chinese Room Argument

(for the animation towards the bottom of the first page, you need to click not just on "begin" but also on the arrow in the image that then opens up)

The "next" page then explains the "Robot reply" and Searle's answer to it.

Here is a video that vividly explains what happens in the "Chinese Room"
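For anyone who prefers code to thought experiments, the purely mechanical rule-following at the heart of the Chinese Room can be sketched in a few lines of Python. This is just an illustrative toy under my own assumptions (the rule-book entries are invented, not from Searle): the "operator" matches incoming symbols against a rule book and copies out the paired response, and no step requires understanding what any symbol means.

```python
# A toy "Chinese Room": the operator mechanically matches the input string
# against a rule book and copies out the paired response. Every step is
# pure symbol lookup; nothing in the procedure involves understanding.
# (Illustrative sketch only; the rule-book entries are made up.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for an input string, purely by lookup."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```

From outside, the room appears to converse in Chinese; inside, it is only a lookup table. Searle's point is that making the rule book bigger or the lookup faster never adds understanding.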

 


I guess we hope to live long enough to see if science fiction becomes science fact in the next 30 years.
 

About us

  • What’s Best Forum is THE forum for high end audio, product reviews, advice and sharing experiences on the best of everything else. This is THE place where audiophiles and audio companies discuss vintage, contemporary and new audio products, music servers, music streamers, computer audio, digital-to-analog converters, turntables, phono stages, cartridges, reel-to-reel tape machines, speakers, headphones and tube and solid-state amplification. Founded in 2010 What’s Best Forum invites intelligent and courteous people of all interests and backgrounds to describe and discuss the best of everything. From beginners to life-long hobbyists to industry professionals, we enjoy learning about new things and meeting new people, and participating in spirited debates.
