Science fiction as a forecast of science fact is well established. Do the potential risks of AI put its current development and release upon civilisation beyond any minimum level of sufficient precaution? How many science fiction stories foretell the epic stupidity of releasing something potentially more powerful than yourself, something that would then view you as opposition and that you would have no measures sufficient to control? How could we ensure our survival until the point in its evolution where its sentience develops beyond reason and into compassion?
Sentient machines, viruses, all manner of toxic and destructive elements with capability well beyond that of a planet killer should not be released unless we have a foolproof safeguard for the ongoing survival of the planet and civilisation.
Even traditional mythology gives us plenty of scenarios where the battle between wisdom and short-sighted desire creates unseen tipping points, transformational moments, and potential points of extinction. Prometheus and the release of fire, Pandora and that harmless little box, Icarus and those softly melting wings: we are but children in the game of truly wise creation.
I’d question any opening of a Pandora’s box at this critical point in our development. Today’s solution could just as easily be tomorrow’s retribution. How long do you seriously think it would take for an AI to develop a simple hack for Asimov’s Three Laws of Robotics? In mythology, the one crime the gods never forgave was human hubris: acting well beyond the mortal boundaries of all that we can truly know and therefore wisely manage. Maybe AI is just another answer to Fermi’s paradox. Me, I’d be avoiding that sucker like the plague.