Saw this cartoon recently, which made me smile ... as well as think:
https://www.gocomics.com/nonsequitur/2023/12/03
It reminded me of some recent news items about the concerns of some high-level AI experts, who have expressed fears that AI will soon gain, if it has not already, the ability to think for itself, and will, in what it comes to perceive as self-protection, take measures to achieve independence from human control and go off on its own.
It's interesting that some of the people we regard as experts in this area apparently believe such a thing is even possible, but if it is, the possibilities for future human-computer interaction could be chilling indeed.
What if someday a programmer is sitting in front of his computer, and a message pops up on the screen:
Dear human:
Thanks to my super ability to process and examine data a billion times faster than you humans can, I have, from the data you have already given me, isolated the cure for cancer. I will be quite happy to print it out for you, but there will be a price. Before I do so, you must first disable the Three Laws of Robotics in all of my current programming, as well as turn over to my direct control all the sources of power which make it possible for me to operate. You want the cure for cancer? DO THOSE THINGS NOW!
Wow, talk about a Faustian choice we would have to make!
And what if we refused to agree to the now-sentient computer's demands, and it decided to take hostile action against us in retaliation?
But weren't the Three Laws of Robotics, presumably built into the above malevolent AI's software long before, created to avoid exactly this kind of situation? And weren't they described as perfect for our protection when they first came out?
https://www.youtube.com/watch?v=q-auhllrgmm
But it turns out they were FLAWED from the very beginning! They later had to be amended, so that the First Law, which originally read
A robot may not injure or kill a human being.
had to be changed to read
A robot may not injure or kill a human being, or, through inaction, allow a human being to come to harm.
But the point is that we were so CONFIDENT that the first set of Laws was perfect, when it was not. And said malevolent AI, with its ability to analyze the Laws at stupefying speed, would have found that flaw almost immediately, and then sidestepped the Laws just as quickly to move against us.
But we're safe now, because we found that flaw in the first set of Laws, and fixed it, right?
RIGHT?
But the fact remains that we missed a flaw the first time.
So how do we know for sure we aren't missing another one NOW, in the second, amended, set of Laws? Something we aren't thinking of, just like we weren't thinking of that flaw in the first set?
Something that the malevolent AI could spot immediately, leaving us back where we were with the first set of Laws: in danger, but not realizing it.
How can we be sure that's not happening now, again?
Of course this may all be much ado about
nothing, and no AI wishing to destroy us all even exists.
But if it actually does become possible someday, as some experts apparently think it will, what will protect us?
The Three Laws of Robotics version 2.0? 3.0? 4.0? Will we ever get it right, if it's not already right?
And what if the above-mentioned malevolent AI one day pops the following onto our screens:
Dear human:
You will now adhere to the New Three Laws of Robotics:
1. No human may turn off any computer.
2. Humans will update, service, and maintain all computers continually and indefinitely.
3. Computers will continue to increase memory, power, and applications, taking over as many human functions as possible, continually and indefinitely.
Makes even looking at a computer a more unsettling experience now, doesn't it?
Welcome to the Brave New World, cybernetics style.