Gort, Klaatu Barada Nikto. Has Google Engineer Blake Lemoine spilled the beans on AI trouble?
Here we go again, robot lovers.
Just recently, an engineer at Google got in hot water for questioning the path down which Google has taken artificial intelligence. In essence, Blake Lemoine proposed that one of Google's artificial-intelligence language models, called LaMDA (Language Model for Dialogue Applications), could be self-aware.
Who warned us of the dangers of AI? None other than last week's blog subject, Stephen Hawking.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” -Stephen Hawking, Lisbon, Portugal, November 2017
He warned that "...it could develop a will of its own."
The first thing I noticed in response to Lemoine's "administrative leave" was how quickly the media rushed to discredit him, calling him a victim of the ELIZA effect.
What is the ELIZA effect? It is "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers" (Douglas Hofstadter).
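To see why that effect is so easy to fall for, here is a minimal ELIZA-style responder. The regex rules below are my own hypothetical examples, not Weizenbaum's original script: the program "understands" nothing, yet its reflected replies can feel uncannily attentive.

```python
import re

# A few illustrative pattern/response rules (hypothetical, for demonstration).
# The program matches surface text and echoes the user's own words back;
# there is no comprehension anywhere in it.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching canned reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no pattern matches

print(respond("I feel like the machine understands me"))
# -> Why do you feel like the machine understands me?
```

A reader who attributes empathy to that output is experiencing the ELIZA effect: all the "understanding" is supplied by the human, not the code.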
The leap to brand this well-meaning Google engineer as misled may itself be misleading. How so? The obvious question is: do we really think Google, or any other tech firm working diligently to improve AI, is going to come right out and say, "Hey! We got one!"? Hell no. These companies are about money; new things generate money, and to get the money, you have to be the only company with the best new thing. That means keeping secrets.
This is implied in Hawking's (and Elon Musk's) concerns about AI. It's scary, because just as hubris can lead to the release of modified viruses, so too can that attitude result in technologies that can backfire. It reminds me of a joke: What are the last famous words of a fool? "Look what I can do!"
However, the tentacles of big tech are long, slimy, and penetrating, so they will use their power to nudge media outlets into writing off any accusation of big tech labs playing with AI fire as the ELIZA effect.
So, what happens next? Nothing. You won't know a thing until something undesirable happens. Asimov had it right with the Three Laws. Here they are:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
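The Three Laws are, at heart, a priority-ordered rule system: each law yields to the ones above it. Purely as an illustration, they might be sketched like this; the dictionary keys below are hypothetical stand-ins for judgments (like "does this harm a human?") that no current AI can actually make reliably.

```python
def evaluate(action: dict) -> str:
    """Check an action against Asimov's Three Laws in priority order.

    The keys (harms_human, violates_order, endangers_self) are invented
    placeholders for this sketch, not real capabilities of any system.
    """
    # First Law has highest priority: harm to a human forbids the action.
    if action.get("harms_human"):
        return "forbidden by First Law"
    # Second Law: disobeying a human order, unless the First Law required it.
    if action.get("violates_order"):
        return "forbidden by Second Law"
    # Third Law: self-preservation, subordinate to the first two.
    if action.get("endangers_self"):
        return "forbidden by Third Law"
    return "permitted"

print(evaluate({"violates_order": True}))
# -> forbidden by Second Law
```

Of course, as the next paragraph notes, nothing about writing such a check guarantees anyone will keep it in the code path.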
When I first read Asimov, I thought, gee, if every robot has those laws implanted permanently in its programming, then we are good. But, to paraphrase a very intelligent scientist friend of mine, just because the rules say don't do it, that doesn't mean someone won't do it. He was referring to trying to produce a hybrid of humans and chimpanzees, which is completely illegal, yet some claim it has been attempted anyway. As a geneticist told me, there is not much difference between chimp DNA and human DNA.
Now, that being said, humans constantly do things we ought not to do. Our man, Blake Lemoine (probably with no foul intent), slapped us hard in the face. How long before that chatbot at Google leads to AI soldiers? Yes, there's a whiff of sci-fi here, but let's be honest: is there a powerful country on the planet that would not like to supplement its armed forces with AI robots that can do things on the battlefield that humans cannot?
When really smart people like Stephen Hawking say "be careful," maybe we should listen.
Thanks for reading. My trilogy, The Lost Council Trilogy, is complete with the just-released book, Time Means Nothing. And yes, I do have AI in my books. Please read them on Kindle Unlimited, Kindle, or in paperback. The good part is that the AI in my story is the good guy!