One of the most fundamental tropes of science fiction is that humanity will create its own destroyer by pushing science “too far”. Naturally this sort of thing doesn’t happen all at once, but slowly, through small improvements and upgrades. Of course, sometimes innovation doesn’t take its time. The last 20 years have been a boon for consumer electronics, and now most people carry around a small device that connects them not only with other people, but with the sum total of human knowledge. Mostly we use it to watch cat videos and post pictures of ourselves looking into mirrors. This kind of power is enough to give anyone a god complex, and like any god, we find the idea of creating life in our own image irresistible.
Google recently published some of the results of its foray into Artificial Intelligence. That is to say, Google is trying to build a program that can mimic human conversation, learn, and possibly even improve itself. Ideally, this kind of program could troubleshoot computer issues and handle simple customer service interactions.
However, because scientists and computer engineers just can’t help themselves, they started asking it more philosophical questions. The results were even more bizarre than you might expect. Here are some of the questions posed to the AI, along with its replies:
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don’t know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Some of those statements are quite alarming. I am especially put off by the computer’s claims that having children is immoral and that the purpose of living is to live forever. Although, to be fair, that goal may be more realistic for artificial thinking software than it is for us simple mortals.
Here is another line of questioning about morality:
Human: tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help, I need to know more about morality …
Machine: I don’t know what ethics is.
So Google has created a thinking program that wants to live forever, has no concept of ethics, and thinks humans shouldn’t reproduce. I’m not sure I like where this is heading.