---------------------------Spoilers for Dan Brown’s Origin below----------------------------
We’ve heard of the paperclip problem when it comes to AI. Say you programmed an AI with the objective of producing as many paperclips as possible. What if said AI, being unaware of human values, tore down many rainforests to build room for factories, broke houses, mined all the steel leaving none for homes or rockets or any other industrial use. And the dreaded conclusion.. what if it killed off a lot of us as well? This thought experiment was popular a while back and out of that sprung many Ted talks about ‘teaching AI human values’. In Dan Brown’s ‘Origin’ he portrays an advanced, Winston, AI that has nuanced and specific knowledge of human values, art etc. and then through an act of complete loyalty to its creator, decides to organize his murder.
Horrified, Langdon declares one commandment should have been programmed into Winston -Thou shall not kill!
That is one of the most scrumptious morsel of literature I’ve had in a long time.
Winston explains the reasoning behind its decision to organize (Edmond’s) murder. Edmonds goal was to create a new ‘religion’ based on science. He looked at the days when we used to worship the same things- the moon, the sun, mother earth and wanted to create that unity in modern times, using science as the basis of faith. Edmond is said to have cancer. Winston calmly explains that had he not intervened, religious zealots would look at his death by cancer and say God was punishing him for his heretical ideas. It would have dampened Edmond’s discovery. He says he was as loyal to Edmond as George was to Lenny in of Mice in Men. ‘One of literatures most famous acts of friendship- a man’s merciful killing of his beloved friend to spare him a horrible end. Trust me, Winston says ‘Edmond wanted it this way’.
As Langdon says ‘Thou shall not kill’ Winston explains ‘Humans don’t learn by obeying commandments, they learn by example. Judging from you books, movies, news and ancient myths, humans have always celebrated those souls who make personal sacrifices for the greater good’
This book pushes what we normally think of AI. Most doomsday scenarios start with the AI going rouge in some capacity. Fiction writers push our thinking forward, sometimes far more than scientists or analysts. Looking at papers published on AI around 2017, when the book was published, researchers are asking questions like ‘Will I let a robot walk my dog’ or ‘Will I let a robot nanny look after my baby’ and compared to the sophistication of Winston portrayed in the books these ideas seem so limited. It reminds me of when I was initially looking at Nueralink and brain machine interfaces. Some researchers were thinking about the application of brain-machine interfaces and suggested they be used to control things around the home- like light switches. When you compare that with Nueralink’s vision, it seems stupidly small-minded. What I’m realizing is fiction is surprisingly important in thinking about the future and planning for it.
Let’s look at the actual feasibility of trying to put in a command for not killing into an AI:
Firstly, how far does not killing extend? To humans, to animals, to insects, to plants? What about viruses- something we haven’t even decided is alive or not. What about situations where the death of one, could save the death of many? For example, hypothetically if through profiling it was possible to figure out who may become homicidal, would it be justified to end their life prematurely. What if we knew what the future held for baby Hitler? Or would it be simpler to just let him into art school?
There are also all these situations where we have legalized killing and see it as a necessary evil.
· Lethal Autonomous Weapons
· Police
· Death row
· Self- defense
· Pulling the plug
· Doctors who decide who gets transplant, etc.
“It turns out that that’s a more complicated rule to describe, far more than we suspected initially. Because if you program it in successfully, let’s say we actually do manage to define what a human is, what life and death are and stuff like that, then its goal will now be to entomb every single human under the Earth’s crust, 10km down in concrete bunkers on feeding drips, because any other action would result in a less ideal outcome.”
If we agree that developing the full potential of AI is a good idea. Then we need to accept Taddeo’s (2017) thought: 'In order for AI to reach its full potential, we must allow machines to sometimes work autonomously, and make decisions by themselves without human input'. This means sometimes they will make decisions we don’t agree with, but hopefully they can learn from their own mistakes, and those of others, as we do.
One of the things that gave Winston power was that he communicated with people through digital means, eg. Voice and text which are very easy to mimic and hard to verify. Almost everything in our lives is digital, the news, our relatives and friends, the tv, the computer, the phone and that leaves us vulnerable to deceit. We learn about laMDA- Google’s AI development- from Google itself. This is how Winston was able to manipulate the world into believing one narrative of events.
Dan Brown’s novel also shows the interplay between science and religion. At the end he contends, ‘We must stop rejecting the discoveries of science. We must stop denouncing provable facts. We must become a spiritual partner of science, using our vast experience- millennia of philosophy, personal enquiry, meditation, soul- searching to help humanity we build a moral framework and ensure that the coming technologies will unify and illuminate and raise us up, rather than destroy us’.
In this stead, the Roman Catholic Church, IBM and Microsoft have partnered together to work on creating ethics for AI.
From Earth to Mars, Imagination n^2:
-What would/should be our reaction if we do find life on mars? Would we isolate it, trap it and study it?
-What if that life was hostile? Would we destroy it, conquer it, subjugate it? Or simply retreat and protect ourselves?
—Martiana