There is an avalanche of literature and film treating the subject of robots, robot wars and the rise of the machines. There are technologists, philosophers and futurists who love to talk about our “mind children” and how we will evolve into our own creations. The most recent Terminator installment carries on this long tradition of wondering just when our toasters will tire of their carbon-based masters and rise up against us. The Cylons chasing the Battlestar, the machines plugging us into the Matrix and the machines hunting Sarah and John Connor all reveal something quite insightful about our relationship to machines. We are afraid. Why is this?
We present ourselves in modern technological society as intelligent world-shapers who, through our technology, will solve problems…even save the world. If we let Science run free, unhindered by Luddite concerns or ancient ethical systems, we will create wonders with our ingenuity. Yet we are still afraid. Futuristic technology certainly has its optimists and pessimists: for an example of each, one need look no further than Ray Kurzweil’s wonderful immortality or Bill Joy’s fear of the gray goo.
Apparently, a philosopher right here at Rutgers University has been musing about whether robot warriors (aka terminators) will be our salvation. H+ magazine recently interviewed said philosopher about the promise of robot-based warfare, which is already a reality today in some sense. The interview is quite interesting in that it discusses how robots might make the military more moral in its warfare. One particularly interesting section is commentary on the work of Georgia Tech’s Ron Arkin in making super-moral, or at least more moral, robot soldiers:
Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.
Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron also designed architecture for robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.
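Arkin’s actual architecture (his “ethical governor”) is far richer than this, but the core idea in the passage above - a robot that refuses an order it judges to conflict with humanitarian law or the rules of engagement - can be caricatured in a few lines. Everything below (the field names, the thresholds, the checks) is a hypothetical sketch for illustration, not Arkin’s design:

```python
# Toy sketch of an "ethical governor": a gate that can veto an engagement
# order. All names and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Order:
    target_is_combatant: bool  # classification of the target
    collateral_risk: float     # estimated risk to bystanders, 0.0 to 1.0
    confidence: float          # confidence in the classification, 0.0 to 1.0

def governor(order: Order, max_collateral: float = 0.1,
             min_confidence: float = 0.9) -> bool:
    """Return True only if every constraint passes; otherwise the
    robot refuses - i.e., it 'overrides' the order."""
    if not order.target_is_combatant:
        return False  # never engage a civilian
    if order.collateral_risk > max_collateral:
        return False  # too much risk of collateral damage
    if order.confidence < min_confidence:
        return False  # too hasty: not enough information yet
    return True

print(governor(Order(True, 0.05, 0.95)))  # clear, low-risk order -> True
print(governor(Order(True, 0.05, 0.60)))  # uncertain -> False: wait, don't fire
```

The last check is the interesting one: a machine with no self-preservation pressure can simply decline to fire until it has enough information, which is exactly the “overly hasty decisions” point made in the interview.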
So here is a scenario where our terminators could be programmed to “turn on us” if they don’t think their people are acting according to “humanitarian laws” (whatever those are and whichever side defines them). Interestingly enough, the famous Laws of Robotics created by Isaac Asimov read as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
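The Three Laws are, in effect, a strict priority ordering: each law binds only insofar as it does not conflict with the laws above it. As a toy illustration (all names are hypothetical, and this deliberately ignores everything Asimov’s stories show about how such rules unravel), they can be read as a lexicographic preference over candidate actions:

```python
# Toy encoding of Asimov's Three Laws as a lexicographic priority:
# human safety dominates obedience, which dominates self-preservation.
# Field names are hypothetical illustrations.

def law_key(action: dict):
    # Lower tuples sort first (False < True in Python), so we prefer:
    # no human harm, then obedience, then self-preservation, in that order.
    return (
        action["harms_human"],         # First Law
        not action["obeys_order"],     # Second Law, only as a tiebreaker
        not action["preserves_self"],  # Third Law, last priority
    )

def choose(actions):
    """Pick the candidate action the Three Laws most prefer."""
    return min(actions, key=law_key)

candidates = [
    {"name": "obey",      "harms_human": True,  "obeys_order": True,  "preserves_self": True},
    {"name": "refuse",    "harms_human": False, "obeys_order": False, "preserves_self": True},
    {"name": "sacrifice", "harms_human": False, "obeys_order": False, "preserves_self": False},
]
print(choose(candidates)["name"])  # -> "refuse": disobeying beats harming a human
```

Note what the ordering implies: a lawful-sounding order loses outright to the First Law, which is precisely the loophole the rest of this post worries about.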
Many of you may remember that these laws were the subject of the film I, Robot (Asimov’s book of the same name also contains the laws, though the film does not represent the book). The movie gives an interesting view of machine consciousness and of how the Three Laws just might lead the robots to take over…for our own good, of course. Mechanized warfare is here and here to stay. There will be robot warriors of some form or another, but the moment we think they can improve on human beings is the moment we forget that we are their creators. As such, we are afraid - for bad gods we will make.
Mankind once feared its capricious pantheon of gods; now we fear ourselves and the work of our own hands. We fear that someday they will be like us and rise up against us like our ancestor Cain. We know our sins will follow us into them, and even John Connor may be unable to save us.
Is this inevitable? No. Is the pride of man such that we will likely create technologies which continue to bring carnage and destruction on the earth? Yes, very likely. Humanity has been telling itself that it needs to shake free of sophomoric ideas of sin and depravity, yet they remain in us. Checks and balances are needed because humanity is wicked. I am by no means a Luddite, but I do think we should give more care to that which we create.
We are not gods and we know it, so we remain afraid.