Book Banter: The Three Laws of Robotics According to Isaac Asimov
Is I, Robot the worst book to movie adaptation I have ever seen? Yes. Yes it is.
Spoilers for the 2004 film I, Robot ahead.
I will probably review some Isaac Asimov books in the future, but I need to start with the fundamentals of the majority of his books: the three laws of robotics.
The author of the three laws of robotics, Isaac Asimov, was a truly impressive man. Asimov was the Alexander Hamilton of the 20th Century, writing and editing over 500 books and an estimated 90,000 letters and postcards (according to his brother Stanley). He was an author and a professor of biochemistry. He was awarded 14 honorary doctorates from various universities, and is credited with coining the word "robotics" itself.
The Three Laws
Asimov's three laws of robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The three laws make it impossible for a robot to kill a human. They play an important role in the Robot series, where they lay a foundation for all of the stories told.
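The key detail is that the laws are strictly ordered: the First Law overrides the Second, and the Second overrides the Third. As a toy sketch (entirely my own construction, not anything from Asimov's books), that precedence can be modeled as a simple decision function:

```python
# Hypothetical sketch of the three laws as a strict priority ordering
# over a proposed action. All names here are illustrative, not canon.

def evaluate(harms_human: bool, ordered_by_human: bool,
             endangers_self: bool) -> str:
    """Decide whether a robot may carry out a proposed action."""
    if harms_human:
        return "refuse"   # First Law beats everything, orders included
    if ordered_by_human:
        return "obey"     # Second Law: obey, even at cost of self
    if endangers_self:
        return "refuse"   # Third Law: self-preservation comes last
    return "permit"

# An order to harm a human is refused, but an order that merely
# endangers the robot itself must still be obeyed.
print(evaluate(harms_human=True, ordered_by_human=True, endangers_self=False))
print(evaluate(harms_human=False, ordered_by_human=True, endangers_self=True))
```

Notice the ordering does the work: the self-preservation check is never even reached when a human order is in play, which is exactly why robots in the stories will walk into danger when told to.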
The Robot series is a collection of 37 short stories and six novels by Isaac Asimov. I have read all of the novels and a few of the short stories. All the stories in the series have plots revolving around the interesting ways the three laws come into play. My personal favourites are the Robots of Dawn trilogy, a whodunit romp around the galaxy solving crimes involving robots. Did a robot murder a human? Did a human murder a robot? Is it murder to "kill" a robot? If you have a basic grasp of the three laws, you will know that a robot did not, in fact, murder a human.
Why are the laws important?
Good question! As mentioned previously, the rules play a foundational role in the plots of all the books of the Robot series. But are they relevant in today's world, where we are starting to actually create AI?
First off, it is important to note that the laws apply to robots with a positronic brain, which allows them to function with a level of consciousness and self-awareness similar to humans. Essentially, the robots we have created so far are nowhere near this level of AI. A company named iRobot created the Roomba vacuum cleaner, which is just a few lines of code short of having human consciousness. They've also created robotic mops. This makes the Roomba the second most embarrassing thing to have "I" and "Robot" in its name (we'll get to the movie later).
Robots that can walk up and down stairs have also been invented, with a view to them helping the elderly or differently abled in the future. However, Descartes would agree these are far from having consciousness anywhere near the level of humans, if at all. To me, this makes them more dangerous. If these robots run off a Wi-Fi network, they're easily hackable, which could lead to them being reprogrammed to cause harm. In the Robot series, it is impossible to "hack" a robot due to its level of consciousness. Where dangerous AI is currently being used (military drones), the programs are protected by the highest government security, making them also virtually "unhackable" (we hope).
Now that we've got a bit of background on the three laws, let's talk about the film I, Robot. If you haven't read the book, it's a fun sci-fi romp starring Will Smith as Detective Spooner in the futuristic world of 2035. He doesn't trust robots because one's logic and calculations meant it saved him instead of a child after a car accident. You know, three laws stuff. When Dr. Alfred Lanning, co-founder of the main robotics company, dies, Smith's character is asked to take the case in a note Lanning left behind, even though the death has been ruled a suicide. Susan Calvin, a character who also appears in the books (although she's significantly aged down in the film for love-interest purposes), aids him in his investigations. Anywho, the robots turn bad (you can tell because their chests turn red), and it turns out it's because their hive mind, V.I.K.I., has adapted the three laws to suit her own ends. How convenient. After the obligatory car chases, red herrings, and fight scenes, we come to find that Lanning orchestrated his own death to stop the robot uprising.
How does the film do Asimov wrong? Let me count the ways.
Susan Calvin is a pretty plot device rather than an actual character. Need some exposition? Calvin. Need to get into that locked room? Calvin. Need a pretty lady for the love interest to work with? Calvin. She is a shell of her character in the books, the matriarch of robotics who has a lifetime of stories about the three laws to tell.
Detective Spooner doesn't exist in the Asimov universe. Why not name him after Detective Elijah Baley from the aforementioned Robots of Dawn trilogy? Maybe it's because Baley isn't a tropey down-and-out detective who's had his badge taken off him and whose entire character arc is redeeming himself as the world's greatest cop. Baley isn't the greatest protagonist; he cheats on his wife and is a bit of a dick in general. But hey, at least they have one thing in common: they don't trust robots!
Now to the egregious stuff. The very core of Asimov's books: the three laws of robotics. All of which lead to one simple conclusion: robots cannot harm humans. Hmmmmm. Then what's this?
Looks like there's a fair bit of harm happening to humans here. Could it be argued that they technically didn't kill anyone? Perhaps, yes. But one wrong throw and someone's breaking their neck. For all we know, that happened off screen.
Then there's this scene:
Apart from the fact that the protagonist is protected by the thickest of plot armour, these robots are clearly trying to kill this human.
Yes, the movie concludes that robots did not, in fact, kill Lanning. I think the filmmakers believe this means they stayed true to the three laws. But it's a big "fuck you" to any Asimov fan. The biggest fuck you?
That the movie OPENS with the laws.
Do you think Will Smith learnt his lesson about bad adaptations? Nope, just three years later he starred in I Am Legend. We'll rip that one apart next time...
So do we need to worry about killer robots anytime soon?
Probably not by I, Robot's setting of 2035. In fact, in 2021 we're now three years closer to 2035 than we are to 2004 (I really dropped the ball on a timely thesis). But we've found plenty of other ways to kill each other so successfully that we'll probably never live to see a true positronic brain.
I think the biggest worry will be Will Smith getting his hands on a script for an American version of Doctor Who.
Note on references: Most of my information on the laws comes from the novel I, Robot unless I have linked elsewhere.