Wednesday, February 13, 2013

Review - I, Robot by Isaac Asimov


Stories Included
Robbie
Runaround
Reason
Catch That Rabbit
Liar!
Little Lost Robot
Escape!
Evidence
The Evitable Conflict
Full review: This collection of short stories by Asimov introduces and explores his original Three Laws of Robotics (later, in the Foundation-Robot crossover books, some nuances and additional laws were adopted, but they never seem as well thought out as the original three). The stories are set within a framing narrative, an interview with an aged robotics expert named Susan Calvin (who appears in several of the stories), and together they build a coherent picture of how humans and robots interact.

The stories build upon one another, the first several introducing the Three Laws and exploring their implications. The later stories deal mostly with situations resulting from the interaction between the Three Laws, problems caused by rigid adherence to them, or, most frightening, what happens if a robot doesn't have the Three Laws programmed into it at all. Most of the stories take the form of puzzles: a robot does something odd or unexpected, or fails to do something that it is supposed to, and the human characters have to figure out why. Most of the puzzles are, in true Asimov form, well-constructed with quite logical solutions.

The first story in the book, and the first robot story Asimov ever wrote, is Robbie, the story of a very early model robot purchased by a wealthy family to serve as a companion for their daughter. The girl, Gloria, is entranced by her metal companion, whom she calls Robbie, but her mother is less enthused, absorbing the irrational anti-robot sentiment encroaching upon society. Gloria's mother badgers her husband into getting rid of the robot and trying to replace it with a dog, after which Gloria becomes withdrawn and despondent over the loss of her friend. The obvious answer seems to be to take Gloria to New York and distract her with diverting amusements, all of which fail, until Gloria's father finally suggests taking her to see how robots are manufactured so that she will realize her friend was a machine and not a person. While at the factory, Robbie turns up and saves Gloria's life, finally convincing her mother to relent and allow Robbie to return to their house. The story is interesting both for its intended message that robots can be benign and for some seemingly unintended messages. Gloria's mother is depicted as hysterical and led by popular sentiment, while Gloria's father, by contrast, is presented as coolly rational and seemingly more concerned with his daughter's welfare than his spouse is. An interesting side note is that while Gloria's father's name, George, is given in the story, her mother's name is conspicuously absent; she is referred to only as Mrs. Weston. This sort of casual sexism is prevalent in a lot of Asimov's work, but it is fairly conspicuous here. The other element that becomes clear is that Mrs. Weston's fear of robots will probably never even occur to her daughter, for whom robots are not some sort of alien thing, but rather a normal part of life, and not merely machines, but friends.

The second story in the volume, Runaround, introduces the characters Donovan and Powell, a pair of robotics experts who are trying to determine whether a mining operation on Mercury would be feasible using robots. The element that sticks out the most in the story is that just one story into the robot saga, people are already monkeying with the Three Laws of Robotics, this time by weakening the Second Law and amplifying the Third Law, which seems like a bad idea, since everyone seems to assume that the only thing keeping robots from destroying humanity is the careful balance of the Three Laws. But since a normally functioning Third Law would supposedly get in the way of mining on Mercury, Speedy, the main robot in this story, has had his ramped up. This, combined with some less than emphatic orders, has resulted in Speedy behaving oddly and wandering in circles on the surface of Mercury, a puzzle that Donovan and Powell have to solve from afar, as the surface is so hot that prolonged exposure to the Sun would cook them alive. They have some older model robots on hand to help, but due to earlier anti-robot hysteria, the older models are only able to move when a human is riding piggyback on them. This, it seems to me, rather defeats the purpose of having robots, which is at least partially to have something that can do work in an area too dangerous for humans; if you need a human to tag along on all of your robots, a decent part of their utility is lost. In any event, after trying some other options, the two humans hit upon the idea of endangering themselves to trigger the always supreme First Law response in Speedy, defeating the engineering puzzle and finishing the story.

Building on the first two stories is Reason, featuring the return of Donovan and Powell, now working on a space station power relay and confronted with a more advanced robot named Cutie, designed to engage in abstract thought sufficient to allow it to run the station. Cutie determines that he could not have been made by humans, reasoning that an inferior being could not be responsible for the creation of a superior entity such as himself. The robot then uses logic to reach a series of seemingly absurd conclusions, winds up worshiping the power converter, and converts all of the other robots on the station to his way of thinking. Donovan and Powell try to convince Cutie that his conclusions are wrong, but since the robot refuses to accept their premises and dismisses all of their arguments as being based on fantasies, they fail. This story is clearly Asimov's way of picking apart the kind of Aristotelian logic-based "prime mover" explanations that some religious apologists think are so airtight. In one short science fiction story Asimov demolishes the entire life's work of Aquinas, exposing the utterly childish nature of Aquinas' "five ways" - and he manages to make a funny and interesting story to boot.

Asimov keeps following Donovan and Powell about in Catch That Rabbit, a story in which they have to troubleshoot a new robot named Dave that serves as an overseer for six subordinate robots. The idea behind Dave is interesting - he is essentially a single robot in seven bodies, with one "master" unit and a collection of subordinate "finger" units. Dave seems to work fine when they observe it, but when they send it off to work on its own something goes wrong and it ends up doing no work, the seven units instead engaging in a bizarre sequence of what appear to be marching formations. The story doesn't actually turn on the Three Laws, but rather on the limitations of processing power, and the solution to the puzzle seems trivially easy once Donovan and Powell figure it out. Catch That Rabbit is a mildly amusing story, but isn't anything more than that.

The book follows the trivial nature of Catch That Rabbit with Liar!, a story that deals with what could have been a very interesting issue, but which ultimately throws away the puzzle it presents in favor of a trivial ending. A robot named Herbie has unexpectedly turned out to have the ability to read minds, and no one knows why. A team of experts including Susan Calvin tries to figure out how this happened, approaching the issue from both a design perspective and a psychological perspective. While questioning the robot, Calvin and the mathematician Bogert get what appears to be very promising information. But as the story progresses, Herbie's reliability comes into question. Calvin figures out that the robot is unable to give truthful but unpleasant answers because of the First Law, and uses the logical contradiction this causes to talk Herbie to death. But Calvin's personal vendetta doesn't just short-circuit the robot, it brings the investigation to a screeching halt without ever figuring out what produced a mind-reading robot. Basically, Liar! is an engineering puzzle story that doesn't bother to solve the puzzle because, it seems, Asimov thought even someone who is repeatedly described as a consummately professional woman is liable to become unhinged if her romantic expectations are dashed.

The story Little Lost Robot deals with a robot that is not really lost, but rather hidden. Having established the parameters of the Three Laws of Robotics in the previous stories, in this one Asimov imagines that the laws might be modified. Here the First Law has been altered to merely command a robot not to harm a human being, leaving open the possibility that a robot could allow a human to come to harm through inaction. This was implemented to prevent robots from charging in and trying to save humans working on a secret project on an asteroid that would subject them to improbable but real risks. Given the likely negative public reaction to the revelation that robots had been modified this way, the government had this done in secret, with no identifying marks and no serial numbers on the robots, which works out well until one of the researchers tells a modified robot to "get lost" and it hides among a shipment of "normal" First Law robots about to be shipped out. Susan Calvin is called in to try to uncover the "lost" robot, and engages in a series of psychological tests in an effort to unmask the impostor. However, her quarry is wily and intelligent, and revealing its identity proves more difficult than expected; only the intersection of the modified First Law, alleged robot hubris, and some aftermarket training the missing robot received allows Calvin to do so. One wonders what would have happened had this happy intersection of three unrelated factors not been available to prevent a robot with a weakened First Law from running wild. And that is typical Asimov: create a seemingly safe and predictable technology, demonstrate how it can be used safely, and then show how human incompetence and meddling can upset the apple cart.

Following directly on the heels of Little Lost Robot is Escape!, which is almost not a robot story at all, instead featuring a positronic brain used essentially as a very powerful computer. Consolidated, one of U.S. Robots' corporate rivals, offers a contract to feed a series of questions concerning the development of a hyperdrive that would allow interstellar flight to U.S. Robots' thinking machine. Apparently Consolidated's own mechanical brain became discombobulated when they tried to feed the data to it, but they suggest that U.S. Robots' more sophisticated positronic brain could unravel the problem. Although they suspect the contract is a trap intended to disable their own brain, reasoning that Consolidated's machine was derailed by discovering that the hyperdrive process would be dangerous to humans, U.S. Robots accepts the offer and proceeds to carefully feed the data into their computer under Susan Calvin's supervision, with strict instructions to pause and think if it encounters something that would harm humans. They are surprised when the brain tells them it can build a hyperdrive-capable ship, and it proceeds to do so. Donovan and Powell are chosen to test the odd-looking result, and the story gets a little strange from there. The lurking question posed by the story is this: if humans tell a robot that they don't mind being harmed, or even dying, can that overcome the First Law?

Asimov makes something of a bold statement in Evidence, which seems at first to be nothing more than a somewhat comical political farce in which an unscrupulous politician advances the idea that his opponent, a man named Byerley, is a robot in order to discredit him. The politician shows up at U.S. Robots to enlist their aid in his quest, which draws Susan Calvin into the story. Amidst the modest comedy involving whether someone has eaten in public, legal maneuvering, and staged political theatre, Asimov mixes in two incisive points. First, when discussing the nature of evidence, Calvin observes that she could never use robopsychology to conclusively prove that someone was a robot rather than just a human who followed precepts of action akin to the Three Laws of Robotics. When a protest is raised that this is not the sort of proof they need, she angrily responds that the evidence doesn't care what conclusions one wants; it can only be used to draw the conclusions it is able to support. Second, Calvin notes that a robot who acted according to the Three Laws of Robotics would be effectively indistinguishable from the most moral of human beings. And this statement reveals a truth that has been lurking in all of Asimov's robot stories: Asimov is not merely musing about how he thinks robots should be designed, he is laying out the framework of a secular morality for humans, breaking moral behavior down to three very simple statements.

The final story in the volume, The Evitable Conflict, is in some ways both the most hopeful and the most disturbing of the collection. In the story Byerley has risen to become World Coordinator, the most powerful political figure on the planet. But since most of the management of the world economy has been turned over to four massive "machines" consisting of disembodied positronic brains, his position seems to have become somewhat ceremonial. When the brains start making what appear to be errors, he embarks on a fact-finding tour to visit each of Earth's four economic regions, and then consults with Susan Calvin to try to unravel the mystery. Byerley outlines his theory that the errors are the result of actions taken by a radical anti-robot organization named the "Society for Humanity" aimed at casting doubt on the efficacy of the "machines" and paving the way for a return to conflict and competition, which he presumes the members of the Society intend to profit from. In response, Calvin outlines the theory that the "machines" may be smarter than their opponents, and makes a case for what amounts to the inevitability of a benevolent dictatorship run by the "machines". This theme, that humanity will end up being cared for by a guiding intelligence that treats us like unruly children, runs through a lot of Asimov's work, and what is disturbing is not that Asimov thinks this could happen, but that he seems to regard it as a desirable result. In short, in Asimov's mind the almost inevitable conclusion of humanity's development of thinking machines is that they will surpass us, take control from us, and run the world as our guardians whether we want them to or not. And Asimov regards this as the ideal outcome.

With stories that range from mere engineering puzzles to tales that are deeply disturbing, I, Robot is a brilliant collection of science fiction. Forget about the misnamed Will Smith movie - this book is entirely unlike the film, and far better than it could ever have hoped to be.

Note: Robbie won the 1941 Retro Hugo for Best Short Story.



4 comments:

  1. Yes, the stories have little in common with the movie. I'm a big fan of the first three laws...

  2. @Julia Rachel Barrett: The three laws are one of the greatest literary devices of the twentieth century. Reading Asimov taught me that stories are better if there are rules to them.

  3. Hey, Aaron!

    If there was any connection with the movie I ROBOT and the short story collection I ROBOT, I missed it.

    Libraries were shelves of useless books until one day in Junior High when I pulled out I ROBOT and read the first story ROBBIE and cried. The story thrilled me so much that tears ran down my cheeks.

    I blame Isaac Asimov for everything I read and write today.

    Cheers!

    @hg47

    P.S. -- And you're right about Asimov's 3 laws: Awesome.

  4. @hg47: Any connection between the movie and the book is extraordinarily loose. Harlan Ellison once wrote a screenplay for a proposed I, Robot movie that never came to fruition. He later published the script, and it looks like that would have been a very different, and excellent movie.
