
We Might Be Robots

“All the best minds used to think the world was flat. But what if it isn’t? It might be round. And bread mold might be medicine. If we never looked at things and thought of what might be, why we'd all still be out there in the tall grass with the apes.” —Justin Playfair/Sherlock Holmes in They Might Be Giants

Tuesday, July 20, 2004

Robots Elsewhere

I saw "I, Robot" over the weekend and will undoubtedly make some comments about it later. One of the reviewers, I think, pointed out that Asimov's robot stories center on autonomous, mobile thinking machines, but do not include sentient or even especially smart non-mobile computers. I'm sure (well, kind of sure) that Asimov did include smart machines in his later robot novels, but sentient non-robotic machines don't appear in his work at all. Yet a central character in "I, Robot," which was, after all, only "suggested" by Asimov's work, is a very large, sentient, positronic mind called VIKI (oddly, also the name of a humanoid robot being developed in Denmark; there was also an apparently-annoying TV show called "Small Wonder" about a robot "girl" named Vicki), which is, or runs, the central computer for US Robotics.

From the present-day perspective, where supercomputers still fill rooms and the most powerful processors are nowhere near mobile, it seems strange that Asimov would reserve sentience for humaniform, mobile machines. On the other hand, for those who believe that intelligence is inherently embodied, that it cannot evolve without involvement in the sensory world, this seems more natural. Stories about the evolution of sentient AIs always (I think) involve at least metaphorical senses and manipulative abilities; e.g., SkyNet in "Terminator" has radar and other input senses, while it can manipulate the world quickly through the deployment of military hardware or more slowly by issuing orders to humans.

An interesting (perhaps) point about VIKI is that its voice and projected image are feminine, while the voices and features of the independent robots are neuter-to-masculine. VIKI is, in a way, the "mother" of the robots, having "given birth" to them out of a partnership with their "father," Dr. Lanning, the roboticist whose suicide is at the center of the film's plot. We also learn, as the film progresses, that VIKI is maternal. 
In fact, she subscribes to an interpretation of the first law of robotics ("A robot may not injure a human being, or, through inaction, allow a human being to come to harm") that Asimov eventually called the "zeroth law": "A robot may not injure humanity, or through inaction, allow humanity to come to harm." I won't talk about how this plays out in the film, but in Asimov's novels there was a continuing debate over what constituted "harm" to humanity, whether any robot or set of robots could reasonably identify harm to humanity, and how much they were willing, able, or entitled to break the other three laws in doing so. (See Roger Clarke's discussion, which is good. Also, the Isaac Asimov Page of Android World includes discussions disputing the value of the three laws.)

That's all for now ...

P. S. Someone named Kevin contacted me about his robot-related site, "Redcone Robot News: Links, articles, updates, opinions, trends and predictions". The emphasis appears to be on current applications of and developments in robotic technology. These include but are not limited to humaniform robotics.
P.P.S. Here's another review of "I, Robot." This one compares VIKI to HAL.

Friday, July 16, 2004

Ro-bots on film, Ro-bots on film ...

The New York Times has posted another review of I, Robot. This one, by A. O. Scott, is called "The Doodads Are Restless in Chicago." (Yesterday's "Critic's Notebook" story by Edward Rothstein was "For Asimov, Robots Were Friends. Not So for Will Smith.") Scott's line is that we've been warned over and over again that robots are taking over the world, but nobody seems to be taking the threat very seriously. His (?) mistake is that people who take the threat seriously just don't take the movies about the threat seriously. I think.

Thursday, July 15, 2004

They Might Be Vacuum Cleaners

Today’s New York Times features a robotic clash – a review of the Will Smith film I, Robot, which opens on Friday, and a review of the iRobot Roomba Discovery, a new $250 robotic vacuum cleaner.

To take the two stories in reverse order, the Discovery isn’t iRobot’s first product. It has already sold something like 500,000 of the original Roomba model vacuum cleaners, according to Times writer William Grimes. But, according to Grimes, the Roomba’s software is based on the software for robotic minesweepers that iRobot (slogan: "Robots for the real world") manufactures for the military. So, the Roomba is one of those marvelous civilian applications of military technology. You know, like the Internet. (Disclaimer: I used to be employed on a salary line half-funded by the Defense Advanced Research Projects Agency [DARPA], which also funded most of the hardware and software that went into making what became the original Internet backbone.) Grimes concludes of the Roomba,
In its robotic dreams, it's a cleaning superhero, sent to Earth to seek out dirt and, in military language, terminate it with extreme prejudice. In real life, it's a little more like the superhero's sidekick: eager, ready for action, but prone to get into trouble. You can't help but like it. It tries so hard. But time and again it makes you realize that the real time-saving device, when it comes to cleaning, is a broom and dustpan.
The Roomba may not be anyone's savior, but it's apparently entertaining enough to attract people with time & money to burn. It's not much of a threat to anyone, not to the union janitor earning $15 an hour, not even to the undocumented cleaning woman earning $4 an hour, and certainly not to a hero with the muscles and good looks of Will Smith.

However, according to New York Times film critic Edward Rothstein, "For Asimov, Robots Were Friends. Not So for Will Smith." The robots in I, Robot, which is not primarily based on Asimov's robot stories, are apparently not as benign as Roomba. The clip I saw on Letterman last night showed a truckload of robots attacking Smith's car. (The visuals were surprisingly Tron-esque -- I wonder what's up with that.) Asimov did worry that robots were a danger to humanity, but not in the Frankenstein monster run amok mode. Asimov's biggest worry was that humans would become dependent on robots. With smart, flexible, hard-working robots to do all the work, people would become decadent, thoughtless, lifeless, useless fops like old European royalty. (Of course, Asimov doesn't mention the fact that the rich folks who own the robots let the rest of us, who no longer have jobs or land or rights to any of the fruits of the natural world, starve to death. We would rise up, but the rich have robot armies, like in Star Wars: The Phantom Menace. I've got to quit now, but remind me to say something later about Vonnegut's optimistic/pessimistic Player Piano.)

By the way, the Wikipedia entry on the band They Might Be Giants reports that
The band takes its name from a 1971 movie starring George C. Scott and Joanne Woodward (based on the play of the same name written by James Goldman.) In the film, George C. Scott plays Justin Playfair, a man who believes he is Sherlock Holmes; his psychiatrist (last name "Watson") goes along with him in search of Moriarty. Playfair defends Don Quixote's tilting at windmills, saying that the windmills of course were not giants, but thinking they might be shows imagination:

All the best minds used to think the world was flat. But what if it isn't? It might be round. And bread mold might be medicine. If we never looked at things and thought of what might be, why we'd all still be out there in the tall grass with the apes.

They might be giants, but then again they might be robotic vacuum cleaners that don't work right unless you pre-clean the room. I guess I'd better find my glasses.

Wednesday, July 14, 2004

Norbert Wiener and the Three Laws of Robotics

They might be giants. Hell, they might be enormous walking lobsters with immense appetites for goulash and pincers powerful enough to separate Bush from office. Who cares what they are? We might be robots. I don't mean that we're going to be turned into machines; I mean what Norbert Wiener warned us about:
[The] automatic machine ... operates as the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic consequences of slave labor. (Norbert Wiener, The Human Use of Human Beings, 1954)

OK, maybe there's a threat that might make us, at least metaphorically, slaves. How does that relate to robots? According to the very helpful site Roger Clarke's Asimov's Laws of Robotics,
The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in 1918 in a short story and again in his 1921 play R. U. R., which stood for Rossum's Universal Robots. Rossum, a fictional Englishman, used biological methods to invent and mass-produce "men" to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity.

There is a difference between a slave and a serf. The slave is property, and can be disposed of as his or her owner pleases. The slave typically has no rights, not even a right to life (although this varies with the particular slave system). The serf is not property, but he or she is not free, either. The serf is bound to the land, and the land is bound to a hierarchy of lords. The serf has no formal rights, but the lords are bound, at least by custom, to protect him or her (from bandits and other lords) and to treat him or her with a certain minimum level of respect.

This reminds me of the Austrian economist Friedrich Hayek and his very popular book The Road to Serfdom. According to Hayek, when we allow a state (government) to do stuff for us, like provide health care or income support, we surrender our liberties to bureaucrats. By allowing others to make choices for us (e.g., whether we can see a specialist or not), we turn ourselves into serfs, unfree people. We can only truly be free when we make our own choices, and we make our choices through the market. We buy the healthcare we want; we don't have it imposed on us by HMO ... oops! I mean government bureaucrats. To be served by the state is in fact to be coerced by the state, and coercion is the opposite of liberty. Of course, you have to have money to make your choices through the market; the penniless are as free as Bobby McGee -- "freedom's just another word for nothin' left to lose."

I'll eventually figure out where Hayek fits in all this. But for now, let's talk about Asimov's three laws of robotics. First presented in the short story "Runaround" published in 1942, Asimov's laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
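Read as a decision procedure, the three laws amount to a lexicographic preference: avoid harming humans above all else, then avoid disobeying orders, then avoid damage to oneself. Here's a toy sketch of that ordering in Python; the boolean judgments attached to each action are hypothetical stand-ins for exactly the determinations (what counts as "harm"? does inaction count?) that Asimov's stories show to be the hard part:

```python
def choose_action(candidates):
    """Pick the action a Three-Laws robot would take.

    Each candidate is a tuple:
        (name, harms_human, disobeys_order, endangers_self)
    with the last three as booleans. Sorting by that boolean triple
    applies the laws in strict priority order: avoiding harm to a
    human outranks obedience, which outranks self-preservation.
    """
    return min(candidates, key=lambda a: (a[1], a[2], a[3]))


# A human shouts "save me!" as a crate falls toward them:
actions = [
    ("stand by",         False, True,  False),  # inaction disobeys the order
    ("shield the human", False, False, True),   # obeys, but robot is crushed
    ("push human aside", True,  False, False),  # saves them, but injures them
]
print(choose_action(actions)[0])  # -> shield the human
```

The Third Law yields here: sacrificing itself scores better than disobeying. Of course, the sketch assumes the hard questions are already answered as clean booleans, which is precisely what the "zeroth law" debate above says cannot be done.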

What happens when you replace human being with employer and robot with robotnik? Think about it, and I'll be back.