
We Might Be Robots

“All the best minds used to think the world was flat. But what if it isn’t? It might be round. And bread mold might be medicine. If we never looked at things and thought of what might be, why we'd all still be out there in the tall grass with the apes.” —Justin Playfair/Sherlock Holmes in They Might be Giants

Thursday, August 05, 2004

Whose Robot Empire Is It Anyway?

In her essay "Will robots take over the world?" (published this May in Focus magazine), Joanna Bryson says

I believe that *IF* robots take over the world, it will not be the work of the robots. Robots are artifacts we construct, we imbue them with their goals and their abilities. So there is no question of whether robots will take over the world. The question is whether someone else will take over the world *with* robots.

This is actually a very interesting question for AI in general: When an intelligent artifact makes a mistake, who is at fault? The point of the paper I wrote with Phil Kime on this topic is that:

  • we already have this problem with all kinds of artifacts, intelligent or not, and
  • if they *are* intelligent, nothing changes.
So a robot army going wrong and wiping out a village of civilians is just the same as a dam going wrong and wiping out a village of civilians. The manufacturers & operators of either artifact are the ones who must bear responsibility (& will probably spend a few decades suing each other).
Does this apply to sentient robots (or AIs in general) with free will, like Star Trek's Data? If so, this is like saying that if I become a mass murderer, it's not only my parents' responsibility, it is in fact their doing. They committed mass murder through me. (They did, after all, at least try, with regard to their children, to "imbue them with their goals and their abilities.") I don't accept this.

Only if robots are strictly rule-bound do I agree. Even then, there are the problems of interpretation that Asimov had so much fun with. What happens when the rules are fuzzy and the instructions given to robots are fuzzy, as they inevitably must be? The sentient but rule-bound robot must bear at least partial responsibility for the interpretations it makes and the actions it takes.
