A 2008
study by Stanford researchers ("I Am My Robot: The Impact of Robot-building and Robot Form on Operators") indicates that the design of a robot has a serious effect on people's attitudes toward the technology at their disposal. Participants assembled either a humanoid or a car robot, then operated a robot that had been built either by themselves or by another group. The experiments had the following results: “Participants showed greater extension of their self-concept into the car robot and preferred the personality of the car robot over the humanoid robot. People showed greater self-extension into a robot and preferred the personality of the robot they assembled over a robot they believed to be assembled by another” (31). Regardless of what type of robot they built, “people rated the car as being friendlier and having more integrity, while the humanoid was more malicious. People operating the humanoid may have been suspicious or critical of the robot, perceiving it as an independent actor and a threat to their performance as compared to a directly-controlled object” (35). People also took more credit for tasks completed with less anthropomorphized robots. This research has a few implications for future technologies. In the case of medical technologies, it may be better to use less humanoid-looking machines in medical procedures, since they allow for a better sense of self-extension. In the case of military robots, humanoid forms could allow operators to dissociate themselves from their actions, since they see the humanoid bots as more independent agents. The lack of self-extension leads to a lessened sense of responsibility.
A recent
Wired article by Brendan Koerner outlines some of the implications of such military research: “Yet despite our love of science fiction, this coming trend in robo-aesthetics is a bad idea. By anthropomorphizing their products, robot designers may unwittingly be encouraging needless bloodshed. Because, as recent research shows, the more human a robot looks, the more likely the Homo sapiens at its controls may be tempted to make the droids go Rambo on their foes.”
Like most innovations, however, there is a flip side:
That’s not to say that having humanoid bots is always bad. Self-extension among robot operators may be desirable in combat but not necessarily in other grave situations. In search-and-rescue operations, for example, one of the biggest problems is operator stress—people find it incredibly taxing to sift through rubble remotely, with the monotony broken only by the ghoulish discovery of corpses or body parts. Humanoid robots would be ideal for such tasks; they could help the operators feel less viscerally attached to the grim work at hand.
The results of this research are somewhat odd. On the one hand, since humanoids are more like us, you'd think people could imagine themselves as the robot, making it easier to identify with their technological counterpart. Yet on the other hand, the robots are so lifelike that it makes sense for people to perceive them as entities unto themselves. It's also striking that people perceived the humanoids as malicious. You'd think people would imagine the perfectly calculating machine as acting on rational rather than ill-willed intentions. Perhaps the figure of the robot or cyborg has simply become so mediatized in our collective imagination that we assume it cannot possess the ethical reasoning actual humans can. Either way, this research bears heightened relevance as robots become a more influential part of our day-to-day existence.
Labels: Cyborg, Military, Psychology, Robots, Stanford, Technology, Wired