Who’s responsible when the robot (or automation) is wrong?

Interesting research (PDF link) on how people behave when robots are wrong. In a recent paper, researchers created a situation in which a robot misdirected a human during a game. In the follow-up interviews, one finding in particular caught my eye:

When asked whether Robovie was a living being, a technology, or something in-between, participants were about evenly split between “in-between” (52.5%) and “technological” (47.5%). In contrast, when asked the same question about a vending machine and a human, 100% responded that the vending machine was “technological,” 90% said that a human was a “living being,” and 10% viewed a human as “in-between.”

The bottom line: a large portion of participants attributed some degree of moral and social responsibility to the machine.

Taken broadly, the results from this study – based on both behavioral and reasoning data – support the proposition that in the years to come many people will develop substantial and meaningful social relationships with humanoid robots.

Here is a short video clip of how one participant reacted upon discovering Robovie’s error.

I wonder whether similar results would be found when people interact with, and make attributions to, less overtly humanoid systems, such as disembodied automation like a smartphone app.

(via Slate)
