Tuesday, April 24, 2012

Robots fighting wars could be blamed for mistakes on the battlefield

Public release date: 23-Apr-2012

Contact: Molly McElroy
mollywmc@uw.edu
206-543-2580
University of Washington

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But University of Washington psychologists are finding that people don't have such a clear-cut view of humanoid robots.

The researchers' latest results show that people attribute a moderate degree of moral accountability, along with other human characteristics, to robots that have social capabilities and can harm humans. In this study the harm was financial rather than life-threatening, but it still showed how people react to robot errors.

The findings imply that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.

"We're moving toward a world where robots will be capable of harming humans," said lead author Peter Kahn, a UW associate professor of psychology. "With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."

The paper was recently published in the proceedings of the International Conference on Human-Robot Interaction.

In the study, Kahn and his research team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was remotely controlled by a researcher concealed in another room.

After a bit of small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. All of them found at least the seven objects needed to claim the $20 prize, but when their time was up, Robovie insisted they had found only five.

Then came the crux of the experiment: participants' reactions to the robot's miscount.

"Most argued with Robovie," said co-author Heather Gary, a UW doctoral student in developmental psychology. "Some accused Robovie of lying or cheating."

(Watch a video of one of the participants disagreeing with Robovie: http://depts.washington.edu/hints/video1b.shtml.)

When interviewed, 65 percent of participants said Robovie was to blame at least to a certain degree for wrongly scoring the scavenger hunt and unfairly denying the participants the $20 prize.

This suggests that as robots gain capabilities in language and social interactions, "it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes," the researchers wrote.

They argue that as militaries shift from human to robotic warfare, jurisprudence and the Laws of Armed Conflict should take into account both the chain of command that controls robots and the moral accountability of robotic warriors in cases where robots harm humans.

Kahn is also concerned about the morality of robotic warfare, period. "Using robotic warfare, such as drones, distances us from war, can numb us to human suffering, and make warfare more likely," he said.

###

The National Science Foundation funded the study. Co-authors at UW are Nathan Freier, Jolina Ruckert, Solace Shen, Heather Gary and Aimee Reichert. Other co-authors are Rachel Severson of Western Washington University; Brian Gill of Seattle Pacific University; and Takayuki Kanda and Hiroshi Ishiguro, both of the Advanced Telecommunications Research Institute in Japan, which created Robovie.

For more information, contact Kahn at pkahn@uw.edu or Gary at 206-221-0643 or hgary@uw.edu. PDF of the paper: http://depts.washington.edu/hints/publications/Robovie_Moral_Accountability_Study_HRI_2012_corrected.pdf





