General Discussion
Should we let robots kill on their own?
Next week, from April 13th to April 17th, the second multilateral meeting on lethal autonomous weapons systems is taking place at the United Nations in Geneva. At the meeting, AJung Moon, an executive member and co-founder of the Open Roboethics initiative (ORi), a think tank that aims to foster active discussion of the ethical, legal, and societal issues of robotics, will report on the preliminary results of a survey created by her team examining public attitudes toward autonomous weapon robots.
She'll be joining a number of organizations supporting the Campaign to Stop Killer Robots, whose primary objective is a pre-emptive ban on fully autonomous weapons. There are really only two outcomes on this issue: either the creation and spread of lethal autonomous weapons is banned, or it isn't. And by not banning them, we'll set a precedent that condones the moral and legal sovereignty of machines over our lives.
Once we approve the use of autonomous machines to kill without humans in the loop, we're ceding what's known as "meaningful human control" over these systems. While the moral ramifications of war are certainly different from something like the manufacture of self-driving cars, once we've justified autonomous systems making life-and-death decisions, it's easy to imagine our reliance on them for everything else.
"What we're trying to do is demonstrate how the public feels about these issues," noted Moon in our interview about her survey (which anyone can take) and her upcoming meeting at the United Nations. "It's something the UN has to consider when discussing the future of weapons systems at an international level."
<snip>
Much more: http://mashable.com/2015/04/12/meaningful-human-control
Take the survey on Remote Operated Weapon Systems (ROWS) and Lethal Autonomous Weapons Systems (LAWS) here: https://survey.ubc.ca/s/militaryrobots2015/
shenmue
(38,506 posts)

Mommy
bananas
(27,509 posts)

LanternWaste
(37,748 posts)

Seems there are some tools we deem ethical to use to kill others, while other tools are considered unethical for the same purpose. We're a weird, inconsistent, rationalizing species (although I'm the first to admit I'm using the classical definition of "autonomous machine" rather than the creative, speculative definition).
Renew Deal
(81,869 posts)

Although they are programmed by humans, the weapons decide who to engage. It brings up all kinds of ethical and moral questions.
LanternWaste
(37,748 posts)

Again, I'm limiting my response to the classical definition rather than the sci-fi-inspired definition, the latter being too speculative and vague to allow foundational discussion beyond additional speculation.
The classical definition* is limited to: gaining information about the environment (analyzing/adapting), working without human intervention, moving without human assistance, and avoiding potentially harmful situations, which, as it stands, means my robot vacuum cleaner fits well within that definition.
* as paraphrased by George A. Bekey in Autonomous Robots (2005)
Downwinder
(12,869 posts)

No bad apples if they all run the same software.
The2ndWheel
(7,947 posts)

Will we or won't we? If it's cheaper to do it this way, we will.

Then of course there's the question of who "we" is. Most people aren't going to have an actual say in whether it happens or not.
MindPilot
(12,693 posts)One_Life_To_Give
(6,036 posts)If the fully autonomous machine is a more effective killer. Once backed into a corner the loosing side is likely to deploy them as a last resort. Nice to say we would never field them. But I suspect many world powers will have clandestine capabilities should they become necessary.
Donald Ian Rankin
(13,598 posts)

I think it entirely probable that at some point in the future robot soldiers will be better at avoiding civilian casualties than humans are, but I suspect we're some way off that point.