IROS 2017 Forum
Wednesday, Sept. 27, 0830-1230 (Venue: VCC Room 201)
|08:30 - 08:40||Welcome from Forum Organizers (Raj Madhavan, Ludovic Righetti, and Raja Chatila)|
|08:40 - 10:10||
Autonomous Weapons Systems
8:40-9:10 Keynote: Vincent Boulanin (SIPRI, Sweden)
9:10-9:30 Invited Talk: Mary Wareham (Human Rights Watch, USA)
9:30-9:50 Invited Talk: Alan Schuller (Naval War College, USA)
9:50-10:10 Q&A on AWS
|10:10 - 10:40||Coffee Break|
|10:40 - 12:20||
Self-Driving Cars and Governance/Regulation Aspects
10:40-11:10 Keynote: Jérôme Perrin (Renault S.A.S., France)
11:10-11:30 Invited Talk: Azim Shariff (UC Irvine, USA)
11:30-12:20 Q&A on SDCs and Governance/Regulation Practices
|12:20 - 12:30||Closing Remarks & Adjourn|
Governing the Advance of Autonomy in Weapon Systems: What role for the community of roboticists in the UN debate on Lethal Autonomous Weapon Systems?
The weaponization of autonomous systems has emerged in recent years as a major area of concern for the arms control community. The question of whether the development and use of ‘lethal autonomous weapon systems’ (LAWS) should be regulated is now the focus of an intergovernmental expert discussion within the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW), a convention which deals with weapons that may be deemed excessively injurious or to have indiscriminate effects. This presentation aims to 1) provide the community of (civilian) roboticists with an overview of the discussions that have taken place so far on LAWS at the UN and 2) discuss the role they could and should play in that context. It will address three questions: What is the status of the diplomatic discussions on LAWS? What can we expect from the ‘group of governmental experts’ that has been formed to discuss the topic in November this year? How can the community of (civilian) robotics researchers engage in future discussions? The presentation will make the case that the conversation on LAWS at the UN is still very young and that it is unlikely to make significant progress unless governmental experts (i.e. diplomats) gain a better understanding of the technological developments in the areas that are relevant to the future of autonomy in weapon systems. It will conclude that the community of (civilian) roboticists has a key role to play in bringing greater conceptual clarity to the debate. It could also share its insight on some of the fundamental legal, ethical, and practical issues raised by the progress of autonomy in weapon systems.
Dr. Vincent Boulanin (France/Sweden) is a Researcher at Stockholm International Peace Research Institute, where he works on issues related to the production, use and control of emerging military and security technologies, notably cyber-security and autonomous weapon systems. He has a PhD in Political Science from École des Hautes Études en Sciences Sociales [the School for Advanced Studies in the Social Sciences] in Paris. His recent publications include Mapping the development of autonomy in weapon systems, SIPRI Research Report (Forthcoming 2017), ‘Mapping the debate on LAWS at the CCW: taking stock and moving forward’, Non-proliferation Paper no. 49 (Mar. 2016) and ‘Implementing Article 36 weapon reviews in the light of increasing autonomy in weapon systems’, SIPRI Insights on Peace and Security no. 2015/1 (Nov. 2015).
Towards an Ethical Policy for Self-Driving Cars
The introduction of self-driving cars (SDC) should improve road safety by minimizing human driver errors. Their deployment will therefore be subject to safety requirements in potential dilemma and risk-taking situations on the road. A common SDC ethics policy should be built on a consensus among the multiple stakeholders of the mobility ecosystem: carmakers and equipment suppliers, drivers, other road users, infrastructure managers, insurance companies, and regulatory authorities. This raises general issues regarding the underlying philosophy, moral casuistry, and responsibility assignment, but also technical ones concerning the design of the SDC decision-making algorithms that manage conflicting situations. If decision algorithms must prioritize the most vulnerable road users, be traceable, and be developed transparently in compliance with norms and standards yet to be defined, then how do we deal with the possible extension of deep machine learning approaches from perception to decision-making? How can we develop advanced testing and validation methods based on serious games in complex environments and use cases? And how do we foster social acceptance while connecting technical feasibility, legal compliance, and ethical justification?
Jérôme Perrin is an engineer from the École Polytechnique in Paris (X74, 1977) and holds a doctorate in physics (PhD plus professor accreditation) from the University of Paris Denis Diderot (1983). Originally a researcher in plasma physics and chemistry at CNRS (the French National Center for Scientific Research), he joined the Balzers & Leybold group – now Oerlikon – in 1997 as R&D director for plasma reactors used in flat-panel display and solar panel manufacturing. In 2001 he became an R&D program director at Air Liquide, working particularly on hydrogen and fuel cells. In 2007 he joined Renault as VP director of advanced engineering projects for reducing the energy consumption and environmental impact of vehicles, toward future electric mobility. In June 2012 he was appointed director & general manager of the newly created VEDECOM Institute for carbon-free electric, automated and connected vehicles and eco-mobility. In July 2014 he returned to Renault as VP Scientific Director, where he more recently introduced the topic of autonomous vehicle ethics. He also holds a bachelor's degree in theology from the Catholic Institute of Paris (2013).
Fully Autonomous Weapons: Ethics & Governance Concerns
As coordinator of the Campaign to Stop Killer Robots, Mary Wareham of Human Rights Watch will address the global NGO coalition's major concerns with the prospect of fully autonomous weapons systems. She will look at the ethical issues, particularly relating to international law, specifically the Martens Clause, which addresses the “principles of humanity” and “dictates of the public conscience.” Wareham will explain why the campaign calls for a preemptive ban on the development, production, and use of fully autonomous weapons rather than compliance with existing international law and standards. She will review the role of governments, civil society, the military, the scientific community, and the general public in securing such a ban, drawing in particular on her past experience securing the 1997 treaty banning antipersonnel landmines and the 2008 treaty prohibiting cluster munitions.
Mary Wareham is advocacy director of the Arms Division, where she leads Human Rights Watch’s advocacy against particularly problematic weapons that pose a significant threat to civilians. She is also serving as the global coordinator of the Campaign to Stop Killer Robots. From 2006 to 2008, Wareham served as advocacy director for Oxfam New Zealand, leading its efforts to secure an arms trade treaty and the 2008 Convention on Cluster Munitions. Wareham was senior advocate for the Arms Division of Human Rights Watch from 1998 to 2006 and was responsible for global coordination of the Landmine Monitor research initiative, which verifies compliance and implementation of the 1997 Mine Ban Treaty. From 1996 to 1997, Wareham worked for the Vietnam Veterans of America Foundation, assisting Jody Williams in coordinating the International Campaign to Ban Landmines (ICBL), co-laureate of the 1997 Nobel Peace Prize together with Williams. Wareham worked as a researcher for the New Zealand parliament from 1995 to 1996 after receiving bachelor’s and master’s degrees in political science from Victoria University of Wellington.
At the Crossroads of Control: Artificial Intelligence and Machine Learning in Weapon Systems
Lawyers and scientists have expressed a need for practical, substantive guidance on the development of autonomy in weapon systems consistent with the principles of International Humanitarian Law (IHL). Artificial Intelligence (AI) and machine learning in particular pose challenges for IHL compliance, since this technology carries the risk that human judgment on lethal decisions could be functionally delegated to computers. Lawful employment of such technology depends on whether one can reasonably predict that the AI will comply with IHL in uncertain conditions. In this session we will consider objective principles for avoiding unlawful autonomy in weapon systems.
Lieutenant Colonel Alan L. Schuller, U.S. Marine Corps, served as an artillery officer before becoming a judge advocate. He has performed duties as a prosecutor, defense counsel, and unit commander. LtCol Schuller served as the Staff Judge Advocate (SJA), 3d Marine Aircraft Wing (Forward) while deployed to Afghanistan in 2010. He deployed in 2013 as the SJA for Special Purpose Marine Air-Ground Task Force - Crisis Response in support of operations in U.S. Africa Command. He deployed again in 2014 as SJA of the 24th Marine Expeditionary Unit and supported operations in U.S. Central Command. LtCol Schuller currently serves as an Associate Director of the Stockton Center for the Study of International Law at the U.S. Naval War College, where his research is focused on autonomy in weapon systems. LtCol Schuller is a Fellow with the Georgetown Center on National Security and the Law. His article "At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law" was recently published in Harvard Law School’s National Security Journal.
The Psychological Roadblocks to Autonomous Vehicles
Autonomous vehicles (AVs) promise a bright future. However, psychological challenges to widespread adoption loom as large as technical ones. I’ll discuss three issues: (a) the psychological biases and heuristics that will likely lead to media and public over-reaction to AV accidents, (b) the social dilemma that emerges from debates about how AVs should navigate ethical dilemmas, and (c) the trust challenges created by the opacity of AV driving algorithms. I’ll conclude with suggestions drawn from the psychological literature on how to overcome each of these challenges.
Azim Shariff is an Associate Professor of Psychology and Social Behavior at the University of California, Irvine. He completed his PhD at the University of British Columbia in Vancouver, Canada. His research broadly investigates moral psychology and how it applies to matters of pressing social concern such as religion, criminal punishment, and emerging scientific and technological trends. His work has been published in top academic journals such as Science and Nature Human Behavior, and in popular media outlets such as The New York Times and Scientific American.
Forum Organizers
Raj Madhavan
Founder & CEO, Humanitarian Robotics Technologies, USA
Chair, IEEE RAS Robotics and Automation Research and Practice Ethics Committee (RARPEC)
Chair, IEEE RAS Special Interest Group on Humanitarian Technology (RAS-SIGHT)
Ludovic Righetti
Independent Research Group Leader, Max Planck Institute for Intelligent Systems, Germany
Raja Chatila
Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie, France
Organized by the IEEE RAS Robotics and Automation Research and Practice Ethics Committee (RARPEC), this Forum is intended as a platform to exchange ideas and discuss the impacts and practice of robotics and automation (R&A) technologies in research, development, and deployment that appear to pose ethical questions for humanity.
With increased awareness and controversies surrounding R&A and AI, this Forum will focus on separating hype from reality by providing an objective and balanced treatment via invited presentations and two panel discussions. It is irrefutable that these technologies are evolving at a rapid pace and that they have the potential to transform and positively impact the lives of people. Perhaps equally undeniable are the fears and concerns associated with their development. While many concerns stem from the confusion surrounding such emerging (autonomous) technologies and a lack of understanding of current capabilities and limitations, their development also raises legitimate ethical and governance questions that should be debated within the community.
For this inaugural RARPEC Forum, we have chosen Autonomous Weapon Systems (AWS), Self-Driving Cars (SDCs), and the governance/regulation concerns surrounding these issues as broad topics for discussion.
The Forum is anticipated to have the following schedule divided into two sessions:
- Session 1: Autonomous Weapon Systems
8:30a – 10:00a (1 Keynote + 2 Regular Talks + Q&A)
- Coffee Break: 10:00a-10:30a
- Session 2: Self-Driving Cars & Governance/Regulation Practices of Emerging Technologies
10:30a-12:30p (1 Keynote + 2 Regular Talks + Q&A)
We anticipate this Forum will draw a wide audience from industry, academia, and government. Social/political scientists, ethicists, standards development organizations, legal specialists, and insurance agencies will also find the topics of interest.