Lethal Autonomous Weapons
Biography:
Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum's Council on AI and Robotics. He has published over 150 papers on a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and global seismic monitoring. His books include "The Use of Knowledge in Analogy and Induction", "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald), and "Artificial Intelligence: A Modern Approach" (with Peter Norvig).
Abstract:
The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS). Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
The UN has held three major meetings in Geneva under the auspices of the Convention on Certain Conventional Weapons, or CCW, to discuss the possibility of a treaty banning autonomous weapons. There is at present broad agreement on the need for "meaningful human control" over selection of targets and decisions to apply deadly force. Much work remains to be done on refining the necessary definitions and identifying exactly what should or should not be included in any proposed treaty.
Wednesday, April 6, 2016 from 12:00 PM to 1:00 PM (PDT)
Sutardja Dai Hall - Banatao Auditorium
University of California, Berkeley
Video: Lethal Autonomous Weapons, from the CITRIS and the Banatao Institute channel
Posted: April 8, 2016, 22:22:36
Duration: 00:52:35