The Morality of Artificial Intelligence in Warfare
What does AI in warfare really look like? What are the possible applications, and how will they be used? Who will use them, and against whom? This webinar discussed the future of AI in warfare from a practical standpoint and sought to make explicit how these weapons will be developed, deployed, and used, and by whom. Rather than discussing the development of autonomous weapons in technical or logistical terms, the discussion aimed to tease out how autonomous weapons operate within and augment the current geopolitical landscape, and what the moral consequences of their deployment might be. Our panelists also discussed what this means for those who work in the field of AI, and what responsibility we all have to ensure that the work we do leads to a more just and equitable future.
Laura Nolan is a senior software engineer who specialises in reliability in distributed software systems. In 2018, Laura left her role as a staff engineer at Google in response to the company's involvement in Project Maven, a Department of Defense program that aims to use machine learning to analyse drone surveillance video footage. As a member of the NGO International Committee for Robot Arms Control (ICRAC), Laura is part of a global campaign which aims to regulate the emerging category of autonomous weapons systems, which are weapons systems that independently select and engage targets without human input.
Laura holds an MSc in Advanced Software Engineering from University College Dublin, and is currently completing an MA in Strategic Studies at University College Cork.
Jack Poulson is the Executive Director of the nonprofit Tech Inquiry, where he leads the development of an open source tool for monitoring an international public/private interface (currently the Five Eyes alliance). [1] He was previously a Senior Research Scientist working at the intersection of natural language processing and recommendation systems in Google's AI division and, before that, an Assistant Professor of Mathematics at Stanford University.
[1] https://gitlab.com/tech-inquiry/InfluenceExplorer and https://techinquiry.org/explorer/
Our moderator, Branka Marijan, leads research on the military and security implications of emerging technologies. Her work examines ethical concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and has published academic articles and reports on the impacts of conflict on civilians and on diverse issues of security governance, including security sector reform.
For more information on this series, please visit our website: https://uwaterloo.ca/artificial-intelligence-institute/events/webinar-series
Video: The Morality of Artificial Intelligence in Warfare, from the WaterlooAI channel