Towards Evaluating the Robustness of Neural Networks
This is a talk about adversarial attacks and defenses. The main question is
"How should we evaluate the effectiveness of defenses against adversarial attacks?"
After watching this talk, you will know what adversarial attacks and defenses are, and you will have an overview of possible defense techniques. We take a close look at the paper by Nicholas Carlini and David Wagner ("Towards Evaluating the Robustness of Neural Networks", 2017).
If you have any questions or comments, don't hesitate to contact me. You can find my email address on the first slide.
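For context, the core of the Carlini-Wagner paper is a strong optimization-based attack. Below is a minimal PyTorch sketch of their targeted L2 attack: minimize ||x' - x||_2^2 + c * f(x'), where f is the paper's logit-margin loss and x' = (tanh(w) + 1) / 2 keeps pixels in [0, 1]. This is only an illustration under assumed conventions (a `model` returning logits, images scaled to [0, 1], a fixed trade-off constant `c`), not the authors' reference implementation.

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=1000, lr=0.01):
    """Sketch of the Carlini & Wagner targeted L2 attack.

    Minimizes ||x' - x||_2^2 + c * max(max_{i != t} Z(x')_i - Z(x')_t, -kappa),
    where Z are the model's logits and t is the target class.
    """
    # Change of variables: x' = (tanh(w) + 1) / 2 keeps pixels in [0, 1].
    w = torch.atanh((2 * x - 1).clamp(-0.999999, 0.999999))
    w = w.detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        logits = model(x_adv)
        target_logit = logits[:, target]
        # Largest logit among all non-target classes.
        masked = logits.clone()
        masked[:, target] = float("-inf")
        best_other = masked.max(dim=1).values
        # The paper's f_6 loss: positive until the target class wins by margin kappa.
        f = torch.clamp(best_other - target_logit, min=-kappa)
        loss = ((x_adv - x) ** 2).flatten(1).sum(dim=1) + c * f
        optimizer.zero_grad()
        loss.sum().backward()
        optimizer.step()
    return ((torch.tanh(w) + 1) / 2).detach()
```

In the paper this objective is additionally wrapped in a binary search over c; the sketch above fixes c for brevity.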
References:
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. IEEE Symposium on Security and Privacy, 2016. Nicholas Papernot et al.
- Explaining and Harnessing Adversarial Examples. ICLR, 2015. Ian J. Goodfellow et al.
- Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy, 2017. Nicholas Carlini and David Wagner
- Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018. Aleksander Madry et al.
- Adversarial Patch. NIPS, 2017. Tom B. Brown et al.
- Robust Physical-World Attacks on Deep Learning Models. CVPR, 2018. Kevin Eykholt et al.
Links:
- Nicholas Carlini's talk at the 38th IEEE Symposium on Security and Privacy: https://www.youtube.com/watch?v=yIXNL88JBWQ
- Nicholas Carlini's website: https://nicholas.carlini.com
- A brief introduction to adversarial examples: https://people.csail.mit.edu/madry/lab/blog/adversarial/2018/07/06/adversarial_intro/
Video "Towards Evaluating the Robustness of Neural Networks" from Marcel Bühler's channel.
"How should we evaluate the effectiveness of defenses against adversarial attacks?"
After watching this talk, you will know what adversarial attacks and defenses are and you will have an overview of possible defense techniques. We look carefully at a paper from Nicholas Carlini and David Wagner ("Towards Evaluating the Robustness of Neural Networks", 2017).
If you have any questions or comment, don't hesitate to contact me. You find my email on the first slide.
References:
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. IEEE Symposium on Security and Privacy, 2016. Nicholas Papernot et al.
- Explaining and Harnessing Adversarial Perturbations. ICLR, 2015. Ian J. Goodfellow et al.
- Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy, 2017. Nicholas Carlini and David Wagner
- Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018. Aleksander Madry et al.
- Adversarial Patch. NIPS, 2017. Tom B. Brown et al.
- Robust Physical-World Attacks on Deep Learning Models. CVPR, 2018. Kevin Eykholt et al.
Links:
- Nicholas Carlini's talk at the 38th IEEE Symposium on Security and Privacy: https://www.youtube.com/watch?v=yIXNL88JBWQ
- Nicholas Carlini's website: https://nicholas.carlini.com
- A brief introduction to adversarial examples: https://people.csail.mit.edu/madry/lab/blog/adversarial/2018/07/06/adversarial_intro/
Видео Towards Evaluating the Robustness of Neural Networks канала Marcel Bühler
Показать
Комментарии отсутствуют
Информация о видео
Другие видео канала
![Towards Evaluating the Robustness of Neural Networks](https://i.ytimg.com/vi/yIXNL88JBWQ/default.jpg)
![On Evaluating Adversarial Robustness](https://i.ytimg.com/vi/-p2il-V-0fk/default.jpg)
![Towards Deep Learning Models Resistant to Adversarial Attacks](https://i.ytimg.com/vi/zCaiyGeFsgA/default.jpg)
![Adversarial Examples for Deep Neural Networks](https://i.ytimg.com/vi/kxyacmVSGlI/default.jpg)
![PowerPoint 2013 Training - Creating a Presentation - Part 1 - PowerPoint 2013 Tutorial (Office 2013)](https://i.ytimg.com/vi/LTWf8Ck8Dk8/default.jpg)
![Trustworthy AI: Adversarially (non-)Robust ML | Nicholas Carlini Google AI | AI FOR GOOD DISCOVERY](https://i.ytimg.com/vi/qgsmd2LaZA4/default.jpg)
![Transfer Learning (C3W2L07)](https://i.ytimg.com/vi/yofjFQddwHE/default.jpg)
![Lecture 4.4 Adversarial attacks on AI - [AI For Everyone | Andrew Ng]](https://i.ytimg.com/vi/Exd6CLAYOh0/default.jpg)
![J. Z. Kolter and A. Madry: Adversarial Robustness - Theory and Practice (NeurIPS 2018 Tutorial)](https://i.ytimg.com/vi/TwP-gKBQyic/default.jpg)
![Shortcut key to Insert & Delete Slides in PowerPoint (2003-2016)](https://i.ytimg.com/vi/wsmjiGeo6ds/default.jpg)
![Lecture 16 | Adversarial Examples and Adversarial Training](https://i.ytimg.com/vi/CIfsB_EYsVI/default.jpg)
![Machine Learning | Artificial Neural Network](https://i.ytimg.com/vi/1BZm5RAljn0/default.jpg)
![Graph Theory Blink 3.3 (Graph percolation, perturbation and robustness)](https://i.ytimg.com/vi/TDc4YakqUZs/default.jpg)