
Adversarial Examples for Deep Neural Networks

This lecture discusses adversarial examples for deep neural networks, covering white-box attacks, black-box attacks, real-world attacks, and adversarial training. Topics include Projected Gradient Descent, the Fast Gradient Sign Method, Carlini-Wagner methods, Universal Adversarial Perturbations, Adversarial Patches, Transferability Attacks, Zeroth Order Optimization, and more.
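As a quick illustration of two of the methods listed above, here is a minimal PyTorch sketch of the Fast Gradient Sign Method (Goodfellow et al. 2015) and Projected Gradient Descent. It assumes a differentiable classifier `model` and an input batch `x` with labels `y` whose pixel values lie in [0, 1]; the function names and hyperparameter values are illustrative, not taken from the lecture.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One gradient-sign step of size epsilon increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input a valid image (pixel values in [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()

def pgd_attack(model, x, y, epsilon=0.03, step=0.007, iters=10):
    """Projected Gradient Descent: iterate small gradient-sign steps, projecting
    back into an L-infinity ball of radius epsilon around the original input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()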

This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand.

The notes are available at: http://khoury.northeastern.edu/home/hand/teaching/cs7150-summer-2020/Adversarial_Examples_for_Deep_Neural_Networks.pdf

References:

Goodfellow et al. 2015:

Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." International Conference on Learning Representations, 2015.

Szegedy et al. 2014:

Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. "Intriguing properties of neural networks." International Conference on Learning Representations, 2014.

Carlini and Wagner 2017:

Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." In 2017 IEEE Symposium on Security and Privacy, pp. 39-57. IEEE, 2017.

Moosavi-Dezfooli et al. 2017:

Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. "Universal adversarial perturbations." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765-1773, 2017.

Chen et al. 2017:

Chen, Pin-Yu, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. "ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models." In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15-26, 2017.

Cheng et al. 2018:

Cheng, Minhao, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. "Query-efficient hard-label black-box attack: An optimization-based approach." arXiv preprint arXiv:1807.04457, 2018.

Liu et al. 2017:

Liu, Yanpei, Xinyun Chen, Chang Liu, and Dawn Song. "Delving into transferable adversarial examples and black-box attacks." International Conference on Learning Representations, 2017.

Brown et al. 2017:

Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint arXiv:1712.09665, 2017.

Wu et al. 2019:

Wu, Zuxuan, Ser-Nam Lim, Larry Davis, and Tom Goldstein. "Making an invisibility cloak: Real world adversarial attacks on object detectors." arXiv preprint arXiv:1910.14667, 2019.

Sharif et al. 2016:

Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition." In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540, 2016.

Eykholt et al. 2018:

Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. "Robust physical-world attacks on deep learning visual classification." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018.

Video information: uploaded June 5, 2020, 8:21:09. Duration: 00:43:54.