
[GreHack 2017] Efficient Defenses against Adversarial Examples for Deep Neural Networks

Following the recent adoption of deep neural networks (DNN) in a wide range of application fields, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with the deliberate intention of producing a specific response from the system. Multiple attacks and defenses have been proposed in the literature, but our limited understanding of DNN sensitivity means adversarial samples remain an open problem. This talk proposes a new defense method, based on practical observations, that is easy to integrate into models and performs better than state-of-the-art defenses. The proposed solution is meant to reinforce the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. An extensive experimental study demonstrates the efficiency of our method against multiple attacks, comparing it to multiple defenses in both white-box and black-box setups. Additionally, the implementation adds almost no overhead to the training procedure while maintaining the prediction performance of the original model on clean samples. A live demo of creating adversarial images will take place during the talk.
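The abstract does not name the attack used in the live demo, but the fast gradient sign method (FGSM) is a standard way to craft the kind of adversarial samples described above: perturb the input along the sign of the loss gradient so a small change flips the model's response. The sketch below illustrates the idea on a toy logistic-regression "model" with a hand-derived gradient; the model, weights, and epsilon are illustrative assumptions, not the speaker's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: move x along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p is the
    predicted probability of class 1.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # toy model weights (assumption)
b = 0.0
x = rng.normal(size=8)          # clean input sample
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # use the model's own label

x_adv = fgsm(x, y, w, b, eps=0.5)

# The perturbation is bounded (|x_adv - x| <= eps per coordinate), yet the
# model's confidence in the original label drops.
print("clean prob:", sigmoid(w @ x + b))
print("adversarial prob:", sigmoid(w @ x_adv + b))
```

For a linear model this single gradient-sign step provably increases the loss, which is why even tiny, visually imperceptible perturbations can change a classifier's output; the defenses discussed in the talk aim to make predictions stable under exactly this kind of perturbation.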

Video "[GreHack 2017] Efficient Defenses against Adversarial Examples for Deep Neural Networks" from the GreHack channel.