
Attacking Machine Learning: On the Security and Privacy of Neural Networks

Nicholas Carlini, Research Scientist, Google

Despite its significant successes, machine learning raises serious security and privacy concerns. This talk examines two of them: first, how adversarial examples can fool state-of-the-art vision classifiers (for example, making a self-driving car misclassify road signs); second, how private training data can be extracted from a trained neural network. (Minimal illustrative sketches of both attack families follow the prerequisites below.)

Learning Objectives:
1: Recognize the potential impact of adversarial examples for attacking neural network classifiers.
2: Understand how sensitive training data can be leaked by exposing APIs to pre-trained models.
3: Know when you need to deploy defenses to counter these new threats in the machine learning age.

Pre-Requisites:
Understanding of threats against traditional classifiers (e.g., spam or malware systems), evasion attacks, and privacy, as well as the basics of machine learning.
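The abstract itself contains no code; as an illustration only, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one of the simplest ways to craft an adversarial example. Carlini's own attacks are stronger optimization-based variants; the model, the epsilon budget, and the [0, 1] pixel range here are all assumptions, not details from the talk.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # x: a batch of images with pixels in [0, 1]; y: the true labels.
    # epsilon is an assumed perturbation budget, not a value from the talk.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient,
    # which increases the loss and tends to flip the classifier's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay inside the valid pixel range

A perturbation this small is typically imperceptible to a human, which is exactly what makes the road-sign scenario above worrying.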
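For the privacy half, a similarly minimal sketch of the loss-threshold membership-inference heuristic (Yeom et al.): a far cruder probe than the training-data extraction attacks discussed in the talk, but it illustrates the same underlying leakage. The threshold is a hypothetical parameter that would be calibrated on data known to be outside the training set.

import torch
import torch.nn.functional as F

@torch.no_grad()
def looks_like_training_data(model, x, y, threshold=0.5):
    # Models usually fit their training set more tightly than unseen data,
    # so an unusually low loss on (x, y) is weak evidence that (x, y) was
    # part of the training set. threshold=0.5 is a placeholder value.
    loss = F.cross_entropy(model(x), y)
    return loss.item() < threshold

Note that this probe only needs query access to the model's outputs, which is why exposing an API to a pre-trained model (learning objective 2) can already leak information about its training data.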

Video from the RSA Conference channel.
Video information
Published: May 16, 2019, 20:22:34
Duration: 00:48:27