Attacking Machine Learning: On the Security and Privacy of Neural Networks
Nicholas Carlini, Research Scientist, Google
Despite significant successes, machine learning has serious security and privacy concerns. This talk will examine two of these. First, how adversarial examples can be used to fool state-of-the-art vision classifiers (to, e.g., make self-driving cars incorrectly classify road signs). Second, how to extract private training data out of a trained neural network.

Learning Objectives:
1: Recognize the potential impact of adversarial examples for attacking neural network classifiers.
2: Understand how sensitive training data can be leaked through exposing APIs to pre-trained models.
3: Know when you need to deploy defenses to counter these new threats in the machine learning age.

Pre-Requisites:
Understanding of threats on traditional classifiers (e.g., spam or malware systems), evasion attacks, and privacy, as well as the basics of machine learning.
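The first attack the talk covers, adversarial examples, can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al.). The toy logistic-regression "classifier" and its random weights below are stand-ins invented for illustration; real attacks of this kind target deep vision models.

```python
# Hedged sketch: FGSM-style adversarial perturbation against a toy
# logistic-regression "classifier". The model and weights are invented
# stand-ins, not the classifiers discussed in the talk.
import numpy as np

def predict_proba(w, b, x):
    """Model's probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(w, b, x, y, eps):
    """Perturb each input feature by eps in the direction that
    increases the classification loss (the sign of the gradient)."""
    # For logistic loss, the gradient w.r.t. the input x is (p - y) * w.
    p = predict_proba(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy weights (assumption: stand-in model)
b = 0.0
x = rng.normal(size=16)   # a "clean" input
y = 1.0 if predict_proba(w, b, x) > 0.5 else 0.0  # model's own label

x_adv = fgsm(w, b, x, y, eps=0.5)
# x_adv stays close to x, yet the model's score moves toward the
# opposite class -- the essence of an adversarial example.
print(predict_proba(w, b, x), predict_proba(w, b, x_adv))
```

The same one-step idea, applied with a small `eps` to every pixel of an image, is what makes a stop sign read as a speed-limit sign to a vision model while looking unchanged to a human.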
Video "Attacking Machine Learning: On the Security and Privacy of Neural Networks" from the RSA Conference channel
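The second attack in the abstract, leaking information about training data through a model's query API, can be sketched as a confidence-thresholding membership-inference test (in the spirit of Shokri et al.). The "API" below is a toy model that memorizes its training points; all names and data are illustrative assumptions, not the talk's actual method.

```python
# Hedged sketch: membership inference via confidence thresholding.
# A model that overfits (here, deliberately memorizes) returns
# systematically higher confidence on its training points, so an
# attacker with only API access can guess who was in the training set.
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(size=(50, 8))  # toy private training set
test = rng.normal(size=(50, 8))   # points the model never saw

def api_confidence(x):
    """Toy query API: confidence decays with distance to the nearest
    memorized training point (a caricature of overfitting)."""
    d = np.min(np.linalg.norm(train - x, axis=1))
    return np.exp(-d)

def infer_member(x, threshold=0.9):
    """Attacker guesses 'member' when the API is suspiciously confident."""
    return api_confidence(x) >= threshold

tpr = np.mean([infer_member(x) for x in train])  # members correctly flagged
fpr = np.mean([infer_member(x) for x in test])   # non-members wrongly flagged
print(tpr, fpr)
```

The gap between the true-positive and false-positive rates is the privacy leak: whenever confidence on training points is distinguishable from confidence on fresh points, exposing raw confidence scores through an API reveals membership.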
Other videos from this channel:
- On Evaluating Adversarial Robustness
- Security and Privacy of Machine Learning
- AI & ML in Cyber Security - Why Algorithms are Dangerous
- Adversarial Attacks on Neural Networks - Bug or Feature?
- The Five Most Dangerous New Attack Techniques and How to Counter Them
- Lecture 16 | Adversarial Examples and Adversarial Training
- Spiking Neural Networks for More Efficient AI Algorithms
- Machine Learning: Living in the Age of AI | A WIRED Film
- Trustworthy AI: Adversarially (non-)Robust ML | Nicholas Carlini Google AI | AI FOR GOOD DISCOVERY
- The Deep End of Deep Learning | Hugo Larochelle | TEDxBoston
- Webinar - Hacking AI: Security & Privacy of Machine Learning Models
- How to Build an Effective API Security Strategy
- Adversarial Machine Learning explained! | With examples.
- Yang Zhang (CISPA), Quantifying Privacy Risks of Machine Learning Models
- ML in Production: Serverless and Painless - Oliver Gindele, PhD | ODSC Europe 2019
- Adversarial Examples Are Not Bugs, They Are Features
- Vitaly Shmatikov, How to Salvage Federated Learning
- Recent Progress in Adversarial Robustness of AI Models: Attacks, Defenses, and Certification
- Building Neural Network Models That Can Reason