Targeted Adversarial Examples for Black Box Audio Systems
Amog Kamsetty (UC Berkeley)
Presented at the
2nd Deep Learning and Security Workshop
May 23, 2019
at the 2019 IEEE Symposium on Security & Privacy
San Francisco, CA
https://www.ieee-security.org/TC/SP2019/
https://www.ieee-security.org/TC/SPW2019/DLS/
ABSTRACT
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity and a 35% targeted attack success rate after 3000 generations, while maintaining 94.6% audio file similarity.
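The black-box attack the abstract describes (evolving small perturbations using only the model's outputs, with no access to gradients) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the `score` fitness function, population size, elitism, and noise parameters are all assumptions, and a real attack would query the target ASR model and compare its transcription against the target phrase.

```python
import random

def mutate(sample, noise_std=0.005, rate=0.05):
    """Add small Gaussian noise to a random fraction of the audio samples."""
    return [s + random.gauss(0, noise_std) if random.random() < rate else s
            for s in sample]

def crossover(a, b):
    """Mix two parent candidates elementwise at random."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def genetic_attack(audio, score, pop_size=10, elite=2, generations=100):
    """Evolve perturbed copies of `audio` to maximize a black-box score.

    `score` is a hypothetical stand-in for querying the target model and
    measuring similarity between its transcription and the target phrase;
    only scores are used, never gradients.
    """
    # Seed the population with the original plus mutated copies.
    population = [list(audio)] + [mutate(audio) for _ in range(pop_size - 1)]
    for _ in range(generations):
        # Keep the fittest candidates (elitism), breed the rest from them.
        population.sort(key=score, reverse=True)
        parents = population[:elite]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=score)
```

Because only fitness scores are needed, the same loop works against any ASR system reachable through a query interface; the gradient-estimation step the abstract mentions would refine the best candidates once the population stops improving.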
Video: "Targeted Adversarial Examples for Black Box Audio Systems", from the IEEE Symposium on Security and Privacy channel
Video information
Uploaded: September 26, 2019, 22:22:29
Duration: 00:21:55