Dr. Sara Hooker (Google Brain), The myth of interpretable, robust, compact and high performance DNNs
Video information
November 11, 2021, 9:48:22
00:41:29
Other videos on this channel
Fatemeh Mireshghallah (UCSD)
Alina Oprea, Machine Learning Integrity and Privacy in Adversarial Environments
James Bell (Turing), Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
Yizheng Chen (U of Maryland), Continuous Learning for Android Malware Detection
Ahmed Salem (Microsoft Research), Adversarial Exploration of Machine Learning Models' Accountability
Amrita Roy Chowdhury (UCSD), EIFFeL: Ensuring Integrity for Federated Learning
Privacy Linter and Opacus: Privacy Attacks and Differentially Private Training Open-Source Libraries
Jacob Steinhardt (UC Berkeley), The Science of Measurement in Machine Learning
Graham Cormode, Towards Federated Analytics with Local Differential Privacy
Jamie Hayes (DeepMind), Towards Transformation-Resilient Provenance Detection
Nicolas Papernot, What does it mean for ML to be trustworthy?
Eugene Bagdasaryan (Cornell Tech), Blind Backdoors in Deep Learning
Natasha Fernandes (UNSW), Quantitative Information Flow Refinement Orders and Application to DP
Matthew Jagielski (Google Research), Some Results on Privacy and Machine Unlearning
Shawn Shan, Security beyond Defenses: Protecting DNN systems via Forensics and Recovery
Ben Y. Zhao (University of Chicago), Adversarial Robustness via Forensics in Deep Neural Networks
Florian Tramer (Google Brain)
Vitaly Shmatikov, How to Salvage Federated Learning
Giulia Fanti (CMU), Tackling Data Silos with Synthetic Data
Konrad Rieck, Adversarial Preprocessing: Image-Scaling Attacks in Machine Learning