Video information
Date: January 22, 2021, 20:07:40
Duration: 00:42:46
Other videos from this channel
Fatemeh Mireshghallah (UCSD)
Alina Oprea, Machine Learning Integrity and Privacy in Adversarial Environments
James Bell (Turing), Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
Yizheng Chen (U of Maryland), Continuous Learning for Android Malware Detection
Ahmed Salem (Microsoft Research), Adversarial Exploration of Machine Learning Models' Accountability
Privacy Linter and Opacus: Privacy Attacks and Differentially Private Training Open-Source Libraries
Jacob Steinhardt (UC Berkeley), The Science of Measurement in Machine Learning
Graham Cormode, Towards Federated Analytics with Local Differential Privacy
Jamie Hayes (DeepMind), Towards Transformation-Resilient Provenance Detection
Nicolas Papernot, What does it mean for ML to be trustworthy?
Eugene Bagdasaryan (Cornell Tech), Blind Backdoors in Deep Learning
Natasha Fernandes (UNSW), Quantitative Information Flow Refinement Orders and Application to DP
Dr. Sara Hooker (Google Brain), The myth of interpretable, robust, compact and high performance DNNs
Matthew Jagielski (Google Research), Some Results on Privacy and Machine Unlearning
Shawn Shan, Security beyond Defenses: Protecting DNN systems via Forensics and Recovery
Ben Y. Zhao (University of Chicago), Adversarial Robustness via Forensics in Deep Neural Networks
Florian Tramer (Google Brain)
Giulia Fanti (CMU), Tackling Data Silos with Synthetic Data
Konrad Rieck, Adversarial Preprocessing: Image-Scaling Attacks in Machine Learning
Tianhao Wang (University of Virginia), Continuous Release of Data Streams under Differential Privacy