Matthew Jagielski (Google Research), Some Results on Privacy and Machine Unlearning
https://prisec-ml.github.io
Video: Matthew Jagielski (Google Research), Some Results on Privacy and Machine Unlearning, from the Privacy and Security in ML Interest Group channel
Video information
Published: October 5, 2022, 20:32:12
Duration: 00:49:27
Other videos from the channel
- Fatemeh Mireshghallah (UCSD)
- Alina Oprea, Machine Learning Integrity and Privacy in Adversarial Environments
- James Bell (Turing), Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
- Yizheng Chen (U of Maryland), Continuous Learning for Android Malware Detection
- Ahmed Salem (Microsoft Research), Adversarial Exploration of Machine Learning Models' Accountability
- Privacy Linter and Opacus: Privacy Attacks and Differentially Private Training Open-Source Libraries
- Jacob Steinhardt (UC Berkeley), The Science of Measurement in Machine Learning
- Graham Cormode, Towards Federated Analytics with Local Differential Privacy
- Jamie Hayes (DeepMind), Towards Transformation-Resilient Provenance Detection
- Nicolas Papernot, What does it mean for ML to be trustworthy?
- Eugene Bagdasaryan (Cornell Tech), Blind Backdoors in Deep Learning
- Natasha Fernandes (UNSW), Quantitative Information Flow Refinement Orders and Application to DP
- Dr. Sara Hooker (Google Brain), The myth of interpretable, robust, compact and high performance DNNs
- Shawn Shan, Security beyond Defenses: Protecting DNN systems via Forensics and Recovery
- Ben Y. Zhao (University of Chicago), Adversarial Robustness via Forensics in Deep Neural Networks
- Florian Tramer (Google Brain)
- Vitaly Shmatikov, How to Salvage Federated Learning
- Giulia Fanti (CMU), Tackling Data Silos with Synthetic Data
- Konrad Rieck, Adversarial Preprocessing: Image-Scaling Attacks in Machine Learning
- Tianhao Wang (University of Virginia), Continuous Release of Data Streams under Differential Privacy