Tin Nguyen: "Sensitivity of MCMC-based analyses to small-data removal"
Talk Title: Sensitivity of MCMC-based analyses to small-data removal
Thesis Committee: Tamara Broderick, Ashia Wilson, and Stefanie Jegelka
Talk Abstract: If the conclusion of a data analysis is sensitive to dropping very few data points, that conclusion might hinge on the particular data at hand rather than representing a more broadly applicable truth. How could we check whether this sensitivity holds? One idea is to consider every small subset of data, drop it from the dataset, and re-run our analysis. But running MCMC to approximate a Bayesian posterior is already very expensive; running it multiple times is prohibitive, and the number of re-runs needed here is combinatorially large. Recent work proposes a fast and accurate approximation to find the worst-case dropped data subset, but that work was developed for problems based on estimating equations and does not directly handle Bayesian posterior approximations using MCMC. We make two principal contributions in the present work. We adapt the existing data-dropping approximation to estimators computed via MCMC. Observing that Monte Carlo errors induce variability in the approximation, we use a variant of the bootstrap to quantify this uncertainty. We demonstrate how to use our approximation in practice to determine whether there is non-robustness in a problem. Empirically, our method is accurate in simple models, such as linear regression. In models with complicated structure, such as hierarchical models, the performance of our method is mixed.
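The brute-force baseline the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the talk's method: it drops every subset of up to k points from a toy dataset, re-fits a simple estimator (here, the sample mean standing in for a full MCMC re-run), and tracks the worst-case change. The final lines show why this is infeasible at scale: the number of re-runs grows combinatorially.

```python
# Hypothetical sketch of exhaustive small-data removal (not the talk's
# approximation). Each "re-fit" here is just a sample mean; in the setting
# the abstract describes, it would be a full MCMC run.
import itertools
import math

def worst_case_drop(data, k):
    """Re-fit after dropping each subset of size <= k; return the subset
    that changes the estimate most, with the size of that change."""
    n = len(data)
    full_estimate = sum(data) / n
    worst_change, worst_subset = 0.0, ()
    for size in range(1, k + 1):
        for subset in itertools.combinations(range(n), size):
            dropped = set(subset)
            kept = [x for i, x in enumerate(data) if i not in dropped]
            refit = sum(kept) / len(kept)
            change = abs(refit - full_estimate)
            if change > worst_change:
                worst_change, worst_subset = change, subset
    return worst_subset, worst_change

data = [1.0, 2.0, 2.5, 3.0, 10.0]  # one influential point
print(worst_case_drop(data, 1))    # dropping the last point moves the mean most

# The combinatorial blow-up: dropping up to 5 of 1000 points already
# requires trillions of re-fits -- hopeless if each one is an MCMC run.
n, k = 1000, 5
print(sum(math.comb(n, i) for i in range(1, k + 1)))
```

Even this tiny example makes the abstract's point: the search space is sums of binomial coefficients, so any practical check needs an approximation rather than exhaustive re-running.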
Video: Tin Nguyen, "Sensitivity of MCMC-based analyses to small-data removal", from the Tamara Broderick channel