Counterfactual Fairness: Matt Kusner, The Alan Turing Institute
Dr Kusner is a Research Fellow at The Alan Turing Institute. He was previously a visiting researcher at Cornell University, under the supervision of Kilian Q Weinberger, and received his PhD in Machine Learning from Washington University in St Louis. His research is in the areas of counterfactual fairness, privacy, budgeted learning, model compression and Bayesian optimisation.
Talk title: Counterfactual Fairness
Synopsis: Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. Matt will present a framework for modelling fairness using tools from causal inference. This definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. The framework is demonstrated on a real-world problem of fair prediction of success in law school.
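The fairness criterion in the synopsis can be sketched in code: infer the latent background variables for an individual, generate the counterfactual world where the protected attribute differs, and check that the decision is unchanged. This is a minimal illustration, not the talk's actual model; the structural equation (X = 2*A + U), the predictors, and all names are assumptions chosen for clarity.

```python
# Toy structural causal model (assumed for illustration, not from the talk):
#   protected attribute A in {0, 1}, latent background variable U,
#   observed feature X = 2*A + U.

def abduct_u(x, a):
    """Abduction: recover the latent U implied by observed (X=x, A=a)."""
    return x - 2 * a

def counterfactual_x(x, a, a_cf):
    """Counterfactual X had A been a_cf, holding the individual's U fixed."""
    u = abduct_u(x, a)
    return 2 * a_cf + u

def is_counterfactually_fair(predict, x, a, a_cf):
    """The decision is fair towards this individual if it is the same in
    (a) the actual world and (b) the counterfactual world A <- a_cf."""
    return predict(x, a) == predict(counterfactual_x(x, a, a_cf), a_cf)

# A predictor that uses X directly inherits A's causal influence on X.
naive = lambda x, a: x >= 3
# A predictor built only on the inferred latent U is invariant to A.
fair = lambda x, a: abduct_u(x, a) >= 1

x_obs, a_obs = 4, 1  # one observed individual
print(is_counterfactually_fair(naive, x_obs, a_obs, a_cf=0))  # False
print(is_counterfactually_fair(fair, x_obs, a_obs, a_cf=0))   # True
```

The three steps (abduction, intervention, prediction) mirror the standard counterfactual computation in causal inference; only a predictor that does not depend on the protected attribute, directly or through its descendants, passes the check.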
#aiattheturing
Video "Counterfactual Fairness: Matt Kusner, The Alan Turing Institute" from The Alan Turing Institute channel
Video information: published 28 February 2018, 20:44:17
Duration: 00:46:22