
Detecting Network Effects: Randomizing Over Randomized Experiments

Martin Saveski (MIT)
Jean Pouget-Abadie (Harvard University)
Guillaume Saint-Jacques (MIT)
Weitao Duan (LinkedIn)
Souvik Ghosh (LinkedIn)
Ya Xu (LinkedIn)
Edo Airoldi (Harvard University)

Randomized experiments—A/B tests—are the standard approach for evaluating the effect of new product features. They rely on the "stable unit treatment value assumption" (SUTVA), which states that treatment only affects treated users and does not spill over to their friends. Violations of SUTVA, common in features that exhibit network effects, result in inaccurate estimates of the treatment effect. In this paper, we leverage a new experimental design for testing whether SUTVA holds, without making any assumptions on how treatment effects may spill over between the treatment and the control group. We do so by simultaneously running completely randomized and cluster-based randomized experiments and comparing the difference of the resulting estimates, detailing known theoretical bounds on the Type I error rate. We provide practical guidelines for implementing this design on large-scale experimentation platforms. Finally, we deploy this design to LinkedIn's experimentation platform and apply it to two online experiments, highlighting the presence of network effects and bias in standard A/B testing approaches in a "real-world" setting.
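
The design described in the abstract can be illustrated with a short sketch: estimate the average treatment effect separately in a completely randomized (Bernoulli) arm and in a cluster-based randomized arm, then test whether the two estimates disagree, since under SUTVA both target the same quantity. This is a minimal illustration under assumed inputs, not the authors' estimator or test statistic; in particular, the bootstrap p-value below is a stand-in for the theoretical Type I error bounds referenced in the abstract, and all function and variable names are hypothetical.

```python
import numpy as np

def diff_in_means(y, treated):
    """Difference-in-means estimate of the average treatment effect."""
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    return y[treated].mean() - y[~treated].mean()

def network_effect_test(y_cr, t_cr, y_cbr, t_cbr, clusters, n_boot=2000, seed=0):
    """
    Compare the estimate from the completely randomized (CR) arm with the one
    from the cluster-based randomized (CBR) arm. Under SUTVA both estimators
    target the same quantity, so a large gap suggests network effects.
    The bootstrap p-value is illustrative only.
    """
    rng = np.random.default_rng(seed)
    y_cr, t_cr = np.asarray(y_cr, dtype=float), np.asarray(t_cr, dtype=bool)
    y_cbr, t_cbr = np.asarray(y_cbr, dtype=float), np.asarray(t_cbr, dtype=bool)
    clusters = np.asarray(clusters)

    # Observed gap between the two arms' difference-in-means estimates.
    delta = diff_in_means(y_cr, t_cr) - diff_in_means(y_cbr, t_cbr)

    cluster_ids = np.unique(clusters)
    gaps = np.empty(n_boot)
    for b in range(n_boot):
        # Resample individual users in the CR arm, whole clusters in the CBR arm.
        idx_cr = rng.integers(0, len(y_cr), len(y_cr))
        sampled = rng.choice(cluster_ids, size=len(cluster_ids), replace=True)
        idx_cbr = np.concatenate([np.flatnonzero(clusters == c) for c in sampled])
        gaps[b] = (diff_in_means(y_cr[idx_cr], t_cr[idx_cr])
                   - diff_in_means(y_cbr[idx_cbr], t_cbr[idx_cbr]))

    # Two-sided p-value: how often a recentered bootstrap gap is as extreme as delta.
    p_value = np.mean(np.abs(gaps - gaps.mean()) >= np.abs(delta))
    return delta, p_value
```

Given per-user outcomes and treatment indicators from the two arms, plus a vector mapping each CBR user to a graph cluster, the sketch returns the observed gap and a p-value; a small p-value would indicate that the two randomizations disagree and hence that SUTVA is likely violated.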

More on http://www.kdd.org/kdd2017/

Video: Detecting Network Effects: Randomizing Over Randomized Experiments, from the KDD2017 video channel.
Video information
Published: June 28, 2017, 12:56:18
Duration: 00:03:45