How Polarization Can Help Solve the Misinformation Problem — David Rand (Trustworthy Info Lecture)
November 30, 2022 — When discussing the ills afflicting social media, there is a great deal of concern about the role played by polarization. Among other negative consequences, dislike of counter-partisans — and political motivations more generally — has been suggested to promote belief in, and sharing of, misinformation.
While polarization may be part of the misinformation problem, here I will argue that political motivations are also essential for one of the few viable approaches to identifying and combatting misinformation at scale — the wisdom of crowds. While professional fact-checkers play a critical role in countering misinformation, they are relatively few in number and cannot possibly keep up with the vast amount of content posted on social media every day. Recent work has suggested that it is possible to supplement professional fact-checking by harnessing the wisdom of crowds: the ratings of fairly small, politically balanced groups of laypeople can generate high levels of agreement with professional fact-checkers.
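The crowd-rating idea can be sketched as a toy simulation (this is not from the talk; the panel size, noise level, and partisan-bias parameter are illustrative assumptions). Each rater judges an item's veracity with some noise plus a partisan shift; because the panel is politically balanced, the shifts cancel on average and the panel mean tracks the item's actual veracity:

```python
import random

random.seed(0)

def crowd_rating(item_truth, n_per_side=8, noise=0.35, bias=0.2):
    """Average veracity ratings (0 = false, 1 = true) from a politically
    balanced panel. Each rater sees the item's true veracity plus Gaussian
    noise, shifted by a partisan bias that cancels across the two sides."""
    ratings = []
    for side in (-1, +1):              # equal numbers from each party
        for _ in range(n_per_side):
            r = item_truth + random.gauss(0, noise) + side * bias
            ratings.append(min(1.0, max(0.0, r)))  # clip to the rating scale
    return sum(ratings) / len(ratings)

# A misleading item (truth = 0.1) vs. an accurate one (truth = 0.9):
low = crowd_rating(0.1)
high = crowd_rating(0.9)
assert low < high  # balanced panels recover the ordering despite partisan bias
```

The point of the sketch is only that balancing the panel turns individual partisan bias into noise around the true signal, which is why even modest-sized lay panels can agree with professional fact-checkers.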
A central challenge for conducting crowd-based evaluations at scale, however, is the problem of encouraging participation — why should people bother to flag misleading content? In this talk, I will argue that not wanting people to be exposed to posts by counter-partisans helps to solve this participation problem by motivating people to flag. Although extreme partisans would flag all counter-partisan content as misleading regardless of its actual truth value, such extreme partisans are rare. A much larger group of people care somewhat about both truth and partisanship, such that they would only be sufficiently motivated to flag when content is both misleading and counter-partisan. For these people, the partisan motivation is needed to drive participation — without any partisan motive they would flag nothing.
I will present data from survey experiments conducted on Lucid and observational analyses of data from Twitter's crowdsourced fact-checking program. Consistent with the theoretical predictions, the results demonstrate that:
* misleading counter-partisan content is flagged more than misleading co-partisan content,
* non-misleading content is rarely flagged, and
* more politically extreme users, rather than undermining the system, produce more and better flags.
Thus, crowdsourced misinformation identification may succeed because of, rather than in spite of, polarization and political motivations.
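The motivational account above can be written as a toy threshold model (my illustration, not the talk's formal model; the weights and cost are made-up parameters). A user flags a post when their combined motivation — concern for truth when the post is misleading, plus partisan motive when it is counter-partisan — exceeds the effort cost of flagging:

```python
def flags(truth_weight, partisan_weight, misleading, counter_partisan, cost=1.0):
    """A user flags when total motivation clears the effort cost of flagging.
    misleading and counter_partisan are 0/1 indicators for the post."""
    motivation = truth_weight * misleading + partisan_weight * counter_partisan
    return motivation > cost

# A moderate partisan cares somewhat about both truth and party:
moderate = dict(truth_weight=0.7, partisan_weight=0.7)
assert flags(**moderate, misleading=1, counter_partisan=1)      # both motives combine
assert not flags(**moderate, misleading=1, counter_partisan=0)  # truth alone is not enough
assert not flags(**moderate, misleading=0, counter_partisan=1)  # partisanship alone is not enough

# A rare extreme partisan flags any counter-partisan content, true or not:
extreme = dict(truth_weight=0.1, partisan_weight=1.5)
assert flags(**extreme, misleading=0, counter_partisan=1)
```

Under these assumed weights, the model reproduces the qualitative pattern in the results: misleading counter-partisan content gets flagged most, non-misleading content is rarely flagged, and removing the partisan motive would leave moderates flagging nothing at all.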
. . . . . . . . . . . . . . . . .
David Rand is the Erwin H. Schell Professor and a professor of management science and brain and cognitive sciences at MIT, an affiliate of the MIT Institute for Data, Systems, and Society, and the director of the Human Cooperation Laboratory and the Applied Cooperation Team.
Bridging the fields of cognitive science, behavioral economics, and social psychology, David’s research combines behavioral experiments run online and in the field with mathematical and computational models to understand people’s attitudes, beliefs, and choices. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making. He focuses on illuminating why people believe and share misinformation and “fake news,” understanding political psychology and polarization, and promoting human cooperation.
David received his B.A. in computational biology from Cornell University in 2004 and his Ph.D. in systems biology from Harvard University in 2009, was a post-doctoral researcher in Harvard University’s Department of Psychology from 2009 to 2013, and was an assistant and then associate professor (with tenure) of psychology, economics, and management at Yale University prior to joining the faculty at MIT. His work has been published in peer-reviewed journals such as Nature, Science, Proceedings of the National Academy of Sciences, the American Economic Review, Psychological Science, Management Science, and the American Journal of Political Science, and has received widespread attention from print, radio, TV, and social media outlets. He has also written popular press articles for outlets including the New York Times, Wired, New Scientist, and the Psychological Observer.
David was named to Wired Magazine’s Smart List 2012 of “50 people who will change the world,” chosen as a 2012 Pop!Tech Science Fellow, received the 2015 Arthur Greer Memorial Prize for Outstanding Scholarly Research, was selected as fact-checking researcher of the year in 2017 by the Poynter Institute’s International Fact-Checking Network, and received the 2020 FABBS Early Career Impact Award from the Society for Judgment and Decision Making. Papers he has coauthored have been awarded best paper of the year in Experimental Economics, Social Cognition, and Political Methodology.
https://www.ischool.berkeley.edu/events/2022/how-polarization-can-help-solve-misinformation-problem
Video “How Polarization Can Help Solve the Misinformation Problem — David Rand (Trustworthy Info Lecture)” from the Berkeley School of Information channel.
Video information: posted December 13, 2022, at 2:46:17; duration 01:15:06.