Towards Analyzing Online Communities of Problematic Information: A Computational Approach


Authors

Phadke, Shruti

Abstract

Problematic information - information that is inaccurate, misleading, inappropriately attributed, or altogether fabricated - prevails in digital societies and spaces. Communities formed around sharing, theorizing, or mobilizing problematic information can lead online users down a path of social distrust, paranoia, or radicalization. Moreover, conspiracy theories and hateful ideologies propagating through such communities can tear at the social fabric and threaten civility in online spaces. Despite the obvious disruptive consequences of online communities of problematic information, we lack a large-scale, data-driven understanding of the deeper social processes underway. In this dissertation, I contribute empirical insights into engagement with, mobilization within, and disengagement from communities of problematic information using theory-guided quantitative methods. My analysis of problematic online communities spans multiple social media platforms, such as Reddit, Facebook, Twitter, and 4chan, and a range of methodologies, from machine learning and natural language processing to qualitative interviews. Equipped with these theories and methods, I analyze various instances of problematic information, such as conspiracy theories and hate movements in the West and coordinated political amplification in India. Specifically, I investigate what makes people engage in conspiracy theory discussions, how information is mobilized and content is framed in online hate movements and political campaigns, and how people may leave online conspiracy theory discussions. From a data science perspective, my work uncovers how social media users engage with problematic online content and leverage technologies across platforms.
From a social-psychological perspective, my research contributes to the literature on how various social, psychological, and cognitive processes motivate users' journeys through problematic information online. I show that studying large-scale digital traces contributes to the discussion of how thousands of online users self-select into conspiracy theory discussion communities, how they take various pathways of engagement inside those discussions, and how early signs of fracture in conspiracy worldviews result in eventual disengagement from conspiracy theory discussion communities. Moreover, by examining thousands of Facebook groups discussing white supremacist and anti-LGBTQ ideas, this research also reveals how problematic information is mobilized through accounts playing various social roles within the hate movement. In my ongoing work, I leverage the multidisciplinary understanding from my doctoral research to design online prompts for intervening in interactions surrounding climate change denial conspiracy theories. More broadly, I am excited to continue this work by exploring ethical, fair, and effective ways of intervening in open online discussions to reduce problematic content online.

Description

Thesis (Ph.D.)--University of Washington, 2023
