Researcher(s)
- Varun Pappu, Computer Science, University of Delaware
Faculty Mentor(s)
- Matthew Mauriello, Computer and Information Sciences, University of Delaware
Abstract
Public concern over media-driven political polarization has sparked interest in how news articles influence perception, not only through emotional tone (sentiment) but also through how issues are framed. While sentiment analysis has been widely used to measure emotional intensity, it remains unclear whether sentiment alone explains perceived bias or whether framing (the inclusion or exclusion of multiple viewpoints) plays a more powerful role. This study addresses that gap by asking how variations in sentiment and ideological framing interact to shape perceptions of bias, trustworthiness, and willingness to engage with political news content. It also explores how disclosing that content has been modified by a large language model (LLM) influences users’ understanding and interpretation of that information, and whether the presence of multiple perspectives reduces perceived polarization even when sentiment remains constant. Using a 2×2 between-subjects experimental design, we will expose approximately 180 U.S.-based participants to one of four article versions that vary in sentiment (neutral vs. extreme) and framing (balanced vs. one-sided). Articles are adapted from real news stories, transformed using a panel of large language models (GPT, Grok, Gemini, and Claude), and validated by both computational tools and human reviewers. Participants recruited via MTurk will then rate the articles on perceived bias, trust, agreement, and emotional response. Early analyses from a pilot study suggest that one-sided framing can drive perceptions of political bias even in emotionally neutral articles, while balanced framing helps mitigate perceived bias regardless of tone. This work contributes to political communication research by offering a validated, replicable method for disentangling sentiment from framing, and it provides actionable insights for journalists, platform designers, and AI developers aiming to detect and reduce bias in news content.