This summary is provided by the IPR Digital Media Research Center
Dr. Myojung Chung and colleagues analyzed how in-group vs. out-group social identities (in this case, political affiliation) affect how readers respond to political fact-checking messages attributed to a human or AI source.
The researchers conducted an experiment with 645 U.S.-based individuals, asking them to read a social media post containing misinformation about Democrats or Republicans. Participants then viewed a fact-checking message correcting the misinformation, attributed to one of four sources (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid), and completed a survey. The study also tested motivated reasoning, or a respondent's "attempt to maintain their own opinions by avoiding or ignoring new information that threatened their own beliefs."
Key Findings include:
1.) Democratic and Republican respondents rated fact-checking messages that corrected negative information about the other party (i.e., the out-group) as "less credible." In other words, individuals tend to reject corrective information that challenges their political mental model.
— When partisans evaluate whether a fact-checking message from human experts is accurate and trustworthy, what matters most to them is whether they feel the fact-checking source supports their political goals.
2.) When fact-checking messages were presented as either fully or partially "AI-generated," respondents showed significantly lower levels of motivated reasoning than when messages were fully human-generated.
3.) AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages, whereas partisan bias remained evident for the human experts and human experts-AI hybrid source labels.
4.) Human experts were rated the most trustworthy fact-checking source overall, but respondents were willing to disregard their messages when the messages didn't say what respondents wanted to hear.
Read the original study here.