This summary is provided by the IPR Digital Media Research Center
Dr. Jieun Shin and Dr. Sylvia Chan-Olmsted examined Americans’ perceptions of dEFEND, an artificial intelligence (AI) application that detects false news. The researchers specifically examined respondents’ trust in the tool and their intent to use it.
A national survey of 1,000 U.S. adults was conducted in May 2021.
Key findings include:
1.) Trust was the strongest predictor of whether users would adopt the application.
2.) Younger respondents were more likely to trust the fake news detection application.
— No relationship was found between trust and either education level or income level.
3.) Trust levels were higher when users perceived the application to be highly competent at detecting fake news.
— Trust levels were also higher when the tool was more user-friendly.
4.) Individual factors played a key role in how much respondents trusted the fake news detector:
— Respondents who had previously used a fact-checking tool were more likely to trust the application.
— Respondents who rated themselves favorably in their ability to detect false news were also more likely to trust the application.
— Prior experience with AI technology and trust in AI technology in general also increased the likelihood that participants would trust the application.
Read the full report here.