RAND is helping fight the spread of disinformation through its Countering Truth Decay initiative. For this initiative, researchers have developed a database of online tools to help information consumers, researchers, and journalists navigate an increasingly difficult information environment. RAND summarized its goals in three parts:
- To identify and collect a set of resources in one place that can help users confront disinformation, gain greater awareness of the media ecosystem, and become more-savvy information media consumers
- To inform funders and developers about the set of tools currently under development, those tools in need of funding, and areas where additional development would be beneficial
- To provide a map of ongoing projects and developed tools that could serve as an input to efforts to build a field around the study of disinformation and its remedies
What are anti-disinformation tools? What do they look like?
Anti-disinformation tools challenge disinformation by verifying, fact-checking, and analyzing information online. They range from websites powered by human fact-checkers to apps that use artificial intelligence to detect bots and other sources of false information. RAND’s initiative focuses on the U.S. market and only includes tools created by civil society organizations and nonprofits.
Bot/spam detection tools identify automated accounts on social media platforms. Examples of these tools are listed below, followed by a short illustrative sketch:
- Bot Sentinel: A free platform developed to detect and track trollbots and untrustworthy Twitter accounts.
- Botcheck.me: A browser extension that uses machine learning to identify political propaganda bots on Twitter.
- Botometer: A web-based program that uses machine learning to classify Twitter accounts as bot or human.
- Hoaxy (Observatory on Social Media): A web-based tool that visualizes the spread of articles online.
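These services do not publish their full models, but the general idea of heuristic bot scoring can be illustrated with a short sketch. The features, thresholds, and weights below are hypothetical and are not the criteria used by Bot Sentinel, Botcheck.me, or Botometer.

```python
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    """Hypothetical features a bot detector might examine."""
    posts_per_day: float      # very high posting rates are suspicious
    account_age_days: int     # very new accounts are suspicious
    followers: int
    following: int
    has_default_avatar: bool

def bot_score(acct: AccountFeatures) -> float:
    """Return a rough 0.0-1.0 'bot likelihood' from simple heuristics.

    This is an illustrative toy model, not any real tool's algorithm.
    """
    score = 0.0
    if acct.posts_per_day > 100:          # sustained high-volume posting
        score += 0.35
    if acct.account_age_days < 30:        # brand-new account
        score += 0.25
    if acct.following > 0 and acct.followers / acct.following < 0.05:
        score += 0.20                     # follows many, followed by few
    if acct.has_default_avatar:           # no profile customization
        score += 0.20
    return min(score, 1.0)

# Example: a days-old account posting hundreds of times a day scores near 1.0.
suspect = AccountFeatures(posts_per_day=500, account_age_days=3,
                          followers=12, following=4000,
                          has_default_avatar=True)
print(f"bot score: {bot_score(suspect):.2f}")
```

Real detectors use trained classifiers over many more signals, but the output is the same kind of account-level likelihood score.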
Codes and standards are new norms, principles, or best practices that govern a set of processes or guide conduct and behavior; they can be applied across all of the tool categories described here. Several teams and initiatives exist to establish new codes and standards.
Credibility scoring attaches ratings or grades to individual sources based on their accuracy, transparency, quality, and other measures of trustworthiness; several such tools exist.
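While each service applies its own methodology, the basic idea of combining several trust measures into one score can be sketched as a weighted average. The criteria and weights below are invented for illustration and do not come from any actual rating service.

```python
# Hypothetical credibility-scoring sketch: combine per-criterion ratings
# (0-100) into a single weighted score. Criteria and weights are invented
# for illustration and do not reflect any real rating service.
CRITERIA_WEIGHTS = {
    "accuracy": 0.4,       # does the source correct errors and cite evidence?
    "transparency": 0.3,   # are ownership and funding disclosed?
    "quality": 0.3,        # editorial standards and sourcing practices
}

def credibility_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-criterion ratings, each on a 0-100 scale."""
    return sum(CRITERIA_WEIGHTS[name] * ratings[name]
               for name in CRITERIA_WEIGHTS)

example_source = {"accuracy": 90, "transparency": 70, "quality": 80}
print(f"credibility: {credibility_score(example_source):.0f}/100")
```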
Disinformation tracking studies the flow and prevalence of disinformation.
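At its simplest, tracking comes down to counting how often flagged URLs are shared and when. The sketch below illustrates that bookkeeping with made-up post records; real trackers such as Hoaxy draw on platform data at much larger scale.

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical stream of (date, shared_url) records; a real tracker would
# pull these from platform APIs or a research archive.
posts = [
    (date(2023, 5, 1), "http://example.com/false-claim"),
    (date(2023, 5, 1), "http://example.com/false-claim"),
    (date(2023, 5, 2), "http://example.com/false-claim"),
    (date(2023, 5, 2), "http://example.org/debunk"),
]

# Prevalence: total shares per URL.
prevalence = Counter(url for _, url in posts)

# Flow: shares of each URL broken down by day, to see how a story spreads.
daily = defaultdict(Counter)
for day, url in posts:
    daily[url][day] += 1

for url, count in prevalence.most_common():
    print(url, "total shares:", count, dict(daily[url]))
```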
Education/training includes interactive courses, games, and activities aimed at combating disinformation by teaching people new skills or concepts. Several such courses exist, including:
- First Draft Verification Curriculum
- Factitious
- Web Literacy
- Project Look Sharp Media Literacy Curriculum
Verification applies to fact-checking tools that seek to ascertain the accuracy of information or the authenticity of photos and videos.
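One small component of such a tool, checking a new claim against claims that have already been fact-checked, can be sketched with fuzzy string matching. The archive of claims and verdicts below is a made-up stand-in for a real fact-check database.

```python
from difflib import SequenceMatcher

# Hypothetical archive of previously fact-checked claims and their verdicts.
FACT_CHECKS = {
    "5g towers spread the virus": "False",
    "the moon landing was staged in a studio": "False",
    "drinking water helps prevent dehydration": "True",
}

def find_fact_check(claim: str, threshold: float = 0.6):
    """Return the closest previously checked claim and its verdict,
    or None if nothing in the archive is similar enough."""
    claim = claim.lower()
    best = max(FACT_CHECKS,
               key=lambda known: SequenceMatcher(None, claim, known).ratio())
    if SequenceMatcher(None, claim, best).ratio() >= threshold:
        return best, FACT_CHECKS[best]
    return None

# A paraphrased claim still matches the archived fact-check.
print(find_fact_check("5G towers are spreading the virus"))
```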
Whitelisting tools create trusted lists of IP addresses or websites to distinguish between trusted users or trusted sites and ones that may be fake or malicious.
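A minimal allowlist check might look like the following sketch; the trusted domains and IP ranges are placeholders, not a recommendation of any particular trust list.

```python
import ipaddress
from typing import Optional

# Hypothetical allowlist: trusted domains and trusted IP networks.
TRUSTED_DOMAINS = {"example.gov", "example.edu"}
TRUSTED_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]  # documentation range

def is_whitelisted(domain: Optional[str] = None, ip: Optional[str] = None) -> bool:
    """Return True if the domain is on the trusted list or the IP address
    falls inside a trusted network; everything else is treated as untrusted."""
    if domain is not None and domain.lower() in TRUSTED_DOMAINS:
        return True
    if ip is not None:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in TRUSTED_NETWORKS)
    return False

print(is_whitelisted(domain="example.gov"))   # True
print(is_whitelisted(ip="192.0.2.44"))        # True
print(is_whitelisted(domain="evil.example"))  # False
```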
RAND defines Truth Decay as the diminishing role of facts, data, and analysis in political and civil discourse and in the policymaking process. It is characterized by four trends: increasing disagreement about facts and data; a blurring of the line between opinion and fact; an increasing relative volume of opinion compared with fact; and declining trust in institutions that were once looked to as authoritative sources of factual information. Truth Decay threatens democracy, policymaking, and our ability to communicate civilly with one another.
To learn more about RAND’s research on disinformation, visit RAND’s website.