Boosting via digital environments

Examples


Simple decision trees to assist in judging the trustworthiness of information online

What is the boost?

Epistemic cues (e.g., the comments on an online article or the references it cites) indicate the quality of information online. These cues often differ from classic offline indicators of quality, such as the publisher, which plays a smaller role online than offline. Epistemic cues can be effectively summarized in simple decision trees that teach users a systematic way to check information online.

How does the boost work?

A fast-and-frugal decision tree shown online (for instance, one that pops up next to a fact-checking label) lists crucial epistemic cues in order of their importance. People who encounter such trees can eventually grow accustomed to systematically checking for these cues when evaluating the quality of a piece of online information.
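
The logic of a fast-and-frugal tree can be sketched in a few lines: cues are checked one at a time in order of importance, and each check can exit with a decision. The specific cues and their ordering below are illustrative assumptions, not the validated tree from the literature.

```python
# A minimal sketch of a fast-and-frugal tree (FFT) for judging online
# information. The cue names and their order are hypothetical examples.

def evaluate_article(article: dict) -> str:
    """Walk the cues in order of importance; each check can exit early."""
    # Cue 1: does a named, identifiable publisher stand behind the article?
    if not article.get("named_publisher"):
        return "distrust"
    # Cue 2: does the article cite verifiable sources or references?
    if not article.get("cites_sources"):
        return "distrust"
    # Cue 3: do comments or community notes flag it as misleading?
    if article.get("flagged_by_commenters"):
        return "verify elsewhere"
    return "tentatively trust"

print(evaluate_article({"named_publisher": True, "cites_sources": False}))
# -> distrust (exits at cue 2 without checking further cues)
```

The defining property of the FFT is that every non-final cue has an early exit, which is what makes the checking routine fast enough to habituate.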

Which competences does the boost foster?

Judging the trustworthiness of information online.

Which challenges does the boost tackle?

False and misleading information and the difficulties of scaling up quality control done by human fact-checkers who review content post hoc (e.g., snopes.com).

What is the evidence behind the boost?

This boost has not yet been directly tested in an online environment. See the related boost (“Simple decision trees to judge the trustworthiness of information online”) for more information on which epistemic cues users should be alerted to, as well as evidence for the general effectiveness of simple, fast-and-frugal decision trees.

How is the boost implemented?

This intervention can be implemented either by a platform showing the decision tree as a pop-up or as an independent external tool (e.g., a browser add-on). Fact-checking organizations could also use decision trees to make their processes more transparent.

Key reference

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4, 1102–1109. https://doi.org/10.1038/s41562-020-0889-7

Short lists of tips to assist in judging the trustworthiness of information online

What is the boost?

This boost takes the form of a simple list of tips to assist users in judging the trustworthiness of information online (much like the fast-and-frugal decision trees above).

How does the boost work?

Simple tips, such as “Be skeptical of headlines. False news stories often have catchy headlines in all caps with exclamation points. If shocking claims in the headline sound unbelievable, they probably are,” appear directly in the online environment (e.g., as pop-ups in a browser) so people can apply them to what they are seeing. The tips are easy to understand and remember, encouraging people to be more critical of what they see online.

Which competences does the boost foster?

Judging the trustworthiness of information online.

Which challenges does the boost tackle?

False and misleading information and the difficulties of scaling up quality control done by human fact-checkers who review content post hoc (e.g., snopes.com).

What is the evidence behind the boost?

A simple list of tips was tested in a field experiment on Facebook, with effects measured in the United States and India. It made people more skeptical of false news headlines; in the U.S. sample, these effects persisted for several weeks.

How is the boost implemented?

This boost does not necessarily require the cooperation of an online platform as it can be run as a public service campaign, but ideally it would be integrated directly into news feeds.

Key reference

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117, 15536–15545. https://doi.org/10.1073/pnas.1920498117

Network pop-ups: Visualizing how information has traveled online

What is the boost?

The way information travels on social media networks is a complex, self-organized process. The original source of information and the path it took can be difficult—if not impossible—to find. A network pop-up traces this path all the way back to the beginning and visualizes the involved actors in a network representation.

How does the boost work?

A network pop-up next to a post (for instance, a viral tweet) shows its history on social media: how it spread and where it came from (see the figure below for an example; see tracemap.info for a proof of concept and tracemap’s video embedded below).
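
At its core, reconstructing a post's history means following "shared from" links back to the original poster. The sketch below assumes a hypothetical data model in which each share records who it was shared from; real platforms would supply this data differently.

```python
# Illustrative sketch: reconstructing the path a post took through a
# social network, given hypothetical share records (child -> parent).

def trace_to_origin(account: str, shared_from: dict) -> list:
    """Follow 'shared from' links back to the original poster."""
    path = [account]
    while account in shared_from:
        account = shared_from[account]
        path.append(account)
    return path  # most recent sharer first, original source last

# user_a posted; user_b shared it; user_c shared user_b's share.
shares = {"user_c": "user_b", "user_b": "user_a"}
print(trace_to_origin("user_c", shares))
# -> ['user_c', 'user_b', 'user_a']
```

A network pop-up would render this path (and the branching cascade of all sharers) graphically rather than as a list, but the underlying trace is the same.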

Which competences does the boost foster?

Understanding the information landscape on social networks and how it changes. Routinely checking for the social history of a message and spotting inauthentic behavior (e.g., bots).

Which challenges does the boost tackle?

Radical or fringe opinions going viral, mindless sharing of posts, and inauthentic amplification.

What is the evidence behind the boost?

This boost has not yet been directly tested. However, two findings support the idea behind network pop-ups: the shape (breadth vs. depth) of a sharing cascade is predictive of its quality, and crowdsourced fact-checking can match the quality of third-party fact-checking (Pennycook & Rand, 2019; Resnick, Alfayez, & Gilbert, 2021).

How is the boost implemented?

Network pop-ups can be implemented by a platform, or as an independent external tool (e.g., a browser add-on).

Key reference

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4, 1102–1109. https://doi.org/10.1038/s41562-020-0889-7

Example of visualizing the sharing cascade of a tweet using tracemap.info.

Transparent and interactive news feed design

What is the boost?

The news feed plays an important role in how people consume and share information. The algorithms that curate news feeds guide users through the information landscape; if these algorithms are designed transparently and let users interact with the sorting criteria, they can increase users’ autonomy.

How does the boost work?

The criteria for determining how information is sorted in a news feed are displayed to users and users can change the importance of the different criteria (see figure below). People can thereby not only observe how the ranking of articles and posts changes under different settings, but also set their personal preferences clearly, without having to rely on inscrutable algorithms.
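Such a transparent ranking can be thought of as a weighted sum over displayed criteria, with the weights exposed as user-adjustable settings. The criteria names and weight values below are illustrative assumptions, not any platform's actual algorithm.

```python
# Sketch of a transparent, rule-based feed ranking. The criteria and
# weights are hypothetical; a real feed would expose them as sliders.

def rank_feed(posts: list, weights: dict) -> list:
    """Sort posts by a weighted sum of their displayed criteria."""
    def score(post):
        return sum(weights[c] * post[c] for c in weights)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "news",  "recency": 0.9, "popularity": 0.2, "from_friend": 0.0},
    {"id": "viral", "recency": 0.5, "popularity": 1.0, "from_friend": 0.0},
]
# A user who down-weights popularity sees the news item ranked first.
ranked = rank_feed(posts, {"recency": 1.0, "popularity": 0.1, "from_friend": 0.5})
print([p["id"] for p in ranked])
# -> ['news', 'viral']
```

Because every post's criterion values and the current weights are visible, users can verify for themselves why one post is ranked above another.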

Which competences does the boost foster?

Understanding the logic of content ranking and curation in an online news feed, as well as setting one’s own preferences (e.g., towards more news-oriented or more personal information).

Which challenges does the boost tackle?

Blind reliance on inscrutable algorithms (which tend to prioritize popular and emotional content) for curating and sorting the news people consume.

What is the evidence behind the boost?

This boost has not yet been directly tested. However, experiments have shown that click-based curation can promote emotional, moral, or low-quality content; a rule-based ranking algorithm that uses factors other than popularity and recency could mitigate these effects.

How is the boost implemented?

Implementation would require the cooperation of platforms, which would have to replace their news feed algorithms with rule-based versions, as well as change their interface.

Key reference

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4, 1102–1109. https://doi.org/10.1038/s41562-020-0889-7

Example of a transparently organized news feed on social media. Types of content are clearly distinguished, sorting criteria and their values are shown with every post, and users can adjust weightings. Based on Figure 3 in [Lorenz-Spreen et al. (2020)](https://doi.org/10.1038/s41562-020-0889-7).