Research article

Believe It or Not – No Support for an Effect of Providing Explanatory or Threat-Related Information on Conspiracy Theories’ Credibility

Authors:

Marcel Meuer, Department of Psychology, University of Mainz, DE

Aileen Oeberst, Department of Psychology, University of Hagen, DE

Roland Imhoff, Department of Psychology, University of Mainz, DE

Abstract

Past research suggests that certain content features of conspiracy theories may foster their credibility. In two experimental studies (N = 293), we examined whether conspiracy theories that explicitly offer a broad explanation for the respective phenomena and/or identify a potential threat posed by conspirators are granted more credibility than conspiracy theories lacking such information. Furthermore, we tested whether people with a pronounced predisposition to believe in conspiracies are particularly susceptible to such information. To this end, participants judged the credibility of four conspiracy theories that varied in the provision of explanatory and threat-related information. Interestingly, the specific type of information provided was not decisive. Instead, credibility judgments were only driven by people’s predisposition to believe in conspiracies. Findings suggest that there is no mechanistic, almost automatic effect of merely adding specific information and highlight the relevance of people’s conspiratorial mindset for the evaluation of conspiracy theories.

How to Cite: Meuer, M., Oeberst, A., & Imhoff, R. (2021). Believe It or Not – No Support for an Effect of Providing Explanatory or Threat-Related Information on Conspiracy Theories’ Credibility. International Review of Social Psychology, 34(1), 26. DOI: http://doi.org/10.5334/irsp.587
Handling Editor:
Marie-Pierre Fayant
Paris Descartes University, FR
Submitted on 25 Feb 2021. Accepted on 19 Aug 2021. Published on 19 Oct 2021.

‘Everyone loves a conspiracy’, Dan Brown writes in his world-famous novel The Da Vinci Code (2003: 169), and its sales numbers prove him right (Wyatt, 2005). This observation seems to apply not only to fiction but also to the real world, where conspiracy theories—explanations of past or current phenomena, which accuse a group of powerful individuals of acting in secret to achieve selfish, malevolent goals (Imhoff & Lamberty, 2020b)—are widespread and cover a wide range of topics (e.g., politics, economy, science; Bowman & Rugg, 2013; Graumann & Moscovici, 1987; Oliver & Wood, 2014a). Because believing in conspiracy theories is associated with several negative health-related, political, and social attitudes and behaviors (e.g., non-adherence to medical treatments, refusing to vote, willingness to use political violence; Bogart & Thorburn, 2005; Bogart et al., 2010; Imhoff et al., 2021; Oliver & Wood, 2014b; Uscinski & Parent, 2014), there has been great scholarly interest in psychological factors that determine belief in conspiracy theories (Douglas et al., 2019). So far, research has largely focused on features of individuals who believe in conspiracy theories, such as relevant states (e.g., feelings of uncertainty, lack of control; Sullivan et al., 2010; Van Prooijen & Jostman, 2013), traits (e.g., anxiety, paranoid ideation; Grzesiak-Feldman, 2013; Imhoff & Lamberty, 2018), and cognitive processes (e.g., illusory pattern perception, hypersensitive agency detection; Douglas et al., 2016; Van Prooijen et al., 2018). In this regard, a stable, individual tendency to believe in conspiracy theories has been proposed (Dagnall et al., 2015; Imhoff & Bruder, 2014).

Yet, although opinion formation theories suggest that ‘every opinion is a marriage of information and predisposition’ (Zaller, 1992: 6), only a handful of studies has drawn scholarly attention to the properties of conspiracy theories themselves (Bost & Prunier, 2013; Gebauer et al., 2016; Lyons et al., 2019; Mirabile & Horne, 2019; Raab et al., 2013; Uscinski et al., 2016). For instance, Bost and Prunier (2013) found that conspiracy theories were more credible when they provided a strong motive of the conspirator, and Gebauer et al. (2016) demonstrated that people with strong conspiracy beliefs are particularly susceptible to certain informational cues (i.e., information on direct causality and strong purposeful intentions). Because people are constantly exposed to a large amount of information of differing degrees of validity (Bohn & Short, 2012), this content-centered perspective on conspiracy theories might be a worthwhile complement to the hitherto predominant person-centered focus. Ultimately, this perspective may offer strategies for authorities and media agencies to improve their coverage of phenomena for which conspiracy theories may arise. The present paper builds on this line of research and examines whether two further types of information that are often part of conspiracy theories—explanatory and threat-related information—affect their credibility.

Explanatory Information

People have a fundamental drive to uncover the causal relations of their world (Gopnik, 2000; Lombrozo, 2006), and understanding these relations enables people to anticipate future consequences and shape their behavior accordingly (Hagmayer & Sloman, 2009). One reason why people are drawn to conspiracy theories might be that they, at least to some extent, feature characteristics typical of good explanations (Keeley, 1999). For instance, in a study by Mirabile and Horne (2019), between one third and one half of all participants agreed that the presented conspiracy theories offered explanations that qualify as broad, coherent, and simple (depending on the respective characteristic).

When it comes to events for which multiple explanations exist, people endorse explanations that, in their eyes, best unify the available observations into a coherent and simple account (Lipton, 2003; Lombrozo, 2016). Consequently, people might adhere to a conspiracy theory if they believe it to be a better explanation than the narrative provided by official sources such as the government (henceforth called ‘official account’). One major advantage of conspiracy theories is that they can include more observations than the respective official accounts (Keeley, 1999). Events are often determined by complex, not perfectly identifiable causal mechanisms, by mere chance, or by a combination of both (Taleb, 2001). Consequently, official accounts will likely contain explanatory gaps that cannot be filled by objective evidence. And even if official accounts present a comprehensive explanation, there will always be observations the official accounts are silent about, because they were overlooked or simply deemed irrelevant. Conspiracy theories, in turn, promise to explain not only the observations included in the official accounts but also the hidden underlying proceedings of the events (Sunstein & Vermeule, 2009). To do so, conspiracy theories incorporate observations not mentioned by the official accounts (e.g., people complaining about headaches after sighting contrails as evidence for chemtrails). Ironically, by assuming that the conspirators engage in a cover-up, even the absence of evidence or disproving evidence can be interpreted as evidence for a conspiracy (i.e., the cover-up was successful; Keeley, 1999; Sunstein & Vermeule, 2009).

Conspiracy theories, then, appear to provide a broader explanation than the respective official accounts. If conspiracy theories are attractive because they provide appealing explanations, their perceived credibility should be attenuated if we reduce their potency to provide a broad explanation. In the present research, we tested this assumption by examining the credibility of conspiracy theories that contain varying amounts of explanatory information.

Threat-Related Information

Everyday life entails many forms of threat to one’s self and well-being with various potential negative consequences, such as physical (e.g., bodily harm), relational (e.g., betrayal), material (e.g., poverty), social (e.g., injustice), and value-based (e.g., loss of purpose) detriments (Johnstone et al., 2018). Therefore, people need to adequately detect and manage threatening conditions, and, indeed, human perception has evolved to be extra sensitive to cues of potential threat (Haselton & Buss, 2000; Neuberg et al., 2011). For instance, threat-related information increases thorough information processing (Gadarian & Albertson, 2014; Taylor, 1991) and ease of recall (Bebbington et al., 2017). Moreover, threat-related information is more frequently transmitted than neutral information (Blaine & Boyer, 2018), and sources of threat-related information appear more competent than sources not mentioning any threat (Boyer & Parren, 2015). These findings are all in line with error management theory (Haselton & Buss, 2000), proposing that from an evolutionary perspective it is better to be safe than sorry whenever the cost of false alarms (e.g., see threat where there is none) is lower than that of misses (e.g., overlooking an actual threat).

Conspiracy theories put a lot of emphasis on threat-related information as they, by definition, allege powerful agents to engage in malevolent behavior (Swami & Furnham, 2014). Admittedly, the corresponding official accounts likely contain threat-related information to some degree as well, because the phenomena in question usually entail threat themselves (e.g., disasters; van Prooijen & Douglas, 2017). Conspiracy theories, however, give threat-related information more weight as the core element of any conspiracy theory is to uncover a covert threat. People might thus be drawn to conspiracy theories by the theories’ inherent focus on threat-related information.

In addition to the mere inclusion of threat-related information, conspiracy theories might also gain credibility as they promise to manage the threat they propose to have uncovered (Douglas et al., 2017). Specifically, conspiracy theories are particularly prevalent in the context of major, threatening events (van Prooijen & Douglas, 2017), when people experience aversive feelings such as anxiety (Grzesiak-Feldman, 2013) and lack of control (van Prooijen & Acker, 2015). Because major negative events usually involve a complex causality or even simple chance (e.g., the emergence of a deadly virus), the source of threat often remains ambiguous and hence uncontrollable (Sunstein & Vermeule, 2009; Taleb, 2001). Conspiracy theories, in turn, blame malevolent conspirators as the sole source of threat (e.g., the virus was created in the lab; Imhoff & Lamberty, 2020a), making the complex, ambiguous threat more concrete and understandable (Franks et al., 2013; Sullivan et al., 2010). Having identified conspirators as the source of threat, people gain (perceived) informational control (e.g., as a form of cheater detection; Bost & Prunier, 2013) and can then adapt their behavior to counter the threat (i.e., approach reaction) or minimize its effect on themselves (i.e., avoidance reaction; van Prooijen & van Vugt, 2018). For instance, they might engage in protest to trigger social change (Imhoff & Bruder, 2014) or display withdrawal behaviors, such as non-adherence to medical treatments (Bogart et al., 2010; Oliver & Wood, 2014b) or refusing to vote (Uscinski & Parent, 2014). Also, attributing threat to malevolent conspirators instead of assuming complex, systemic causes can buffer people’s satisfaction with their social status quo (Jolley et al., 2018).1 In the present research, we thus tested whether the inherent feature of conspiracy theories to identify alleged threats affects their credibility.

Furthermore, on top of the separate effects of explanatory and threat-related information, the credibility of conspiracy theories may be particularly high if both types of information are combined. Specifically, the perception of threat-related information triggers sense-making processes to quickly understand the nature of the threat and to take appropriate action in time (Neuberg et al., 2011; van Prooijen, 2020). Therefore, a conspiracy theory that first highlights the threat posed by malevolent conspirators and then provides further explanatory information could be particularly credible.

Conspiracy Mentality as a Moderator of the Information Effect

Merely focusing on the type of information as a predictor of the credibility of conspiracy theories, however, might be too narrow an approach. After all, people’s predispositions and prior beliefs affect the way information is interpreted (Jerit & Barabas, 2012; Kunda, 1990). One predisposition recently suggested to explain the belief in conspiracy theories is conspiracy mentality (Dagnall et al., 2015; Imhoff & Bruder, 2014), a mindset that is grounded in higher-order beliefs constituting a distrustful worldview constantly expecting deception (e.g., ‘nothing is as it seems’, ‘powerful people exploit their power’; Meuer & Imhoff, 2021; Wood et al., 2012).

In terms of explanatory value, being confronted with a conspiracy theory, a person with such a mindset may perceive the explanatory information to be particularly coherent with their prior knowledge and beliefs (Mirabile & Horne, 2019). Also, people high in conspiracy mentality tend to perceive causal patterns in randomness (van der Wal et al., 2018; van Prooijen et al., 2018), which might additionally foster the impression that the conspiratorial explanation is coherent. The perceived coherence of an explanation, in turn, is another decisive feature determining the appeal of an explanation (Lipton, 2003). Therefore, the higher people’s conspiracy mentality, the more explanatory value they might see in the explanatory information of conspiracy theories.

Similarly, people’s conspiracy mentality might determine the relevance of threat-related information for the perceived credibility of conspiracy theories. Specifically, people who tend to chronically feel vulnerable more often perceive situations as threatening (Öhman et al., 2001; Schaller et al., 2003). People with a pronounced conspiracy mentality, in turn, feature chronic feelings of vulnerability (Abalakina-Paap et al., 1999; Imhoff & Bruder, 2014), and they might thus be extra alert to cues of threat. Hence, people’s conspiracy mentality might moderate the effect of highlighting the threat posed by conspirators on conspiracy theories’ credibility.

Furthermore, conspiracy mentality might moderate the effect of information not only because individuals high in conspiracy mentality are particularly susceptible to explanatory and threat-related information, but also because individuals low in conspiracy mentality are particularly unsusceptible. Specifically, people low in conspiracy mentality feature high levels of trust in authorities and the official accounts they offer and might therefore reject conspiracy theories of any kind as a matter of principle (Imhoff & Bruder, 2014). Consequently, specific explanatory or threat-related information would not be used at all to assess the credibility of conspiracy theories. For these reasons, we additionally examined whether people’s conspiracy mentality moderates the effect of providing explanatory and threat-related information on the conspiracy theories’ credibility.

The Present Research

In two experimental online studies, we examined whether including explanatory (H1) or threat-related (H2) information in conspiracy theories increases their credibility and whether the conjunction of explanatory and threat-related information exerts an additional effect (H3). Furthermore, we tested whether participants’ conspiracy mentality moderates the effect of explanatory (H4) and threat-related (H5) information. To this end, in both studies, participants read four pretested conspiracy theories about different real-world phenomena (two already-existing and two novel conspiracy theories) and evaluated the credibility of each theory. We experimentally varied whether the texts contained explanatory and threat-related information, either between-subjects (Study 1) or within-subjects (Study 2).

Open Practices

For both studies, we provide the survey script, including study material, all main measures, and additional exploratory variables (German and English), as well as the pre-registration, data, syntax, and a report of supplemental exploratory analyses online on the Open Science Framework (https://osf.io/9z3gk/).

Materials Pretest

To examine the impact of explanatory and threat-related information on conspiracy theories’ credibility, we aimed to construct four versions of a conspiracy theory (2 × 2; explanatory power: low vs. high × threat: low vs. high) for each of four real-world phenomena. For two phenomena, the texts should portray existing conspiracy theories (e.g., vaccinations are ineffective and serve the financial enrichment of pharmaceutical companies), and for the remaining two phenomena, texts should depict novel conspiracy theories (e.g., the massive delay during the construction of BER Airport is due to the erection of a secret military facility below the airport).

To this end, we created various statements suggesting a conspiracy for a total of six phenomena (i.e., we started with six phenomena to select the four most suitable). Specifically, for each phenomenon, we assembled six to eight distinct statements containing predominantly explanatory information (e.g., ‘The list of 75,000, sometimes serious, airport construction issues was deliberately staged in order to gain time and money for the inconspicuous construction of the military facility.’) and four to six statements containing predominantly threat-related information (e.g., ‘The federal government uses the military facility to store secret nuclear weapons – a very risky venture in a city of millions.’), the number of statements per phenomenon depending on the availability of non-redundant content.2

We then conducted a pretest to choose statements for each phenomenon that were most suitable to construct four distinct versions of a conspiracy theory with varying explanatory power (low vs. high) and threat (low vs. high). A total of 246 psychology undergraduates participated in this online study. We excluded 25 participants who did not complete the study (n = 21) or failed to correctly answer an attention check (n = 4), leading to a final sample size of N = 221 (192 women, 28 men, 1 nonbinary; Mage = 23.54, SD = 4.84). After a short training defining explanatory power and threat, participants were assigned to judge the statements for one of the six phenomena (35 ≤ n ≤ 38). They first read a short introduction that briefly described the phenomenon and the conspiracy-related claim. They then received all statements in random order and indicated for each to which extent it described a threat (‘1 – no threat’ to ‘5 – very great threat’) and possessed explanatory power (‘1 – no explanatory power’ to ‘5 – very great explanatory power’). Furthermore, they rated the extent to which each statement was understandable and irrelevant for the phenomenon in question (‘1 – not at all’ to ‘5 – completely’).

To construct four versions of a conspiracy theory for each phenomenon, we selected six statements (i.e., three explanatory and three threat-related) that best differentiated between explanatory power and threat while fulfilling the following criteria: As explanatory statements, we only accepted statements that were rated to a) contain significantly more explanatory power than threat, b) possess sufficient explanatory power (mean significantly above 3), but to c) not contain too much threat (mean significantly below 4). Likewise, threat-related statements had to be rated to a) possess significantly more threat than explanatory power, b) contain sufficient threat (mean significantly above 3), but to c) not possess too much explanatory power (mean significantly below 4). In addition, both types of statements had to be d) understandable (mean significantly above 3) and e) not too irrelevant for the phenomenon in question (mean significantly below 4).3 Based on these criteria, the statements of five of the six phenomena were suitable (i.e., vaccinations, condensation trails, BER airport, intelligent speakers, bomb disposals). For each phenomenon, the three averaged explanatory statements possessed significantly more explanatory power than the threat-related statements (5.50 < ts < 10.36, ps < .001, 0.93 < dzs < 1.70), and, vice versa, the averaged threat-related statements described significantly more threat than the explanatory statements (7.00 < ts < 18.88, ps < .001, 1.15 < dzs < 3.19). Also, the three averaged explanatory statements were significantly higher in explanatory power than in threat (3.16 < ts < 10.95, ps ≤ .003, 0.52 < dzs < 1.80), and the averaged threat-related statements were significantly higher in threat than in explanatory power (8.82 < ts < 16.21, ps < .001, 1.54 < dzs < 2.66, see Table A on OSF for all results for each phenomenon).
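To make the screening rule concrete, the following sketch (not the authors’ code) shows how criteria a)–e) could be checked for a single candidate explanatory statement in Python; the DataFrame `ratings` and its column names ('explanatory', 'threat', 'understandable', 'irrelevant') are hypothetical stand-ins for the pretest ratings of that statement.

```python
# Hypothetical illustration of the pretest selection criteria; column names are assumed.
import pandas as pd
from scipy import stats

ALPHA = .05

def qualifies_as_explanatory(ratings: pd.DataFrame, alpha: float = ALPHA) -> bool:
    """Apply criteria a)-e) to the per-participant ratings of one candidate statement."""
    # a) significantly more explanatory power than threat (paired t-test)
    t_a, p_a = stats.ttest_rel(ratings['explanatory'], ratings['threat'])
    # b) mean explanatory power significantly above the scale midpoint of 3
    t_b, p_b = stats.ttest_1samp(ratings['explanatory'], popmean=3)
    # c) mean threat significantly below 4
    t_c, p_c = stats.ttest_1samp(ratings['threat'], popmean=4)
    # d) understandable (mean significantly above 3)
    t_d, p_d = stats.ttest_1samp(ratings['understandable'], popmean=3)
    # e) not too irrelevant (mean significantly below 4)
    t_e, p_e = stats.ttest_1samp(ratings['irrelevant'], popmean=4)
    return (t_a > 0 and p_a < alpha and
            t_b > 0 and p_b < alpha and
            t_c < 0 and p_c < alpha and
            t_d > 0 and p_d < alpha and
            t_e < 0 and p_e < alpha)
```

The criteria for threat-related statements would mirror this check with the roles of the explanatory-power and threat ratings reversed.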

Because only four phenomena were needed for the present studies and the conspiracy claim on bomb disposals involved a larger set of presumptions (i.e., extraterrestrial intelligence as agents), we omitted this phenomenon. For each of the remaining four phenomena, we then constructed four versions of a conspiracy theory with 1) only a short introductory passage describing the phenomenon and the conspiracy-related claim (low explanatory power, low threat version), 2) the introductory passage plus the three explanatory statements (high explanatory power, low threat), 3) the introductory passage plus the three threat-related statements (low explanatory power, high threat), or 4) the introductory passage plus both explanatory and threat-related statements (high explanatory power, high threat). All 4 (phenomenon) × 4 (version) conspiracy theories are available on OSF.

Study 1

Method

Participants and Design

We invited psychology undergraduates from a German distance-learning university to participate in a study on the perception of societal phenomena. We preregistered a minimum sample size of 150 participants, and a total of 156 undergraduates completed our German-language study in exchange for course credits.4 As preregistered, we excluded participants who did not consent to data usage (n = 7), indicated that they were distracted or did not follow the instructions (n = 9), or failed a test question that assessed whether participants had read the material (n = 4), yielding a final sample size of N = 136 (109 women, 27 men; Mage = 32.23, SD = 9.37). Participants were randomly assigned to receive all conspiracy theories in one of the four versions (low explanatory power, low threat, n = 34; high explanatory power, low threat, n = 32; low explanatory power, high threat, n = 33; high explanatory power, high threat, n = 37), comprising a 2 (explanatory power) × 2 (threat) between-subjects design.5

Procedure

To start the online experiment, participants first had to read legal and ethical information and provide informed consent. They then responded to the 12-item Conspiracy Mentality Scale (7-point scale from strongly disagree to strongly agree, e.g., ‘A few powerful groups of people determine the destiny of millions’, Cronbach’s α = .91, Imhoff & Bruder, 2014). To conceal our research objective, we additionally included the Balanced Short Scale for Right-Wing Authoritarianism (6 items, Aichholzer & Zeglovits, 2015) and the General Just-World-Scale (6 items, Dalbert et al., 1987) in the same block of questions. Next, we presented an attention check, and participants could only proceed after responding correctly. Participants then read one conspiracy theory for each of the four phenomena in random order, all in the same version according to their experimental assignment. For each theory, they indicated how credible they considered the presented theory (7-point scale from ‘1 – not credible at all’ to ‘7 – completely credible’) and, for secondary analyses, how likely it was that they would talk to someone about the theory after completing the study (‘1 – very unlikely’ to ‘7 – very likely’). Subsequently, participants were asked to select the four covered phenomena out of a list of eight, testing whether they had actually read the four texts (serving as an exclusion criterion). For exploratory purposes, participants then had the opportunity to state in an open text format which of the conspiracy theories they found particularly credible or particularly incredible. Subsequently, they provided demographic information. After a full debriefing, participants stated whether they had followed the instructions and whether they had been able to participate without distraction, and they could decide about the use of their data (all serving as exclusion criteria).

Results

To examine whether the inclusion of explanatory or threat-related information increased the conspiracy theories’ credibility (H1–H3), we computed a 2 × 2 between-subjects ANOVA with explanatory power (low vs. high) and posed threat (low vs. high) as independent variables and the averaged credibility of the four conspiracy theories (Cronbach’s α = .74) as the dependent variable. According to sensitivity analyses with G*Power (Faul et al., 2007), this allowed us to detect main and interaction effects of explanatory and threat-related information sized ηp2 ≥ .055 with sufficient power (specifications: ANOVA fixed effects, special, main effects and interactions, α = .05, 1–β = .80, N = 136, dfnum = 1, 4 groups, f ≥ .242). The analysis yielded neither a significant main effect of explanatory power, F(1, 132) = 0.08, p = .784, ηp2 = .001, nor a significant main effect of threat, F(1, 132) = 0.04, p = .837, ηp2 < .001, nor a significant interaction of the two, F(1, 132) = 0.01, p = .914, ηp2 < .001 (see Table 1 for descriptive data). To gain insight into the strength of empirical support for the H0, we additionally computed an exploratory Bayesian between-subjects ANOVA with JASP (Version 0.14.1.0), using the default prior options (r = 0.5 for the fixed effects). Bayes factors indicated substantial evidence for H0 for both explanatory, BFexclusion = 7.58, and threat information, BFexclusion = 7.70, as well as the interaction effect, BFexclusion = 39.06 (see OSF for the JASP output). Thus, we found no support that the conspiracy theories’ credibility differed as a function of the specific information contained.
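For readers who want to retrace this analysis step, the following is a minimal Python sketch (not the authors’ analysis script; the reported Bayesian ANOVAs were run in JASP). It assumes a hypothetical long-format DataFrame `df` with one row per participant and columns 'explanatory' and 'threat' (coded low/high) and 'credibility' (the averaged judgment).

```python
# Minimal sketch of the 2 x 2 between-subjects ANOVA; `df` and its column names are assumed.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Sum-to-zero (effect) coding plus Type III sums of squares mirrors conventional ANOVA output.
model = ols("credibility ~ C(explanatory, Sum) * C(threat, Sum)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=3)

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual);
# only the rows for the two main effects and their interaction are of interest.
anova["eta_p2"] = anova["sum_sq"] / (anova["sum_sq"] + anova.loc["Residual", "sum_sq"])
print(anova)
```

The Bayesian counterparts (BFexclusion) reported above were obtained with JASP’s default priors (r = 0.5 for fixed effects) and are not reproduced in this sketch.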

Table 1

Descriptive Data of the Credibility Judgments across Information Conditions (Study 1).

Condition              | Low Threat: M (SD), n | High Threat: M (SD), n | Overall: M (SD), n
Low Explanatory Power  | 2.34 (1.26), 34       | 2.40 (1.27), 33        | 2.37 (1.26), 67
High Explanatory Power | 2.30 (1.13), 32       | 2.32 (1.02), 37        | 2.32 (1.07), 69
Overall                | 2.32 (1.19), 66       | 2.36 (1.14), 70        | 2.34 (1.16), 136

Note: Credibility judgments were given on 7-point scales from ‘1 – not credible at all’ to ‘7 – completely credible’.

Next, we tested whether participants’ conspiracy mentality moderated the effect of explanatory and threat-related information on the theories’ credibility (H4, H5). To this end, we again computed a 2 × 2 between-subjects ANOVA, but this time we additionally included conspiracy mentality as a covariate. This ANCOVA allowed us to detect interaction effects of type of information and conspiracy mentality sized ηp2 ≥ .055 with sufficient power (specifications: linear multiple regression: fixed model, single regression coefficient, two-tailed, α = .05, 1–β = .80, N = 136, 7 predictors, f² ≥ .059; Faul et al., 2007).6 The analysis yielded a significant main effect of conspiracy mentality, F(1, 128) = 57.34, p < .001, ηp2 = .309. There was, however, neither a significant interaction of conspiracy mentality with explanatory power, F(1, 128) = 0.69, p = .407, ηp2 = .005, nor with threat, F(1, 128) = 0.03, p = .869, ηp2 < .001. Likewise, we additionally computed an exploratory Bayesian between-subjects ANCOVA with JASP, using the default prior options. Bayes factors indicated substantial evidence for H0 for the interaction of conspiracy mentality with both explanatory, BFexclusion = 10.15, and threat information, BFexclusion = 18.78. Furthermore, there was overwhelming support for the main effect of conspiracy mentality, BFinclusion = 1,060,000,000. Hence, it was solely participants’ predisposition to believe in conspiracies that predicted how credible they deemed the conspiracy theories. The inclusion of explanatory or threat-related information did not make any difference.7
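Following the regression logic the authors describe in note 6, the covariate model can be written as a single linear regression with effect-coded factors, the mean-centered covariate, and all interaction terms. The sketch below (hypothetical column names, not the authors’ script) shows this specification; the conspiracy mentality × explanatory power and conspiracy mentality × threat coefficients correspond to H4 and H5.

```python
# Sketch of the ANCOVA written as a multiple regression (see note 6); names are assumed.
from statsmodels.formula.api import ols

df["cm_c"] = df["conspiracy_mentality"] - df["conspiracy_mentality"].mean()  # center covariate
df["exp_c"] = df["explanatory"].map({"low": -0.5, "high": 0.5})              # effect-code factors
df["thr_c"] = df["threat"].map({"low": -0.5, "high": 0.5})

# Full factorial of the two factors and the covariate: 7 predictors plus intercept.
ancova = ols("credibility ~ exp_c * thr_c * cm_c", data=df).fit()
print(ancova.summary())  # exp_c:cm_c tests H4, thr_c:cm_c tests H5
```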

Discussion

The findings of this study do not suggest that offering an explanation or addressing threat affects the perceived credibility of conspiracy theories. Instead, whether someone found a specific conspiracy theory credible depended solely, and substantially (i.e., with large effect sizes), on their general predisposition to see conspiracies behind societal phenomena. To properly interpret these results, however, it is important to emphasize that this study comprised a between-subjects design; that is, participants received all conspiracy theories in only one of the four text versions. Even though this design should assure that participants remained blind to the conditions and hypotheses (i.e., high internal validity), it might reflect reality only to a limited extent (i.e., low external validity). In the real world, a multitude of news and theories containing a wide variety of information compete for supporters. Here, differences of all kinds between the various theories are salient. People, in turn, are particularly vigilant for differences (Olson & Janes, 2002). Therefore, to examine the influence of specific types of information on conspiracy theories’ credibility, a within-subjects design might be more informative as it makes the presence or absence of explanatory and/or threat-related information more salient (Greenwald, 1976; see Wänke & Hansen, 2015 for an example of how findings in between- vs. within-subjects designs can differ). In addition, within-subjects designs typically have greater power to detect effects (as between-subjects differences are not added to the error variance). Hence, before jumping to a conclusion based on the between-subjects data, we conducted a second study in which we closely replicated Study 1 but in a within-subjects design.

Study 2

Method

Participants and Design

We again aimed for a minimum sample size of 150 participants. Because we needed to exclude a relatively large number of participants in Study 1, we requested 200 Amazon Mechanical Turk™ workers to participate in a study on the perception of societal phenomena for a $2.00 wage. As preregistered, we excluded participants who did not consent to data usage (n = 2), indicated that they were distracted or did not follow the instructions (n = 4), or failed an attention check (n = 37), yielding a final sample size of N = 157 (69 women, 85 men, 3 nonbinary; Mage = 36.32, SD = 11.01). In contrast to Study 1, participants received the conspiracy theories for the four phenomena each in a different version, comprising a 2 (explanatory power: low vs. high) × 2 (threat: low vs. high) within-subjects design.

Procedure

The procedure was almost identical to that of Study 1 but with two major adaptations. First, we used an English translation of the material, which was carefully proofread by a native English speaker. Second, participants again received one conspiracy theory for each of the four phenomena in random order, but this time each in a different version (i.e., low explanatory power, low threat; high explanatory power, low threat; low explanatory power, high threat; high explanatory power, high threat). Across all participants, we counterbalanced in which version each theory was presented by assigning participants to one of four theory-version combinations organized in a Latin square.
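One simple way to express such a counterbalancing scheme (a sketch with illustrative labels; the actual assignment was implemented in the survey software) is a cyclic 4 × 4 Latin square: each of four participant groups sees every phenomenon exactly once and every version exactly once, and across groups every phenomenon appears in every version.

```python
# Cyclic Latin-square counterbalancing; phenomenon and version labels are illustrative.
PHENOMENA = ["BER airport", "intelligent speakers", "condensation trails", "vaccinations"]
VERSIONS = ["low explanation / low threat", "high explanation / low threat",
            "low explanation / high threat", "high explanation / high threat"]

def version_assignment(group: int) -> dict:
    """Map each phenomenon to a text version for one of the four counterbalancing groups."""
    return {PHENOMENA[i]: VERSIONS[(i + group) % 4] for i in range(4)}

for g in range(4):
    print(g, version_assignment(g))
```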

Results and Discussion

First, to account for differences between the overall credibility of the four conspiracy theories (BER Airport: M = 3.12, SD = 1.76; intelligent speakers: M = 3.69, SD = 2.03; condensation trails: M = 2.52, SD = 1.86; vaccinations: M = 2.88, SD = 2.02), we centered the ratings of each theory separately by subtracting the mean credibility of each theory from the respective ratings (as preregistered). To examine whether the inclusion of explanatory or threat-related information increased the conspiracy theories’ credibility (H1–H3), we computed a 2 × 2 within-subjects ANOVA with explanatory power (low vs. high) and posed threat (low vs. high) as independent variables and the centered credibility as the dependent variable. This allowed us to detect main and interaction effects of explanatory and threat-related information sized ηp2 ≥ .049 with sufficient power (specifications: ANOVA: repeated measures, within factors, α = .05, 1–β = .80, N = 157, 1 group, 2 measurements, ɛ = 1, f ≥ .226; Faul et al., 2007).8 The analysis yielded neither a significant main effect of explanatory power, F(1, 156) = 0.85, p = .357, ηp2 = .005, nor a significant main effect of threat, F(1, 156) = 3.09, p = .081, ηp2 = .019, nor a significant interaction of the two, F(1, 156) = 0.59, p = .443, ηp2 = .004 (see Table 2 for non-centered means and standard deviations). To gain insight into the strength of empirical support for the H0, we additionally computed an exploratory Bayesian within-subjects ANOVA with JASP, using the default prior options. Bayes factors indicated substantial evidence for H0 for both explanatory, BFexclusion = 10.91, and threat information, BFexclusion = 4.00, as well as the interaction effect, BFexclusion = 49.38. Thus, we found no support that including explanatory or threat-related information affected the conspiracy theories’ credibility.
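The centering step and the repeated-measures ANOVA could look roughly as follows in Python (a sketch under assumed column names, not the authors’ script; the Bayesian analysis was again run in JASP). It assumes a hypothetical long-format DataFrame `long` with one row per participant × phenomenon and columns 'subject', 'theory', 'explanatory', 'threat', and 'credibility'.

```python
# Sketch of the preregistered centering and the 2 x 2 within-subjects ANOVA; names are assumed.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Subtract each theory's grand-mean credibility so that overall differences between the
# four conspiracy theories do not enter the within-subjects comparison.
long["credibility_c"] = long["credibility"] - long.groupby("theory")["credibility"].transform("mean")

# One observation per participant and explanatory-power x threat cell (each cell is a different theory).
rm = AnovaRM(long, depvar="credibility_c", subject="subject",
             within=["explanatory", "threat"]).fit()
print(rm)
```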

Table 2

Descriptive Data of the Credibility Judgments across Information Conditions (Study 2).

Condition              | Low Threat: M (SD) | High Threat: M (SD) | Overall: M (SD)
Low Explanatory Power  | 3.04 (1.96)        | 2.97 (1.97)         | 3.00 (1.72)
High Explanatory Power | 3.23 (1.95)        | 2.97 (1.97)         | 3.10 (1.67)
Overall                | 3.13 (1.67)        | 2.97 (1.68)         | 3.05 (1.53)

Note: Credibility judgments were given on 7-point scales from ‘1 – not credible at all’ to ‘7 – completely credible’. N = 157.

Next, we tested whether participants’ conspiracy mentality moderated the effect of explanatory and threat-related information on the conspiracy theories’ credibility. To this end, we again computed a 2 × 2 within-subjects ANOVA but additionally included conspiracy mentality (Cronbach’s α = .90) as a covariate. This rm-ANCOVA allowed us to detect interaction effects of type of information and conspiracy mentality sized ηp2 ≥ .049 with sufficient power (specifications: ANOVA: repeated measures, within-between interaction, α = .05, 1–β = .80, N = 157, 2 groups, 2 measurements, ɛ = 1, f ≥ .226; Faul et al., 2007).9 The analysis yielded a significant main effect of conspiracy mentality, F(1, 155) = 30.54, p < .001, ηp2 = .17. There was, however, no significant interaction of conspiracy mentality with explanatory power, F(1, 155) = 0.04, p = .833, ηp2 < .001, or with threat, F(1, 155) = 1.75, p = .188, ηp2 = .011. Likewise, we additionally computed an exploratory Bayesian within-subjects ANCOVA with JASP, using the default prior options. Bayes factors indicated substantial evidence for H0 for the interaction of conspiracy mentality with both explanatory, BFexclusion = 40.14, and threat information, BFexclusion = 8.74. Furthermore, there was overwhelming support for the main effect of conspiracy mentality, BFinclusion = 35,121.80. Hence, participants’ predisposition to believe in conspiracies was, again, the only predictor of how credible they deemed the theories—irrespective of the inclusion of explanatory or threat-related information.10 Overall, the pattern of results is thus identical to that of Study 1.
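Because the covariate-by-within-factor interaction is awkward to handle in standard rm-ANCOVA power software (see note 9), one alternative way to probe H4 and H5 without dichotomizing conspiracy mentality is a linear mixed model with a random intercept per participant. This is not the analysis the authors report, only a hedged sketch of a comparable specification under the same assumed column names as in the previous sketch.

```python
# Alternative (not the reported rm-ANCOVA): mixed model with random intercepts per participant.
import statsmodels.formula.api as smf

# Centering as in the sketch above, plus effect-coded factors and the mean-centered covariate.
long["credibility_c"] = long["credibility"] - long.groupby("theory")["credibility"].transform("mean")
long["cm_c"] = long["conspiracy_mentality"] - long["conspiracy_mentality"].mean()
long["exp_c"] = long["explanatory"].map({"low": -0.5, "high": 0.5})
long["thr_c"] = long["threat"].map({"low": -0.5, "high": 0.5})

mixed = smf.mixedlm("credibility_c ~ exp_c * thr_c * cm_c",
                    data=long, groups=long["subject"]).fit()
print(mixed.summary())  # exp_c:cm_c and thr_c:cm_c are the moderation terms (H4, H5)
```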

General Discussion

In two experimental online studies, we examined whether certain content features of conspiracy theories about real-world phenomena affected their perceived credibility. Specifically, we tested whether conspiracy theories were more credible if they elaborated on the phenomena’s explanation or emphasized the threat posed in the respective situations (or both). Furthermore, we tested whether the effect of explanatory and threat-related information was contingent on participants’ predisposition to believe in conspiracies.

In both studies, it was not the specific information that determined the perceived credibility of the theories but participants’ conspiracy mentality. In other words, it did not matter whether participants merely received a short, unfounded conspiracy claim or a more thorough elaboration on this claim (i.e., providing six more arguments); participants based their credibility judgment solely on a set of a priori beliefs about the world. Importantly, even in the case of entirely novel conspiracy theories about which participants could not have had prior knowledge, additional information did not exert an effect. This finding underlines the importance of a general, stable conspiratorial mindset in understanding how people come to believe in conspiracy theories (Dagnall et al., 2015; Frenken & Imhoff, 2021; Imhoff & Bruder, 2014). Specifically, consistent with the notion that a conspiratorial mindset entails prejudice against powerful groups (Imhoff & Bruder, 2014) and a distrustful worldview constantly expecting deception (Wood et al., 2012), conspiracy beliefs might particularly represent disbelief in the official accounts rather than belief in specific conspiracy theories (Wood & Douglas, 2013). Recall that people turn to the best available explanation (Lipton, 2003), and the decision to deem a conspiracy theory credible thus not only depends on its explanatory value but also on the explanatory value of the competing official account. Consequently, people might turn to a conspiracy theory not because of specific information provided by the theory but rather because they discredit the official account as a function of their conspiratorial mindset. However, because we did not assess the credibility of the official accounts, this question remains open for future research.

Of course, the non-significant findings of the null hypothesis significance tests (NHST) alone are inconclusive as to whether there is indeed no effect of the additional information (i.e., true negative) or whether the studies were merely underpowered to uncover a non-negligible effect (i.e., false negative). According to effect-size sensitivity analysis, the studies had sufficient power (≥80%) to detect main and interaction effects sized ηp2 ≥ .055 (Study 1) and ηp2 ≥ .049 (Study 2). Consequently, the NHST analyses were underpowered to detect smaller effects. Because interaction effects, in general, tend to be smaller than main effects (Simonsohn, 2014), the NHST analyses might thus have been particularly limited in their ability to detect relevant interaction effects (i.e., explanatory power × threat, H3; explanatory power × conspiracy mentality, H4; threat × conspiracy mentality, H5). On the other hand, however, there are at least three reasons to assume that the null effects likely signify the absence of effects.

First, although sensitivity analyses indicated that the within-subjects design of Study 2 could detect effects of similar size to those detectable in the between-subjects design of Study 1, within-subjects designs typically yield larger effects than between-subjects designs (Lakens, 2013; Schäfer & Schwarz, 2019). Specifically, in a within-subjects design, the between-subjects variance is not included in the error term, and the same mean difference will thus yield a greater effect size as compared to a between-subjects design (if the measurements are positively correlated, which was the case in Study 2, .462 < r < .548). Hence, Study 2 was more likely to detect relevant mean differences that might have gone unnoticed in Study 1. Second, even descriptively, the observed pattern of means was predominantly not in the direction suggested by the hypotheses. For instance, in both studies, the mean credibility of conspiracy theories that combined both explanatory and threat-related information was even slightly lower than the credibility of the much shorter, baseline conspiracy theories without such information (see Tables 1 and 2). Finally, we supplemented the preregistered NHST analyses with Bayesian inference, which signified that the data were substantially more likely to be observed in the absence (H0) than in the presence of any of the proposed effects (H1–H5; between 4.00 and 49.38 times more likely, depending on the study and the specific effect).

Taken together, the findings of both studies suggest that the type of information did indeed not impact the credibility of the conspiracy theories. Yet, we cannot conclude that such information is generally irrelevant for the judgment of conspiracy theories’ credibility. Instead, there might be circumstances under which specific information affects the credibility judgments. For instance, one reason we did not find an effect of explanatory and threat-related information may lie in the topicality of the real-world phenomena we chose as material. Specifically, all conspiracy theories referred to phenomena that were temporally indefinite or covered a long period of time (e.g., vaccinations are carried out continuously) rather than relating to concrete single events as a starting point (e.g., terrorist attacks, death of celebrities). In contrast to one-time events, such long-lasting phenomena do not feature the sudden occurrence of a threatening, unclear situation. However, the explanatory value of conspiracy theories might be particularly decisive in situations characterized by a complex unclear causality, which leave people in a state of epistemic uncertainty and motivate sense-making processes (Douglas et al., 2017; van Prooijen & Douglas, 2017). Similarly, the identification and management of threat posed by conspirators might be particularly relevant after unclear, potentially threatening events when people lack control (van Prooijen, 2020; van Prooijen & Douglas, 2017). Consequently, participants might not have utilized the explanatory and threat-related information because they might not have felt the need to find an explanation or to regain control. Then, explanatory and threat-related information might exert an effect during the emergence of conspiracy theories immediately after the occurrence of an event, when such needs are highest. This is an interesting question for future research.

Regarding explanatory information, there is an additional argument for why such information might affect conspiracy theories’ credibility particularly shortly after the respective events occurred. Recall that people turn to the explanation that they believe best explains the available observations (Lipton, 2003) and that conspiracy theories compete against the accounts provided by official sources. In the present research, we utilized conspiracy theories about real-world phenomena for which official accounts were already well known. The information provided by the conspiracy theories (particularly the novel ones) thus had to be weighed against a long-established official explanation. In contrast, in the immediate aftermath of an event, both the official account and conspiracy theories are established in parallel, and explanatory information might thus be particularly relevant for people to decide which explanation to turn to.

Another potential reason we did not find an effect of explanatory or threat-related information might lie in the characteristics of the information provided. In terms of explanatory information, for instance, we had pretested that the statements possessed high explanatory power, but they were still confined to the subjective interpretation of real observations (e.g., that, in the past, other factors such as better hygiene have led to a reduction in infectious diseases and vaccinations are thus unnecessary). That is, we did not include faked objective evidence in favor of the conspiracy theories (e.g., that a whistleblower from the pharma industry published incriminating documents) as we did not want to presuppose that conspiracy theories always contain false evidence (and thus imply that they are always wrong, which they are not; see Grimes, 2016). Yet, faking evidence can be a strategy of conspiracy theories (or, in fact, any narrative), and not being tied to true observations equips one with many degrees of freedom for elaborating a broad, consistent explanation. Bost and Prunier (2013), for instance, found that conspiracy theories about fictive phenomena that contained strong fictive evidence were perceived to be more credible than theories that contained only weak fictive evidence. Furthermore, conspiracy theories might use epistemic defense mechanisms by including information that immunizes the theory against falsification (Boudry & Braeckman, 2012). For instance, the absence of evidence for a conspiracy theory can be interpreted as the result of a successful cover-up and, hence, as supporting evidence. Likewise, even contradicting evidence, such as eyewitness accounts or official documents, can be put as a deliberate distraction to conceal what really happened (Clarke, 2002). Including such defensive strategies might also foster the credibility of conspiracy theories. Therefore, it might be interesting to systematically test the effect of different types of explanatory information.

Regarding threat-related information, we had pretested that the statements described a great threat, but we assessed neither the perceived probability of these threats coming into effect nor whether participants indeed felt threatened. It is thus possible that participants read about the theoretical threat but deemed it very unlikely and decided to not utilize this information. Although human threat-detection seems to be rather insensitive to the perceived probability of threats (Blaine & Boyer, 2018; Slovic, 2016), future research may control for this variable to test for an effect of threat-related information. Furthermore, we employed a very broad definition of threat, and the information used in the conspiracy theories thus included various types of threat (i.e., physical, relational, material, social, value-based). Although past theoretical and empirical work does not allow any predictions about whether different types of threat affect the credibility of conspiracy theories to varying degrees, distinguishing between types of threat may be a worthwhile endeavor for future research.

Finally, whether the explanatory power and the posed threat of conspiracy theories affect their credibility might depend on the type of reasoning process readers apply (i.e., systematic vs. heuristic processing; see Chaiken, 1980). In the present study, to reduce the risk that lacking effects are solely the result of inattentive reading, we included an attention check before reading the conspiracy theories, which has been shown to induce systematic processing (Hauser & Schwarz, 2015). On the one hand, given that our participants were indeed motivated to process information systematically (i.e., to carefully evaluate the given evidence), it is particularly surprising that explanatory information had no effect. On the other hand, however, the affect-laden quality of threat-related information might be particularly persuasive during heuristic processing (Chaiken, 1980). Therefore, future research is needed to examine whether readers’ reasoning style moderates the effect of content features (see also Swami et al., 2014, for findings that analytical thinking generally reduces conspiracy belief).

To conclude, the present research adds to a handful of studies that examined the credibility of conspiracy theories considering both person characteristics (i.e., conspiracy mentality) and the content of the theories themselves (i.e., explanatory and threat-related information). Despite all limitations, our results clearly show that there is no mechanistic, almost automatic effect of merely adding explanatory or threat-related information. Findings also support the significance of people’s conspiratorial predisposition in predicting the credibility of specific conspiracy theories. Further research is needed to explore whether there are circumstances under which specific types of information affect the perceived credibility of (certain types of) conspiracy theories.

Notes

1Recent meta-analytical findings by Stojanov and Halberstadt (2020) suggest that manipulations of control only affected people’s belief in specific conspiracy theories but not their general conspiracy beliefs. However, as we propose that the control deprivation due to a threatening situation affects the belief in specific conspiracy theories that arise in response to this threat, the findings by Stojanov and Halberstadt (2020) do not contradict our reasoning. 

2For further, unrelated studies, we designed and pretested several additional statements. A list of all statements can be found on OSF. 

3We initially intended to manipulate explanatory power and posed threat by presenting an equal number of statements that were either low in both content features, high in only one feature, or high in both features. Unfortunately, however, the pretest indicated that there were neither enough statements that were both low in explanatory power and low in threat nor enough statements that were high in both features. Therefore, to achieve the greatest possible feature manipulation, we instead decided to either present or not present highly explanatory [threat-related] statements (thus accepting that the feature manipulation would covary with the length of the text version). Also, when using realistic and relevant material, it might be hardly possible to construct statements that are highly threat-related without being explanatory at least to some degree, and vice versa (e.g., that a secret military facility stores secret nuclear weapons is a major threat, but it also implies a potential reason for the existence of the secret facility). To nevertheless clearly distinguish between the highly explanatory and the highly threat-related statements, we defined that the explanatory statements should not reach a critical level of threat (i.e., rating ‘4 – high threat’) and that the threat-related statements should not reach a critical level of explanatory power (i.e., rating ‘4 – high explanatory power’). Unfortunately, a stricter cut-off criterion (e.g., rating ‘3’) would have resulted in the exclusion of too many statements. Thus, the chosen criterion is a compromise between a highly discriminative manipulation and a high number of realistic statements. 

4The minimum sample size was determined by several criteria: First, we computed a priori power analyses with G*Power, which yielded N > 128 to detect moderate effects with sufficient power (ANOVA: fixed effects, special, main effects and interactions, f = .25, α = .05, 1–β = .80, dfnum = 1, 4 groups). Second, H4 and H5 rely on correlation estimates, which can be inaccurate for small samples. Schönbrodt and Perugini (2013) recommend N = 250 for typical research scenarios, but the necessary sample size for stable correlation coefficients depends on the true effect size as well as the operationalization of ‘stability’. Finally, we were constrained by limited resources (i.e., in Study 1, the response rate of the participant pool is comparably low for long online studies, and in Study 2, the financial resources for the MTurk acquisition were limited). In sum, we decided to preregister N > 150, which is a tradeoff between feasibility and correlation coefficient stability, but it also exceeds the N suggested by the a priori power analyses. 

5Furthermore, to have a more direct measure of how credible the explanatory and threat-related statements are, we included a fifth condition (n = 32) in which participants assessed the credibility of each individual statement (vs. the overall credibility of the whole conspiracy theory in the main conditions). Analyses can be found on OSF. 

6G*Power is not suitable to perform power analyses regarding the covariate-factor interaction of ANCOVAs. However, the planned 2 × 2 ANCOVA is equivalent to a linear multiple regression model with explanatory power, threat, conspiracy mentality, and the interaction terms explanatory power × threat, conspiracy mentality × explanatory power, conspiracy mentality × threat, and conspiracy mentality × explanatory power × threat as predictors. We thus calculated the sensitivity to detect the interaction of type of information with conspiracy mentality based on this regression logic. 

7We additionally repeated the main analyses with participants’ intention to talk about the conspiracy theories as an alternative dependent variable, which, however, yielded the exact same pattern as the main analyses. Furthermore, because the material included two already-existing and two novel conspiracy theories, we could test whether the effects of explanatory or threat-related information were contingent on the novelty of the theories. However, when additionally including the novelty of the conspiracy theory (already-existing vs. novel) as a within-subjects factor, we found no support for a moderating effect, all Fs < 0.92, ps > .341. The full analyses can be found on OSF. 

8We followed the recommendation by Giner-Sorolla et al. (2020) to select the option ‘effect sizes as in SPSS’ and to enter the dfnum of the desired effect plus one as the number of measures when computing power analyses for repeated measures ANOVAs. 

9As there is, to our knowledge, no conventional method for performing power analysis for the covariate-factor interaction in rmANCOVA designs, we conservatively approximated the sensitivity to detect the interaction of type of information with conspiracy mentality with a 2 (explanatory power) × 2 (posed threat) × 2 (conspiracy mentality) mixed ANOVA, hence, treating the continuous covariate conspiracy mentality as a two-level between-subjects factor (low vs. high conspiracy mentality). We again selected the option ‘effect sizes as in SPSS’ and entered dfnum plus one as the number of measures, as suggested by Giner-Sorolla et al. (2020). 

10We additionally repeated the main analyses with participants’ intention to talk about the conspiracy theories as an alternative dependent variable, which, however, yielded the exact same pattern as the main analyses. The full analyses can be found on OSF. 

Ethics and Consent

All procedures were in accordance with the ethical guidelines specified in the APA Code of Conduct as well as the authors’ national ethics guidelines.

Acknowledgements

The authors would like to thank Carin Molenaar for proofreading the English translation of the study material for Study 2.

Funding Information

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Marcel Meuer: Conceptualization, Methodology, Software, Validation, Formal Analysis, Investigation, Resources, Data Curation, Writing – Original Draft, Visualization, Project Administration. Aileen Oeberst: Conceptualization, Methodology, Writing – Review & Editing, Supervision. Roland Imhoff: Conceptualization, Methodology, Writing – Review & Editing, Supervision.

References

  1. Abalakina-Paap, M., Stephan, W. G., Craig, T., & Gregory, W. L. (1999). Beliefs in conspiracies. Political Psychology, 20(3), 637–647. DOI: https://doi.org/10.1111/0162-895X.00160 

  2. Aichholzer, J., & Zeglovits, E. (2015). Balancierte Kurzskala autoritärer Einstellungen (B-RWA-6) [Balanced short scale of authoritarian attitudes]. Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). DOI: https://doi.org/10.6102/zis239 

  3. Bebbington, K., MacLeod, C., Ellison, T. M., & Fay, N. (2017). The sky is falling: Evidence of a negativity bias in the social transmission of information. Evolution and Human Behavior, 38(1), 92–101. DOI: https://doi.org/10.1016/j.evolhumbehav.2016.07.004 

  4. Blaine, T., & Boyer, P. (2018). Origins of sinister rumors: A preference for threat-related material in the supply and demand of information. Evolution and Human Behavior, 39(1), 67–75. DOI: https://doi.org/10.1016/j.evolhumbehav.2017.10.001 

  5. Bogart, L. M., Glenn, W., Galvan, F. H., & Banks, D. (2010). Conspiracy beliefs about HIV are related to antiretroviral treatment nonadherence among African American Men with HIV. Journal of Acquired Immune Deficiency Syndromes, 53(5), 648–655. DOI: https://doi.org/10.1097/QAI.0b013e3181c57dbc 

  6. Bogart, L. M., & Thorburn, S. (2005). Are HIV/AIDS conspiracy beliefs a barrier to HIV prevention among African Americans? Journal of Acquired Immune Deficiency Syndromes, 38(2), 213–218. DOI: https://doi.org/10.1097/00126334-200502010-00014 

  7. Bohn, R., & Short, J. E. (2012). Measuring consumer information. International Journal of Communication, 6, 980–1000. https://ijoc.org/index.php/ijoc/article/viewFile/1566/743 

  8. Bost, P. R., & Prunier, S. G. (2013). Rationality in conspiracy beliefs: The role of perceived motive. Psychological Reports, 113(1), 118–128. DOI: https://doi.org/10.2466/17.04.PR0.113x17z0 

  9. Boudry, M., & Braeckman, J. (2012). How convenient! The epistemic rationale of self-validating belief systems. Philosophical Psychology, 25(3), 341–364. DOI: https://doi.org/10.1080/09515089.2011.579420 

  10. Bowman, K., & Rugg, A. (2013). Public opinion on conspiracy theories. American Enterprise Institute for Public Policy Research. https://www.aei.org/wp-content/uploads/2013/11/-public-opinion-on-conspiracy-theories_181649218739.pdf 

  11. Boyer, P., & Parren, N. (2015). Threat-related information suggests competence: A possible factor in the spread of rumors. PLoS ONE, 10(6), e0128421. DOI: https://doi.org/10.1371/journal.pone.0128421 

  12. Brown, D. (2003). The Da Vinci code. Doubleday. 

  13. Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752–766. DOI: https://doi.org/10.1037/0022-3514.39.5.752 

  14. Clarke, S. (2002). Conspiracy theories and conspiracy theorizing. Philosophy of the Social Sciences, 32(2), 131–150. DOI: https://doi.org/10.1177/004931032002001 

  15. Dagnall, N., Drinkwater, K., Parker, A., Denovan, A., & Parton, M. (2015). Conspiracy theory and cognitive style: A worldview. Frontiers in Psychology, 6, 206. DOI: https://doi.org/10.3389/fpsyg.2015.00206 

  16. Dalbert, C., Montada, L., & Schmitt, M. (1987). Glaube an eine gerechte Welt als Motiv: Validierungskorrelate zweier Skalen [Belief in a just world: Validation correlates of two scales]. Psychologische Beiträge, 29, 596–615. http://hdl.handle.net/20.500.11780/743 

  17. Douglas, K. M., Sutton, R. M., Callan, M. J., Dawtry, R. J., & Harvey, A. J. (2016). Someone is pulling the strings: Hypersensitive agency detection and belief in conspiracy theories. Thinking & Reasoning, 22(1), 57–77. DOI: https://doi.org/10.1080/13546783.2015.1051586 

  18. Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26(6), 538–542. DOI: https://doi.org/10.1177/0963721417718261 

  19. Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40(S1), 3–35. DOI: https://doi.org/10.1111/pops.12568 

  20. Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. DOI: https://doi.org/10.3758/BF03193146 

  21. Franks, B., Bangerter, A., & Bauer, M. W. (2013). Conspiracy theories as quasi-religious mentality: An integrated account from cognitive science, social representations theory, and frame theory. Frontiers in Psychology, 4, 424. DOI: https://doi.org/10.3389/fpsyg.2013.00424 

  22. Frenken, M., & Imhoff, R. (2021). A uniform conspiracy mentality or differentiated reactions to specific conspiracy beliefs? Evidence from latent profile analyses. Department of Social and Legal Psychology, Johannes Gutenberg-University Mainz. 

  23. Gadarian, S. K., & Albertson, B. (2014). Anxiety, immigration, and the search for information. Political Psychology, 35(2), 133–164. DOI: https://doi.org/10.1111/pops.12034 

  24. Gebauer, F., Raab, M. H., & Carbon, C.-C. (2016). Conspiracy formation is in the detail: On the interaction of conspiratorial predispositions and semantic cues. Applied Cognitive Psychology, 30(6), 917–924. DOI: https://doi.org/10.1002/acp.3279 

  25. Giner-Sorolla, R., Carpenter, T., Lewis, N. A., Montoya, A. K., Aberson, C. L., Bostyn, D. H., Conrique, B. G., Ng, B. W., Reifman, A., Schoemann, A. M., & Soderberg, C. (2020). Power to detect what? Considerations for planning and evaluating sample size. Open Science Framework. https://osf.io/d3v8t/download 

  26. Gopnik, A. (2000). Explanation as orgasm and the drive for causal knowledge: The function, evolution, and phenomenology of the theory formation system. In F. C. Keil & R. A. Wilson (Eds.), Explanation and cognition (pp. 299–323). The MIT Press. 

  27. Graumann, C. F., & Moscovici, S. (1987). Changing conceptions of conspiracy. Springer. DOI: https://doi.org/10.1007/978-1-4612-4618-3 

  28. Greenwald, A. G. (1976). Within-subjects designs: To use or not to use? Psychological Bulletin, 83(2), 314–320. DOI: https://doi.org/10.1037/0033-2909.83.2.314 

  29. Grimes, D. R. (2016). On the viability of conspiratorial beliefs. PLoS ONE, 11(1), e0147905. DOI: https://doi.org/10.1371/journal.pone.0147905 

  30. Grzesiak-Feldman, M. (2013). The effect of high-anxiety situations on conspiracy thinking. Current Psychology, 32(1), 100–118. DOI: https://doi.org/10.1007/s12144-013-9165-6 

  31. Hagmayer, Y., & Sloman, S. A. (2009). Decision makers conceive of their choices as interventions. Journal of Experimental Psychology: General, 138(1), 22–38. DOI: https://doi.org/10.1037/a0014585 

  32. Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78(1), 81–91. DOI: https://doi.org/10.1037//0022-3514.78.1.81 

  33. Hauser, D. J., & Schwarz, N. (2015). It’s a trap! Instructional manipulation checks prompt systematic thinking on “tricky” tasks. SAGE Open. DOI: https://doi.org/10.1177/2158244015584617 

  34. Imhoff, R., & Bruder, M. (2014). Speaking (un-)truth to power: Conspiracy mentality as a generalised political attitude. European Journal of Personality, 28(1), 25–43. DOI: https://doi.org/10.1002/per.1930 

  35. Imhoff, R., Dieterle, L., & Lamberty, P. (2021). Resolving the puzzle of conspiracy worldview and political activism: Belief in secret plots decreases normative but increases nonnormative political engagement. Social Psychology and Personality Science, 12(1), 71–79. DOI: https://doi.org/10.1177/1948550619896491 

  36. Imhoff, R., & Lamberty, P. (2018). How paranoid are conspiracy believers? Toward a more fine-grained understanding of the connect and disconnect between paranoia and belief in conspiracy theories. European Journal of Social Psychology, 48(7), 909–926. DOI: https://doi.org/10.1002/ejsp.2494 

  37. Imhoff, R., & Lamberty, P. (2020a). A bioweapon or a hoax? The link between distinct conspiracy beliefs about the Coronavirus disease (COVID-19) outbreak and pandemic behavior. Social Psychological and Personality Science, 11(8), 1110–1118. DOI: https://doi.org/10.1177/1948550620934692 

  38. Imhoff, R., & Lamberty, P. (2020b). Conspiracy beliefs as psycho-political reactions to perceived power. In M. Butter & P. Knight (Eds.), Routledge handbook of conspiracy theories. Routledge. DOI: https://doi.org/10.4324/9780429452734-2_4 

  39. Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. The Journal of Politics, 74(3), 672–684. DOI: https://doi.org/10.1017/S0022381612000187 

  40. Johnstone, L., Boyle, M., Cromby, J., Dillon, J., Harper, D., Kinderman, P., Longden, E., Pilgrim, D., & Read, J. (2018). The power threat meaning framework: Towards the identification of patterns in emotional distress, unusual experiences and troubled or troubling behavior, as an alternative to functional psychiatric diagnosis. The British Psychological Society. www.bps.org.uk/PTM-Main 

  41. Jolley, D., Douglas, K. M., & Sutton, R. M. (2018). Blaming a few bad apples to save a threatened barrel: The system-justifying function of conspiracy theories. Political Psychology, 39(2), 465–478. DOI: https://doi.org/10.1111/pops.12404 

  42. Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96(3), 109–126. DOI: https://doi.org/10.2307/2564659 

  43. Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. DOI: https://doi.org/10.1037/0033-2909.108.3.480 

  44. Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. DOI: https://doi.org/10.3389/fpsyg.2013.00863 

  45. Lipton, P. (2003). Inference to the best explanation. Routledge. DOI: https://doi.org/10.4324/9780203470855 

  46. Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences, 10(10), 464–470. DOI: https://doi.org/10.1016/j.tics.2006.08.004 

  47. Lombrozo, T. (2016). Explanatory preferences shape learning and inference. Trends in Cognitive Sciences, 20(10), 748–759. DOI: https://doi.org/10.1016/j.tics.2016.08.001 

  48. Lyons, B., Merola, V., & Reifler, J. (2019). Not just asking questions: Effects of implicit and explicit conspiracy information about vaccines and genetic modification. Health Communication, 34(14), 1741–1750. DOI: https://doi.org/10.1080/10410236.2018.1530526 

  49. Meuer, M., & Imhoff, R. (2021). Believing in hidden plots is associated with decreased behavioral trust: Conspiracy belief as greater sensitivity to social threat or insensitivity towards its absence? Journal of Experimental Social Psychology, 93, 104081. DOI: https://doi.org/10.1016/j.jesp.2020.104081 

  50. Mirabile, P., & Horne, Z. (2019). Explanatory virtues and belief in conspiracy theories. PsyArXiv. DOI: https://doi.org/10.31234/osf.io/5cu2g 

  51. Neuberg, S. L., Kenrick, D. T., & Schaller, M. (2011). Human threat management systems: Self-protection and disease avoidance. Neuroscience and Biobehavioral Reviews, 35(4), 1042–1051. DOI: https://doi.org/10.1016/j.neubiorev.2010.08.011 

  52. Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466–478. DOI: https://doi.org/10.1037//0096-3445.130.3.466 

  53. Oliver, J. E., & Wood, T. J. (2014a). Conspiracy theories and the paranoid style(s) of mass opinion. American Journal of Political Science, 58(4), 952–966. DOI: https://doi.org/10.1111/ajps.12084 

  54. Oliver, J. E., & Wood, T. J. (2014b). Medical conspiracy theories and health behaviors in the United States. JAMA Internal Medicine, 174(5), 817–818. DOI: https://doi.org/10.1001/jamainternmed.2014.190 

  55. Olson, J. M., & Janes, L. M. (2002). Vigilance for differences: Heightened impact of differences on surprise. Personality and Social Psychology Bulletin, 28(8), 1084–1093. DOI: https://doi.org/10.1177/01461672022811007 

  56. Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Psychology, 4, 453. DOI: https://doi.org/10.3389/fpsyg.2013.00453 

  57. Schäfer, T., & Schwarz, M. A. (2019). The meaningfulness of effect sizes in psychological research: Differences between sub-disciplines and the impact of potential biases. Frontiers in Psychology, 10, 813. DOI: https://doi.org/10.3389/fpsyg.2019.00813 

  58. Schaller, M., Park, J. H., & Müller, A. (2003). Fear of the dark: Interactive effects of beliefs about dangers and ambient darkness on ethnic stereotypes. Personality and Social Psychology Bulletin, 29(5), 637–649. DOI: https://doi.org/10.1177/0146167203029005008 

  59. Schönbrodt, F. D., & Perugini, M. (2013). At what sample sizes do correlations stabilize? Journal of Research in Personality, 47(5), 609–612. DOI: https://doi.org/10.1016/j.jrp.2013.05.009 

  60. Simonsohn, U. (2014, March 12). No-way interaction. Data Colada. http://datacolada.org/17. DOI: https://doi.org/10.15200/winn.142559.90552 

  61. Slovic, P. (2016). The perception of risk. Taylor and Francis. DOI: https://doi.org/10.4324/9781315661773 

  62. Stojanov, A., & Halberstadt, J. (2020). Does lack of control lead to conspiracy beliefs? A meta-analysis. European Journal of Social Psychology, 50(5), 955–968. DOI: https://doi.org/10.1002/ejsp.2690 

  63. Sullivan, D., Landau, M. J., & Rothschild, Z. K. (2010). An existential function of enemyship: Evidence that people attribute influence to personal and political enemies to compensate for threats to control. Journal of Personality and Social Psychology, 98(3), 434–449. DOI: https://doi.org/10.1037/a0017457 

  64. Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy, 17(2), 202–227. DOI: https://doi.org/10.1111/j.1467-9760.2008.00325.x 

  65. Swami, V., & Furnham, A. (2014). Political paranoia and conspiracy theories. In J.-W. Van Prooijen & P. Van Lange (Eds.), Power, politics, and paranoia: Why people are suspicious of their leaders (pp. 218–236). Cambridge University Press. DOI: https://doi.org/10.1017/CBO9781139565417.016 

  66. Swami, V., Voracek, M., Stieger, S., Tran, U. S., & Furnham, A. (2014). Analytical thinking reduces belief in conspiracy theories. Cognition, 133(3), 572–585. DOI: https://doi.org/10.1016/j.cognition.2014.08.006 

  67. Taleb, N. N. (2001). Fooled by randomness. Random House. 

  68. Taylor, S. E. (1991). Asymmetrical effects of positive and negative events: The mobilization-minimization hypothesis. Psychological Bulletin, 110(1), 67–85. DOI: https://doi.org/10.1037/0033-2909.110.1.67 

  69. Uscinski, J. E., Klofstad, C., & Atkinson, M. D. (2016). What drives conspiratorial beliefs? The role of informational cues and predisposition. Political Research Quarterly, 69(1), 57–71. DOI: https://doi.org/10.1177/1065912915621621 

  70. Uscinski, J. E., & Parent, J. M. (2014). American conspiracy theories. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199351800.001.0001 

  71. Van der Wal, R. C., Sutton, R. M., Lange, J., & Braga, J. P. (2018). Suspicious binds: Conspiracy thinking and tenuous perceptions of causal connections between co-occurring and spuriously correlated events. European Journal of Social Psychology, 48(7), 970–989. DOI: https://doi.org/10.1002/ejsp.2507 

  72. Van Prooijen, J.-W. (2020). An existential threat model of conspiracy theories. European Psychologist, 25(1), 16–25. DOI: https://doi.org/10.1027/1016-9040/a000381 

  73. Van Prooijen, J.-W., & Acker, M. (2015). The influence of control on belief in conspiracy theories: Conceptual and applied extensions. Applied Cognitive Psychology, 29(5), 753–761. DOI: https://doi.org/10.1002/acp.3161 

  74. Van Prooijen, J.-W., & Douglas, K. M. (2017). Conspiracy theories as part of history: The role of societal crisis situations. Memory Studies, 10(3), 323–333. DOI: https://doi.org/10.1177/1750698017701615 

  75. Van Prooijen, J.-W., Douglas, K. M., & De Inocencio, C. (2018). Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European Journal of Social Psychology, 48, 320–335. DOI: https://doi.org/10.1002/ejsp.2331 

  76. Van Prooijen, J.-W., & Jostmann, N. B. (2013). Belief in conspiracy theories: The influence of uncertainty and perceived morality. European Journal of Social Psychology, 43(1), 109–115. DOI: https://doi.org/10.1002/ejsp.1922 

  77. Van Prooijen, J.-W., & Van Vugt, M. (2018). Conspiracy theories: Evolved functions and psychological mechanisms. Perspectives on Psychological Science, 13(6), 770–788. DOI: https://doi.org/10.1177/1745691618774270 

  78. Wänke, M., & Hansen, J. (2015). Relative processing fluency. Current Directions in Psychological Science, 24(3), 195–199. DOI: https://doi.org/10.1177/0963721414561766 

  79. Wood, M. J., & Douglas, K. M. (2013). “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Frontiers in Psychology, 4, 409. DOI: https://doi.org/10.3389/fpsyg.2013.00409 

  80. Wood, M. J., Douglas, K. M., & Sutton, R. M. (2012). Dead and alive: Beliefs in contradictory conspiracy theories. Social Psychological and Personality Science, 3(6), 767–773. DOI: https://doi.org/10.1177/1948550611434786 

  81. Wyatt, E. (2005, November 4). ‘Da Vinci Code’ losing best-seller status. The New York Times. https://www.nytimes.com/2005/11/04/books/da-vinci-code-losing-bestseller-status.html 

  82. Zaller, J. R. (1992). The nature and origins of mass opinion. Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511818691 
