
Research article

Robustness Tests Replicate Corneille et al.’s (2020) Fake News by Repetition Effect

Authors:

Jérémy Béna, UCLouvain, BE
Olivier Corneille, UCLouvain, BE
Adrien Mierop, UCLouvain, BE
Christian Unkelbach, Social Cognition Center Cologne, University of Cologne, DE

Abstract

Corneille et al. (2020) found that repetition increases judgments that statements have been used as fake news on social media. They also found that repetition increases truth judgments and decreases falsehood judgments (i.e., two instantiations of the truth-by-repetition effect). These results supported an ecological explanation of the truth-by-repetition effect better than two alternative accounts. However, the first author of the present article found unsuspected programming issues in Corneille et al.’s experiments. These programming issues introduced confounds that may have been responsible for the results. To estimate whether Corneille et al.’s main findings and claims hold when correcting these issues, the current team agreed on two high-powered preregistered replications of Corneille et al.’s experiments (Ntotal = 540). The results replicate Corneille et al.’s findings, which are more consistent with an ecological account of repetition effects on judgment than the alternative accounts tested in the original publication.

How to Cite: Béna, J., Corneille, O., Mierop, A., & Unkelbach, C. (2022). Robustness Tests Replicate Corneille et al.’s (2020) Fake News by Repetition Effect. International Review of Social Psychology, 35(1), 19. DOI: http://doi.org/10.5334/irsp.683
Handling Editor: Marie-Pierre Fayant, Université Paris Cité, FR
Submitted on 26 Jan 2022; Accepted on 23 Aug 2022; Published on 12 Oct 2022

Being repeatedly exposed to the same piece of information typically increases its perceived truth (for a meta-analysis, see Dechêne et al., 2010; for overviews, see Brashier & Marsh, 2020; Unkelbach et al., 2019). Several explanations have been proposed to account for this ‘truth-by-repetition’ effect (recognition memory: Bacon, 1979; familiarity: Begg et al., 1992; coherent references in memory: Unkelbach & Rom, 2017; for an overview, see Unkelbach et al., 2019). A prominent explanation is processing fluency (Reber & Schwarz, 1999): repeated exposure to information makes the information’s processing more fluent (e.g., Feustel et al., 1983). There are three main accounts for why processing fluency due to repetition might increase people’s subjective truth judgments. First, fluency is a positive experience (e.g., Winkielman et al., 2003), and as truth is also positively connotated, the positive experience might amplify truth judgments (the fluency-positivity account; but see Unkelbach et al., 2011). Second, fluency might amplify any judgment, whether of truth or falsehood (the amplification account; Albrecht & Carbon, 2014; Landwehr & Eckmann, 2020). And third, fluency might be a valid ecological cue for truth in certain ecologies (the ecological account; e.g., Reber & Unkelbach, 2010).

For the ecological account, the information ecology is critical to people’s interpretation of fluency (e.g., Unkelbach & Greifeneder, 2013). We use the term ‘ecology’ in the tradition of Egon Brunswik (1955), who treated an ecology as the ‘objective, external potential offered to the organism’, which provides input that ‘exists prior to and regardless of its recognition or consumption by the responder’ (p. 198). In experiments, ecologies can be provided by the contexts of judgment, as noted by Unkelbach and Greifeneder (2013, pp. 20–21): ‘The fluency influence depends on the ecology in which a judgment is made; in experimental tasks, the ecology is often provided by the questions researchers are asking.’

For the ecological account, the truth-by-repetition effect is typically observed when participants are asked to judge truth without additional information because truth is more common than falsehood in people’s regular ecologies (see Reber & Unkelbach, 2010, for simulations of the influence of fluency in ecologies that differed in the factual percentage of true information that is communicated). However, in other ecologies, fluency is not necessarily interpreted as a cue for truth. For instance, when the judgment context refers to social media, an environment where fake news spreads widely (Del Vicario et al., 2016; Vosoughi et al., 2018; see also Juul & Ugander, 2021), the repetition-induced fluency may be used as a cue for ‘fake news’ (Corneille et al., 2020).

Corneille et al. (2020) recently tested these three explanations against each other. Across three experiments, they found evidence consistent with the ecological account. The strongest evidence came from two preregistered studies (Experiments 1 & 2). In these experiments, participants read statements in an exposure phase (e.g., ‘Babies have more bones than adults’). These statements were then presented again, intermixed with new ones (i.e., not displayed in the exposure phase), in a judgment task. Critically, participants did not indicate the statements’ truth but whether the statements had been previously used as fake news on social media. Corneille et al. found that repeated statements were more likely than new statements to be perceived as having been used as fake news on social media. This finding contradicts the fluency-positivity account, according to which repetition should reduce or have no effect on ‘fake news on social media’ judgments because ‘fake news’ is negatively connotated. As a result, Corneille et al.’s Experiments 1 and 2 better support the ecological than the fluency-positivity account. However, the amplification account could also explain this ‘fake news by repetition’ effect because it predicts that repetition-induced fluency would increase judgments on any dimension (whether truth, falsehood, or ‘fake news on social media’).

To test the ecological and amplification accounts against each other, Corneille et al. (2020, Experiment 3) had participants judge either the ‘falsehood’ (‘yes, false,’ ‘no, not false’) or the ‘truth’ (‘yes, true,’ ‘no, not true’) of repeated and new statements in an unspecified ecology. The data showed that repetition significantly increased perceived truth and significantly decreased perceived falsehood (although to a lesser extent). This result is incompatible with an amplification account, which would predict an increase of affirmative (‘yes’) responses in both conditions. Because the ecology is unspecified, the ecological account predicts the truth-by-repetition effect in the ‘judge truth’ condition, and it predicts that repetition should not increase ‘yes, false’ judgments in the ‘judge falsehood’ condition.

Across the three experiments and among the three fluency accounts Corneille et al. (2020) tested, the data best supported the ecological account. Experiments 1 and 2 contradicted the fluency-positivity account, and Experiment 3 contradicted the amplification account. This, however, is not to say that no alternative interpretation can be offered. We will come back to this important point in the General Discussion.

Corneille et al.’s (2020) results are important theoretically, as they help discriminate between competing fluency accounts. They are also important practically, as they may help predict the direction in which repetition influences judgments according to the information ecology. However, concerns regarding the procedural details of the three experiments reported by Corneille et al. (2020) cast doubt on the validity of their methods. To introduce these concerns, we first describe the typical truth-by-repetition procedure.

In a typical truth-by-repetition procedure (see, e.g., Unkelbach et al., 2019), participants first go through an exposure phase in which half of the statements are factually true, and half are factually false. In the exposure phase, a task is administered (e.g., a reading task; an interest rating task) so that participants process the statements, each displayed once. Then, a truth judgment task is administered, with statements presented in the exposure phase (the ‘repeated’ statements) and other statements, not displayed in the exposure phase (the ‘new’ statements). Often, there is the same number of repeated and new statements, and half of each is factually true. As a result, factual truth and repetition are orthogonal at the participant level. It is this typical procedure that Corneille et al. (2020) intended to use, with a ‘fake news on social media’ judgment task rather than a truth judgment task in their Experiments 1 and 2. For instance, Corneille et al. wrote that ‘The presentation phase involved 20 statements (i.e., 10 true and 10 false)’ and that ‘Participants judged 40 statements, 20 of them repeated, and 20 new; orthogonally, half were factually true or false’ (p. 3).

However, reanalyses of Corneille et al.’s (2020) publicly shared data pointed to programming issues that caused significant deviations from the original, intended procedure.1 Specifically, some statements were displayed more than once in the exposure phase: 20 statements were displayed, but not necessarily 20 different statements. As a result, fewer than 20 different statements were often displayed in the exposure phase. For instance, if one specific statement is displayed twice, only 19 different statements are presented. One important consequence is that the number of different repeated statements was often smaller than the number of new statements. For instance, if only 19 different statements were displayed in the exposure phase, there were 19 repeated and 21 new statements in the judgment task.
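To make the consequence concrete, the following minimal R sketch (our illustration; the original experiments were not programmed in R) shows how sampling statement indices with replacement shrinks the set of distinct repeated items:

```r
# Illustration only; not the original experiment code.
# Drawing 20 of 40 statement indices WITH replacement can select the
# same statement more than once, so fewer than 20 distinct statements
# end up in the exposure phase.
set.seed(683)
flawed  <- sample(1:40, size = 20, replace = TRUE)   # mimics the bug
correct <- sample(1:40, size = 20, replace = FALSE)  # intended behavior
length(unique(flawed))   # typically fewer than 20
length(unique(correct))  # always exactly 20
```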

Another deviation from the typical procedure was that Corneille et al. (2020) programmed factual truth to be orthogonal to repetition not at the participant level, but at the aggregated level. Orthogonality, however, was not achieved at either the participant or the aggregated level.2

Finally, a last concern is that only part of the responses from Experiments 1 and 2’s judgment phase were correctly encoded by the program.

These deviations are problematic for two reasons. First, the procedure Corneille et al. (2020) used is not the one they intended to use and described in their article. Second, and most importantly, the deviations in the three experiments and the encoding error in Experiments 1 and 2 do not allow for a sufficiently controlled test of their hypotheses. As a result, it is unknown whether testing Corneille et al.’s hypotheses in a more controlled, typical truth-by-repetition paradigm would yield the same evidence for the conclusions they drew from their results.

Of note, we see no reason why the issues we found should have artefactually created the results Corneille et al. (2020) found. We also do not see why these issues would have unduly supported the ecological account over alternative accounts such as the fluency-positivity and amplification accounts. We reasoned, however, that close replications of Corneille et al.’s Experiment 2 (itself a higher-powered replication of Experiment 1) and Experiment 3, after fixing the programming issues, were needed to test the robustness of the effects Corneille et al. found and, as a result, the strength of the support for the ecological account over the alternative explanations.

We conducted such replication studies at the request of an editor of the journal Cognition to test whether Corneille et al.’s (2020) results can be replicated when the study programs are corrected. The main conclusions of the present experiments are summarized in the corrigendum to Corneille et al. (2020; Corneille et al., 2022). Below, we fully report the two experiments. Of importance, we want to stress that the goal of the present replication studies was not to investigate the possible mechanisms responsible for the effects observed by Corneille et al. As we will point out in the General Discussion, the exact processes underlying the fake news by repetition effect remain open to investigation. This, however, should be investigated only if the findings of Corneille et al. were not artefactually produced by their methods which, as we noted above, were affected by programming issues. We reasoned that the first important step is to examine whether Corneille et al.’s findings survive a more rigorous test in the first place.

Consistent with the original publication, we predicted that repeated statements would be more often categorized as ‘Yes, used as Fake News on social media’ than new statements (Experiment 1) and that repetition would increase perceived truth but not increase perceived falsehood (Experiment 2). Should this be the case, it would not just support the conclusions reached in the original article (i.e., that the ecological account is better supported than the fluency-positivity and amplification accounts) but also support the robustness of these conclusions across testing conditions that substantially differ from each other. Hence, the present research should not be considered a mere replication of the original findings, but a replication of these findings under better-controlled conditions (i.e., a robustness rather than a replication test).

We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the two replication studies. The preregistration, experiment programs, data, and analyses are publicly available at https://osf.io/qzeas/.

Experiment 1: Replication of Corneille et al.’s Experiment 2

We closely replicated Corneille et al.’s (2020) Experiment 2 after fixing the programming issues we noted in the introduction (failed randomization in the exposure phase; non-orthogonality between repetition and factual truth; partial data encoding). In line with the ecological account and the predictions of Corneille et al., we predicted that repetition would increase the proportions of ‘fake news on social media’ judgments (i.e., the fake news by repetition effect). Following the ecological account’s rationale, the fake news by repetition effect would be observed because participants interpret repetition-induced fluency as a cue that information might be fake news, given that false and misleading information is common on social media. The amplification account also makes this prediction because it posits that fluency amplifies judgments on any dimension—here, ‘fake news on social media.’ On the other hand, replicating the fake news by repetition effect would contradict the fluency-positivity account. Again, according to this explanation, fluency amplifies truth judgments because both fluency and truth are positively connotated. Thus, fluency should not influence or should even reduce ‘fake news on social media’ judgments, as ‘fake news’ is negatively connotated.

Participants and Design

The design was a 2 (Repetition: Repeated vs. New) × 2 (Factual truth: True vs. False) design, with both factors manipulated within participants.

Based on the sample size of Corneille et al. (2020, Experiment 2, N = 152), we aimed for 160 participants. We performed a power analysis with the R package ‘Superpower’ (Lakens & Caldwell, 2021), which allows one to simulate statistical power in factorial designs. The R script of the power analyses is available at https://osf.io/gw4ke. We relied on the means and standard deviations Corneille et al. observed in their Experiment 1 (see Endnote 3) (‘used as fake news on social media’ judgments for repeated statements: M = .52; SD = .3; for new statements: M = .42; SD = .23). Alpha was set to .05 and the repeated-measures correlation was set to .5. The analysis indicated that a sample of N = 160 gives us ample power (1–β > .95) to find the main effect of repetition found by Corneille et al.; given Corneille et al.’s observed effect, 52 participants already yield 95% power. One hundred and sixty participants completed the experiment online on Prolific (65.63% female, seven not reported; Mage = 31.75, SDage = 12.37, one not reported). Participants were English speakers (as per Corneille et al., 2020) and had not taken part in previous truth-by-repetition studies we conducted. Participants were paid US $1.02 for completing the study.
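For concreteness, a simulation along these lines can be set up as in the following R sketch. This is a minimal reconstruction focusing on the repetition factor only; the exact specification in our script at https://osf.io/gw4ke may differ:

```r
library(Superpower)

# Two-level within-participant factor (Repetition: repeated vs. new);
# means and SDs taken from Corneille et al.'s (2020) Experiment 1,
# repeated-measures correlation set to .5.
design <- ANOVA_design(
  design     = "2w",
  n          = 160,           # target sample size
  mu         = c(.52, .42),   # 'fake news' judgment proportions
  sd         = c(.30, .23),
  r          = .5,
  labelnames = c("Repetition", "repeated", "new")
)

# Simulate power for the repetition effect at alpha = .05
ANOVA_power(design, alpha_level = .05, nsims = 1000)
```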

Materials and Procedure

Instructions and materials were identical to Corneille et al. (2020, Experiments 1 & 2). After providing their informed consent, participants first entered the exposure phase with the instruction to read 20 statements (10 true; 10 false, randomly selected from a list of 20 true and 20 false statements). Statements were sequentially displayed in the middle of the screen for 2,500 milliseconds (inter-trial time: 1,000 milliseconds). The statements displayed in the exposure phase were the ‘repeated’ statements. Next, participants entered the judgment phase, where the 20 repeated statements were mixed with 20 new statements (half true) and displayed in a random order. Participants indicated whether each statement ‘has been previously used as a Fake News on the social media’ by pressing the ‘y’ (yes) or ‘n’ (no) key. Participants were then thanked and debriefed. The complete instructions are available in Corneille et al.’s (2020) preregistration of Experiment 1 (https://osf.io/hsv9y/) and in the program of the present experiment. We programmed the experiment with lab.js (Henninger et al., 2021), and we used JATOS (Lange et al., 2015) to run the study online on Prolific.

The critical differences between the present replication study and Corneille et al.’s methodology are that (1) all participants saw equally many true and false statements in each repetition condition (vs. fewer factually false repeated than new statements in Corneille et al.); (2) each repeated statement was displayed only once in the exposure phase (vs. some statements being selected more than once in Corneille et al.); and (3) participants judged the same number of repeated and new statements (vs. more new than repeated statements in Corneille et al.).
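A minimal R sketch of this corrected, per-participant assignment logic (our illustration; the experiment itself was programmed in lab.js):

```r
# Per participant: draw 10 true and 10 false statements without
# replacement to serve as 'repeated'; the remaining 10 true and
# 10 false serve as 'new'. Repetition and factual truth are thus
# orthogonal for every participant.
true_ids  <- 1:20    # indices of the 20 factually true statements
false_ids <- 21:40   # indices of the 20 factually false statements

repeated <- c(sample(true_ids, 10), sample(false_ids, 10))
new      <- setdiff(c(true_ids, false_ids), repeated)

exposure_phase <- sample(repeated)          # each shown exactly once
judgment_phase <- sample(c(repeated, new))  # all 40, random order
```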

Results

As done by Corneille et al. (2020), we computed the proportions of ‘yes, used as Fake News on social media’ judgments at each Repetition × Factual truth level for each participant.4 We used R (R Core Team, 2021) and analyzed the data with a linear mixed-effects model using the ‘lme4’ R package (Bates et al., 2015; version 1.1-27.1). Repetition (Repeated vs. New) and Factual truth (True vs. False) were the fixed factors, and participants were entered as a random effect (by-participant random intercepts).
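In R, this model can be written as in the sketch below. This is a minimal reconstruction with hypothetical data-frame and variable names; the analysis scripts at https://osf.io/qzeas/ are authoritative:

```r
library(lme4)
library(lmerTest)  # adds F-tests for lmer models

# dat: one row per participant x Repetition x Factual truth cell;
# prop_yes = proportion of 'yes, used as fake news' judgments
m <- lmer(prop_yes ~ repetition * factual_truth + (1 | participant),
          data = dat)
anova(m)  # F-tests for both main effects and their interaction
```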

We replicated the results of Corneille et al. (2020): The proportion of ‘yes, used as Fake News on social media’ judgments was larger for repeated (M = .53; SD = .2) than new (M = .43; SD = .21) statements, F(1, 477) = 47.36, p < .001, η2p = .09, 90%CI = [.05, .13].5 The main results are displayed in Figure 1. Contrary to Corneille et al.’s findings, the effect of factual truth was also significant, F(1, 477) = 13.11, p < .001, η2p = .03, 90%CI = [.01, .06]: The proportion of ‘yes, used as Fake News on social media’ judgments was larger for true (M = .51; SD = .17) than false (M = .46; SD = .16) statements (but see Endnote 4 for a non-significant effect in an additional, non-preregistered generalized linear mixed-effects model). The interaction between Repetition and Factual truth was not significant, F(1, 477) = 0.83, p = .363, η2p = .002, 90%CI = [.00, .01].

Figure 1 

Proportions of ‘used as fake news on social media’ judgments as a function of Repetition (dashed horizontal line: no bias toward the ‘yes, fake news’ or the ‘no, not fake news’ side) and Factual truth in Experiment 1. The dots are participants’ scores (jittered). The error bars are the 95% confidence intervals, with the mean in between. The distributions are the kernel probability density of the data (trimmed to remain within the range of possible values, 0 to 1).

Discussion

Experiment 1 replicated the critical result observed by Corneille et al. (2020, Experiments 1 & 2). We found the ‘fake news by repetition effect’: Participants judged repeated statements as having been previously used as fake news on social media more than new statements. In information ecologies where fake news is likely to be repeated, fluency may be used as a cue for perceived fake news.

This fake news by repetition effect was predicted both by the ecological and the amplification fluency accounts. For the ecological account, the interpretation of fluency varies with the judgment context. When the task refers to social media (in which fake news is likely to be repeated), the fluency induced by repetition would be used as a cue for ‘fake news.’ For the amplification account, the mechanism is different: fluency would amplify any judgment, whether it is truth, ‘fake news,’ or falsehood. Of importance, the fluency-positivity account does not accommodate the fake news by repetition effect, as this account predicts that repetition should have no effect or should even reduce the proportions of ‘used as fake news on social media’ judgments.

In addition, we found that statements’ factual truth increased ‘used as fake news’ judgments (but see Endnote 4 for a different pattern in an additional, non-preregistered analysis). This result resembles a finding by Unkelbach and Stahl (2009, Experiment 2). Unkelbach and Stahl informed participants that all repeated statements in the exposure phase were false. They found that factually true statements were more likely to be judged as false than factually false statements, regardless of repetition. This follows because factually true information is more likely to be repeatedly encountered than false information and fluently processed as a result. If fluency signals falsehood, then true information should be judged as false more often than false information to the extent that it is not a well-known fact but information that might have been encountered before the experiment.

Of importance, observing the fake news by repetition effect does not allow contrasting the ecological and amplification accounts. Therefore, Corneille et al. conducted Experiment 3, asking participants to judge the truth or falsehood of repeated and new statements in a standard ecology (i.e., without referring to any context, whether social media or another one). Here, the two accounts make different predictions. The ecological account predicts that repetition should not increase ‘yes, false’ judgments because, in general ecologies, fluency would not be used as a cue for falsehood. The ecological account would be compatible with repetition either having no effect on or reducing the proportions of ‘yes, false’ judgments. Of importance, this amounts to the ecological account predicting an interaction between the judgment condition (judge ‘truth’ or ‘falsehood’) and repetition. The amplification account makes a different prediction: repetition should increase both ‘yes, true’ and ‘yes, false’ judgments (i.e., a main effect of Repetition). Corneille et al.’s Experiment 3 was critical to opposing the ecological and amplification accounts, but it had programming issues similar to those in their Experiments 1 and 2 (without the data encoding issue). We thus replicated Corneille et al.’s Experiment 3.

Experiment 2: Replication of Corneille et al.’s Experiment 3

Participants and Design

The design was a 2 (Repetition: Repeated vs. New) × 2 (Factual truth: True vs. False) × 2 (Judgment: Judge truth vs. Judge falsehood) design, with the first two factors manipulated within participants and the last factor manipulated between participants.

Three hundred eighty participants completed the experiment on Prolific (65.26% female, seven not reported; Mage = 31.99, SDage = 11.47, one not reported; n = 199 in the Judge truth condition and n = 181 in the Judge falsehood condition). Participants were English speakers and had not taken part in truth-by-repetition studies we conducted, as per Corneille et al. (2020). Participants were paid US $0.74 for completing the study.

We relied on the summary statistics observed by Corneille et al. in their Experiment 3 to estimate power; the R script of the power analyses is available at https://osf.io/gw4ke. As in Experiment 1, we used the R package ‘Superpower’ (Lakens & Caldwell, 2021). The power analyses indicated that a sample of N = 380 (n = 190 in each Judgment condition) gives us sufficient power (1–β > .95) to find the simple effect of Repetition Corneille et al. found in the ‘falsehood’ condition, which is our smallest effect of interest here (Cohen’s d = 0.393). We thus aimed for a sample size of 380 participants; the original sample size in Corneille et al.’s Experiment 3 was N = 200.

Materials and Procedure

Instructions and materials were identical to Corneille et al. (2020, Experiment 3) and highly similar to the present Experiment 1, with the following deviations: Participants were randomly allocated to one of the two Judgment conditions (Judge truth or Judge falsehood) in the judgment phase. Participants indicated whether each statement is true (Judge truth condition) or false (Judge falsehood condition) by pressing the ‘y’ (yes) or ‘n’ (no) key. The complete instructions are available in Corneille et al.’s (2020) preregistration of Experiment 3 (https://osf.io/bvfy9/) and in the program of the present experiment (see Experiment 1 for details).

Results

As done by Corneille et al. (2020), we analyzed the proportion of ‘yes’ judgments as a function of Repetition (within participants) and Judgment (between participants). We conducted the exact analysis they performed, which was a mixed ANOVA. As a result, the model differs from the one we tested in Experiment 1 (a linear mixed-effects model).6
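A sketch of this mixed ANOVA in R, using the ‘afex’ package as one possible implementation (we do not claim this is the package used in the original or present scripts; variable names are hypothetical):

```r
library(afex)

# dat2: one row per participant x Repetition cell; prop_yes =
# proportion of 'yes' judgments. Judgment (truth vs. falsehood)
# varies between participants, Repetition within participants.
aov_ez(id = "participant", dv = "prop_yes", data = dat2,
       within = "repetition", between = "judgment")
```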

We found a significant main effect of Repetition, F(1, 378) = 15.38, p < .001, η2p = .04, 90%CI = [.01, .08]. Overall, repeated statements (M = .57; SD = .24) were associated with more ‘yes’ responses (whether it is ‘yes, true’ [Judge truth] or ‘yes, false’ [Judge falsehood]) than new statements (M = .51; SD = .19). Proportions of ‘yes’ responses were higher in the Judge truth (M = .6; SD = .12) than in the Judge falsehood condition (M = .47; SD = .13), F(1, 378) = 102.6, p < .001, η2p = .21, 90%CI = [.16, .27].

Critically, as predicted, and replicating Corneille et al.’s (2020) results, we found a significant interaction between Repetition and Judgment on the proportions of ‘yes’ judgments, F(1, 378) = 50.04, p < .001, η2p = .12, 90%CI = [.07, .17] (see Figure 2 and Endnote 7). Repetition increased judgments of truth (Judge truth condition) but decreased judgments of falsehood (Judge falsehood condition). In the ‘Judge truth’ condition, participants judged repeated statements more often as true (M = .69; SD = .17) than new statements (M = .51; SD = .2), F(1, 198) = 66.52, p < .001, η2p = .25, 90%CI = [.17, .33]. We found the reverse effect in the ‘Judge falsehood’ condition: participants judged repeated statements less often as false (M = .44; SD = .23) than new statements (M = .5; SD = .18), F(1, 180) = 5.22, p = .023, η2p = .03, 90%CI = [.002, .08]. Put differently, the two simple effects of Repetition on the proportions of ‘yes’ judgments went in opposite directions in each Judgment condition. However, these tests do not tell us whether the two simple effects, beyond going in opposite directions, differ in magnitude.

Figure 2 

Proportions of ‘yes’ judgments as a function of Repetition (dashed horizontal line: no bias toward the ‘yes’ or the ‘no’ side) and Judgment condition in Experiment 2. The dots are participants’ scores (jittered). The error bars are the 95% confidence intervals, with the mean in between. The distributions are the kernel probability density of the data (trimmed to remain within the range of possible values, 0 to 1).

To test whether the magnitude of the effect of Repetition is the same in the Judge truth and Judge falsehood conditions, we repeated the ANOVA above, except that the dependent variable was the proportion of responses associated with truth judgments. In the Judge truth condition, a ‘true’ response is indicated by a ‘yes, true’ response, while in the Judge falsehood condition it is indicated by a ‘no, not false’ response. By estimating the effect of Repetition on these proportions, we were able to estimate whether the two simple effects differed in magnitude, besides going in opposite directions. The effect of Repetition was larger in the Judge truth condition than in the Judge falsehood condition, F(1, 378) = 12.83, p < .001, η2p = .03, 90%CI = [.01, .07]. The main effects of Repetition and Judgment were also significant, but less theoretically interesting for the present purpose (Repetition: F(1, 378) = 32.64, p < .001, η2p = .12, 90%CI = [.08, .17]; Judgment: F(1, 378) = 52.59, p < .001, η2p = .08, 90%CI = [.04, .13]).
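The recoding amounts to flipping the proportions in the Judge falsehood condition, as in this sketch (hypothetical variable names, continuing the previous example):

```r
# 'True'-equivalent responses: 'yes, true' in the Judge truth
# condition, 'no, not false' in the Judge falsehood condition.
dat2$prop_true <- ifelse(dat2$judgment == "falsehood",
                         1 - dat2$prop_yes,
                         dat2$prop_yes)

# Same mixed ANOVA as before, now on the recoded proportions;
# the Repetition x Judgment interaction tests whether the two
# simple effects differ in magnitude.
aov_ez(id = "participant", dv = "prop_true", data = dat2,
       within = "repetition", between = "judgment")
```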

Discussion

We replicated Corneille et al.’s Experiment 3: Repetition increased perceived truth and decreased perceived falsehood. This pattern of results is compatible with the ecological account and contradicts the amplification account, for which repetition should have increased both perceived truth and falsehood.

General Discussion

Overall, we replicated the critical findings of Corneille et al.’s (2020) Experiments 2 and 3. We found that repetition increased the likelihood that statements are judged as having been used as fake news in a social media information ecology (Experiment 1). When the information ecology was left unspecified, repetition increased perceived truth and, to a smaller extent, decreased perceived falsehood (Experiment 2). These results align with an ecological explanation of fluency effects (Unkelbach, 2006, 2007; Unkelbach & Greifeneder, 2013). Importantly, the present findings support neither the ‘fluency-positivity’ account (e.g., Winkielman et al., 2003; contradicted by the fake news by repetition effect, Experiment 1) nor the amplification account (e.g., Albrecht & Carbon, 2014; contradicted by the finding that repetition does not increase ‘yes, false’ judgments, Experiment 2). The ecological account was the only one of the three fluency accounts we tested that was left uncontradicted by Corneille et al.’s and the present experiments.

One interesting finding, deviating from Corneille et al. (2020), is that participants perceived factually true statements as more likely to have been used as fake news on social media than false statements, independent of repetition (Experiment 1). Note, however, that this effect was significant in the preregistered analysis (a linear regression with by-participant random intercepts) but not in an additional, non-preregistered analysis (a logistic regression with random intercepts for participants and statements, see Endnote 4). It is possible that making factual truth and repetition orthogonal for all participants (where Corneille et al. had more factually true repeated than factually true new statements at the aggregated level) increased the likelihood of detecting the effect of factual truth. In any case, this main effect of statements’ factual truth suggests that, at least in the present sample, participants had some previous knowledge of the statements—a prerequisite condition for factual truth to be a psychologically meaningful factor.

It may seem counterintuitive that participants perceived true statements as more likely to have been used as fake news on social media than false statements. As discussed above (see the discussion of Experiment 1), a fluency explanation, however, accounts for this result (see Unkelbach & Stahl, 2009, Experiment 2). True statements might have been more fluently processed than false ones, resulting in truth-induced fluency being used as a cue in line with the judgment instructions, i.e., responding that statements were used as fake news on social media.

Beyond these interesting findings, the present studies show the robustness of Corneille et al.’s (2020) findings. Psychology is undergoing a ‘credibility revolution’ (Vazire, 2018) in which replication studies often call into question past conclusions, even with designs very close to those of the original studies (e.g., Camerer et al., 2018; Open Science Collaboration, 2015). In this context, we believe that it is not trivial to find converging evidence in line with a theory (here, the ecological account) across experiments that significantly differ from each other. Reporting not just replication failures but also replication successes is a crucial endeavor to collectively gain scientific discernment (i.e., to better discriminate true from false effects). By confirming results and conclusions from Corneille et al. in more controlled experimental settings, the findings of the present two replication studies provide more support for the ecological theorization of repetition effects on truth judgments than for two alternative fluency accounts (fluency-positivity and amplification).

As we noted in the introduction, finding better support for the ecological account than for the two alternative fluency accounts we considered does not mean that other accounts would not explain the results better. We conducted close replication studies without the programming issues identified in Corneille et al. (2020) to ensure, as a first step, that the effects were not artefactually created by the specific methods the original authors used. To this end, we needed to use the same materials, instructions, and statistical analyses that Corneille et al. used. Now that we have fully replicated Corneille et al.’s main findings and can be more confident in the existence and interpretation of the effects, future research may investigate why they were observed. We offer some avenues below.

To test the fake news by repetition effect, Corneille et al. asked whether repeated and new statements ‘have been used as fake news on the social media.’ As noted above, we used the same question to best serve the present purpose of close replications. However, framing the question in the past tense can be consequential. This is because repeated statements can be correctly remembered, unlike new statements (repeated statements are ‘old,’ as they were seen in the exposure phase). As a result, it may be the case that participants inferred that repeated statements are more likely to ‘have been previously used’ as anything, whether real or fake news, and whether on social media or elsewhere. If so, the fake news by repetition effect would not be due to the social media context but to the past tense used in the instructions—hence, it would be a recognition effect. This ‘recognition account’ may be contrasted with the ecological account by varying the specific question participants are asked to answer. For instance, the reference to the past can be removed by asking participants if the statements ‘could be used as fake news on social media.’ By comparing the original and modified instructions, one could estimate how much the reference to the past inflates (or even explains) the fake news by repetition effect.

Another way to understand when the judgment context qualifies the direction of the effect of repetition on judgments is to identify which exact cues do or do not reverse this effect. We saw here that asking whether statements have been used as fake news on social media is enough to make repeated statements more likely than new ones to be judged as ‘having been used as fake news on social media.’ For the ecological account, this is because the judgment context refers to an ecology in which fake news is widespread and likely to be repeatedly encountered. At first sight, such an explanation seems contradicted by recent studies that implemented judgment contexts reminiscent of social media and still found the typical truth-by-repetition effect. For instance, the truth-by-repetition effect has been found with false and misleading information (see Pillai & Fazio, 2021) that can be repeatedly encountered on social media, such as fake news (Pennycook et al., 2018), conspiracy statements (Béna et al., 2022), and highly implausible statements (Lacassagne et al., 2022). Studies mimicking social media postings also replicated the truth-by-repetition effect (e.g., Nadarevic et al., 2020; Smelter & Calvillo, 2020). It is noteworthy, however, that although varying the statements and simulating social media postings may influence the judgment context, it does not change the task at hand. The main dependent variable was the same as in typical truth-by-repetition studies (i.e., truth judgments without reference to specific environments). Yet, altering the dependent variable is what we did in Experiment 1 to replicate Corneille et al. (2020, Experiments 1 and 2). These manipulations are difficult to compare because it remains uncertain whether items and contexts reminiscent of social media induce the same judgment context as altering the task participants are asked to perform (e.g., judging whether a statement can be used as fake news). More research is needed to understand which cues may reverse the truth-by-repetition effect or may be responsible for other puzzling effects such as the fake news by repetition effect.

Conclusion

Two preregistered studies fully replicated the effects reported by Corneille et al. (2020). The replications were motivated by programming issues in the original experiments, discovered after they were published. The present studies indicate that Corneille et al.’s results were not artefactually created by the specific designs they implemented. Rather, their effects and conclusions hold in the more controlled designs we used here. As a result, the ecological account, according to which the interpretation of fluency depends on the judgment context, comes out better supported than the alternative fluency accounts tested in the original and the present research. This account, however, remains open to future empirical challenges.

Data Accessibility Statement

The preregistration files, materials, datasets, and analysis scripts of the two experiments are publicly available at https://osf.io/qzeas.

Notes

1We strongly suspect the programming issues originate in Python-to-JavaScript conversion problems in the version of the OpenSesame/OSWeb program used in Corneille et al.’s studies. The issues went unnoticed by the authors because the R program designed and pretested for the analyses ran correctly on the data output generated by the online platform. Another issue was inherent in the program itself, which sought randomization of statements’ factual truth at the aggregated level (and failed to achieve it because of the abovementioned conversion issue) rather than at the participant level. More information on the issues is available at https://osf.io/asjfr.

2The reanalyses are available at https://osf.io/gx4pf. 

3Initially, we wanted to use the summary statistics of Corneille et al.’s (2020) Experiment 2 for the power analysis. This is what we indicated in the preregistration. However, we mistakenly used the summary statistics of Corneille et al.’s Experiment 1 rather than Experiment 2. Of importance, whether the power analysis is performed with the summary statistics of Corneille et al.’s Experiment 1 (‘fake news’ judgments for repeated statements: M = .52; SD = .3; for new statements: M = .42; SD = .23) or Experiment 2 (‘fake news’ judgments for repeated statements: M = .56; SD = .31; for new statements: M = .41; SD = .24) yields similar estimates. Based on Corneille et al.’s Experiment 2 summary statistics, N = 160 gives us 1–β > .95, and 50 participants already yield 1–β > .95. 

4Proportions of ‘True’ judgments are commonly computed to analyze the data of truth-by-repetition studies (see, e.g., Unkelbach & Rom, 2017; Unkelbach & Greifeneder, 2018). In the preregistered analyses, we applied this approach to the ‘Yes, used as fake news’ judgments. Proportions were computed as the number of ‘Yes, used as fake news’ judgments divided by the number of responses at each Repetition × Factual truth level. 
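For illustration, this aggregation can be done as in the following R sketch (hypothetical data-frame and variable names):

```r
library(dplyr)

# trials: one row per trial; response is 1 for a 'Yes, used as
# fake news' judgment and 0 otherwise
props <- trials %>%
  group_by(participant, repetition, factual_truth) %>%
  summarise(prop_yes = mean(response), .groups = "drop")
```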

Following a request of the Editor, we also conducted a non-preregistered generalized linear mixed-effects model (GLMM; a logistic mixed-effects regression) to analyze the categorical responses without first aggregating responses across statements. We modeled the categorical dependent variable (‘yes, used as fake news on the social media,’ ‘no, not used as fake news on the social media’) as a function of three fixed effects (Repetition and Factual truth, both mean-centered following Sommet & Morselli, 2017, and their interaction) and two random intercepts (participants and statements). We used the R packages ‘lme4’ (Bates et al., 2015; version 1.1-27.1), ‘sjPlot’ (Lüdecke, 2021; version 2.8.10), and ‘ggeffects’ (Lüdecke, 2018; version 1.1.1). The results are available at https://osf.io/gbqup. We report the odds ratios (OR) and their 95% confidence intervals (CI) for each effect below. When the CI does not include 1, we interpret the effect as significant; when it includes 1, the effect is not significant. We found a significant main effect of Repetition, OR = 1.61, 95%CI = [1.45, 1.79]. Contrary to the analysis reported in the main text, we did not find a significant effect of Factual truth, OR = 1.28, 95%CI = [0.92, 1.77]. The interaction between Repetition and Factual truth was not significant, OR = 1.16, 95%CI = [0.94, 1.43].
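A sketch of this GLMM specification in R (hypothetical variable names; mean-centering of the 0/1 predictors done by hand):

```r
library(lme4)

# trials: one row per trial; response is 1 ('yes, used as fake news')
# or 0; repeated and true are 0/1 codes for the two factors.
trials$rep_c   <- trials$repeated - mean(trials$repeated)
trials$truth_c <- trials$true     - mean(trials$true)

g <- glmer(response ~ rep_c * truth_c +
             (1 | participant) + (1 | statement),
           data = trials, family = binomial)
exp(fixef(g))  # odds ratios for the fixed effects
```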

5Please note that we consistently use F-values as test statistics and report the corresponding partial eta-squared values. In Experiment 2, we preregistered t-tests to test the simple effects, but we report their corresponding F-tests instead, for the reason indicated above. 

6As in Experiment 1, we additionally conducted a non-preregistered GLMM (a logistic mixed-effects model). We modeled the categorical dependent variable (‘yes,’ ‘no’) as a function of three fixed effects (Repetition, Judgment, both mean-centered, and their interaction) and two random intercepts (participants and statements). The results are available at https://osf.io/syjnz. The main effects of Repetition (OR = 1.29, 95%CI = [1.21, 1.38]) and Judgment (OR = 1.74, 95%CI = [1.56, 1.94]) were again significant. The critical interaction between Repetition and Judgment found in the mixed ANOVA was again significant, OR = 2.7, 95%CI = [2.37, 3.09]. In the Judge truth condition, the proportion of ‘true’ judgments was higher for repeated (.69; 95%CI = [.67, .72]) than new (.51; 95%CI = [.49, .54]) statements. In the Judge falsehood condition, the effect of repetition was reversed: the proportion of ‘true’ judgments was lower for repeated (.44; 95%CI = [.41, .47]) than new (.5; 95%CI = [.47, .53]) statements.

7Figure 2 and Shapiro-Wilk normality tests suggest that the data distribution at each level is not normal, even if it looks approximately normal. As ANOVAs are often considered robust to the violation of the normality assumption, we nonetheless conducted and reported the preregistered mixed ANOVA. 

Funding Information

This work was supported by an FSR Incoming Postdoctoral Fellowship [IPSY FSR22 MOVE] awarded to Jérémy Béna.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

J. Béna: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft. O. Corneille: Methodology, Resources, Writing – review & editing. A. Mierop: Methodology, Resources, Writing – review & editing. C. Unkelbach: Methodology, Resources, Writing – review & editing.

References

  1. Albrecht, S., & Carbon, C. C. (2014). The fluency amplification model: Fluent stimuli show more intense but not evidently more positive evaluations. Acta Psychologica, 148, 195–203. DOI: https://doi.org/10.1016/j.actpsy.2014.02.002 

  2. Bacon, F. T. (1979). Credibility of repeated statements: Memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5(3), 241–252. DOI: https://doi.org/10.1037/0278-7393.5.3.241 

  3. Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. DOI: https://doi.org/10.18637/jss.v067.i01 

  4. Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General, 121(4), 446–458. DOI: https://doi.org/10.1037/0096-3445.121.4.446 

  5. Béna, J., Rihet, M., Carreras, O., & Terrier, P. (2022, May 7). Repetition could increase the perceived truth of conspiracy theories. [Preprint]. DOI: https://doi.org/10.31234/osf.io/3gc6k 

  6. Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71(1), 499–515. DOI: https://doi.org/10.1146/annurev-psych-010419-050807 

  7. Brunswik, E. (1955). Representative design and probabilistic theory in a functional psychology. Psychological Review, 62(3), 193–217. DOI: https://doi.org/10.1037/h0047470 

  8. Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B. A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., Isaksson, S., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature human behaviour, 2(9), 637–644. DOI: https://doi.org/10.1038/s41562-018-0399-z 

  9. Corneille, O., Mierop, A., & Unkelbach, C. (2020). Repetition increases both the perceived truth and fakeness of information: An ecological account. Cognition, 205, 104470. DOI: https://doi.org/10.1016/j.cognition.2020.104470 

  10. Corneille, O., Mierop, A., & Unkelbach, C. (2022). Corrigendum to: Repetition increases both the perceived truth and fakeness of information: An ecological account [Cognition, 205, 2020, 1-6/104470]. Cognition, 220, 104996. DOI: https://doi.org/10.1016/j.cognition.2021.104996 

  11. Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14(2), 238–257. DOI: https://doi.org/10.1177/1088868309352251 

  12. Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., … Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. DOI: https://doi.org/10.1073/pnas.1517441113 

  13. Feustel, T. C., Shiffrin, R. M., & Salasoo, A. (1983). Episodic and lexical contributions to the repetition effect in word identification. Journal of Experimental Psychology: General, 112(3), 309–346. DOI: https://doi.org/10.1037/0096-3445.112.3.309 

  14. Henninger, F., Shevchenko, Y., Mertens, U. K., Kieslich, P. J., & Hilbig, B. E. (2021). lab.js: A free, open, online study builder. Behavior Research Methods. DOI: https://doi.org/10.3758/s13428-019-01283-5 

  15. Juul, J. L., & Ugander, J. (2021). Comparing information diffusion mechanisms by matching on cascade size. Proceedings of the National Academy of Sciences, 118(46), e2100786118. DOI: https://doi.org/10.1073/pnas.2100786118 

  16. Lacassagne, D., Béna, J., & Corneille, O. (2022). Is Earth a perfect square? Repetition increases the perceived truth of highly implausible statements. Cognition, 223, 105052. DOI: https://doi.org/10.1016/j.cognition.2022.105052 

  17. Lakens, D., & Caldwell, A. R. (2021). Simulation-based power analysis for factorial analysis of variance designs. Advances in Methods and Practices in Psychological Science, 4(1), 1–14. DOI: https://doi.org/10.1177/2515245920951503 

  18. Landwehr, J. R., & Eckmann, L. (2020). The nature of processing fluency: Amplification versus hedonic marking. Journal of Experimental Social Psychology, 90, 103997. DOI: https://doi.org/10.1016/j.jesp.2020.103997 

  19. Lange, K., Kühn, S., & Filevich, E. (2015). Correction: ‘Just another tool for online studies’ (JATOS): An easy solution for setup and management of web servers supporting online studies. PLOS ONE, 10(7), e0134073. DOI: https://doi.org/10.1371/journal.pone.0134073 

  20. Lüdecke, D. (2018). ‘ggeffects: Tidy data frames of marginal effects from regression models.’ Journal of Open Source Software, 3(26), 772. DOI: https://doi.org/10.21105/joss.00772 

  21. Lüdecke, D. (2021). sjPlot: Data visualization for statistics in social science. R package version 2.8.10. URL: https://CRAN.R-project.org/package=sjPlot 

  22. Nadarevic, L., Reber, R., Helmecke, A. J., & Köse, D. (2020). Perceived truth of statements and simulated social media postings: an experimental investigation of source credibility, repeated exposure, and presentation format. Cognitive Research: Principles and Implications, 5(1). DOI: https://doi.org/10.1186/s41235-020-00251-4 

  23. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 1–8. DOI: https://doi.org/10.1126/science.aac4716 

  24. Pennycook, G., Cannon, T. S., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. DOI: https://doi.org/10.1037/xge0000465 

  25. Pillai, R. M., & Fazio, L. K. (2021). The effects of repeating false and misleading information on belief. WIREs Cognitive Science, 12(6). DOI: https://doi.org/10.1002/wcs.1573 

  26. R Core Team. (2021). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. URL: https://www.R-project.org/ 

  27. Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth. Consciousness and Cognition, 8(3), 338–342. DOI: https://doi.org/10.1006/ccog.1999.0386 

  28. Reber, R., & Unkelbach, C. (2010). The epistemic status of processing fluency as source for judgments of truth. Review of Philosophy and Psychology, 1(4), 563–581. DOI: https://doi.org/10.1007/s13164-010-0039-7 

  29. Smelter, T. J., & Calvillo, D. P. (2020). Pictures and repeated exposure increase perceived accuracy of news headlines. Applied Cognitive Psychology, 34(5), 1061–1071. DOI: https://doi.org/10.1002/acp.3684 

  30. Sommet, N., & Morselli, D. (2017). Keep calm and learn multilevel logistic modeling: A simplified three-step procedure using Stata, R, Mplus, and SPSS. International Review of Social Psychology, 30, 203–218. DOI: https://doi.org/10.5334/irsp.90 

  31. Unkelbach, C. (2006). The learned interpretation of cognitive fluency. Psychological Science, 17(4), 339–345. DOI: https://doi.org/10.1111/j.1467-9280.2006.01708.x 

  32. Unkelbach, C. (2007). Reversing the truth effect: Learning the interpretation of processing fluency in judgments of truth. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 219–230. DOI: https://doi.org/10.1037/0278-7393.33.1.219 

  33. Unkelbach, C., Bayer, M., Alves, H., Koch, A., & Stahl, C. (2011). Fluency and positivity as possible causes of the truth effect. Consciousness and Cognition, 20(3), 594–602. DOI: https://doi.org/10.1016/j.concog.2010.09.015 

  34. Unkelbach, C., & Greifeneder, R. (2013). A general model of fluency effects in judgment and decision making. In C. Unkelbach & R. Greifeneder (Eds.), The experience of thinking: How the fluency of mental processes influences cognition and behaviour (pp. 11–32). New York: Psychology Press. DOI: https://doi.org/10.4324/9780203078938 

  35. Unkelbach, C., & Greifeneder, R. (2018). Experiential fluency and declarative advice jointly inform judgments of truth. Journal of Experimental Social Psychology, 79, 78–86. DOI: https://doi.org/10.1016/j.jesp.2018.06.010 

  36. Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019). Truth by repetition: explanations and implications. Current Directions in Psychological Science, 28(3), 247–253. DOI: https://doi.org/10.1177/0963721419827854 

  37. Unkelbach, C., & Rom, S. C. (2017). A referential theory of the repetition-induced truth effect. Cognition, 160, 110–126. DOI: https://doi.org/10.1016/j.cognition.2016.12.016 

  38. Unkelbach, C., & Stahl, C. (2009). A multinomial modeling approach to dissociate different components of the truth effect. Consciousness and Cognition, 18(1), 22–38. DOI: https://doi.org/10.1016/j.concog.2008.09.006 

  39. Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13, 411–417. DOI: https://doi.org/10.1177/1745691617751884 

  40. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. DOI: https://doi.org/10.1126/science.aap9559 

  41. Winkielman, P., Schwarz, N., Fazendeiro, T. A., & Reber, R. (2003). The hedonic marking of processing fluency: Implications for evaluative judgment. In J. Musch & K. C. Klauer (Eds.), The psychology of evaluation: Affective processes in cognition and emotion (pp. 189–217). Lawrence Erlbaum Associates Publishers. 
