Exploratory Reports at IRSP: Guidelines for Authors
Exploratory Reports (ERs) are a format for empirical submissions that address relatively open research questions, without strong a priori predictions or hypotheses. These studies are abductive (often starting with an observation) and inductive/hypothesis-generating (going from data to hypothesis). This means that authors can run as many analyses as they like on a dataset, as long as they report them openly. These analyses should, however, generate predictions, and in some cases these predictions can and should already be tested. At this stage, we are limiting the ER to two types: machine learning and cross-validation. (We include machine learning as a separate ER type even though it often involves cross-validation; it does not always, as in the case of conditional random forests or autoencoding.)
Cross-validation can be done using more traditional inferential statistics, machine learning, or another analysis approach. For research using cross-validation, we expect authors to submit a results-blind submission for the validation part of their manuscript. At least one validation set is required; a second validation set is highly encouraged. The analyses for the validation sets will be blinded to reduce publication bias. Authors are also asked not to analyze the data in their validation sets prior to submission. For those unfamiliar with exploratory research, we recommend reading Yarkoni and Westfall, viewing Rick Klein’s primer, and running through the tutorials Klein has made available; these tutorials include analysis scripts for cross-validation. An analysis script for a type of supervised machine learning (conditional random forests) applied in social psychology is available from IJzerman et al. (2018). Typical exploratory reports include multiple tests and variables that go beyond basic hypothesis testing.
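For readers who want a concrete starting point, the sketch below shows one way to partition a dataset into an exploratory set and an untouched hold-out (validation) set before any analysis is run. It is a minimal Python example using pandas and scikit-learn; the file names and the 50/50 split are our illustrative assumptions, not journal requirements.

```python
# Minimal sketch: split the data before any analysis so the hold-out
# (validation) set stays untouched until the results-blind stage.
# File names and split proportion are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("full_dataset.csv")

# A fixed random seed makes the split reproducible and auditable by reviewers.
exploratory, holdout = train_test_split(df, test_size=0.5, random_state=42)

exploratory.to_csv("exploratory_set.csv", index=False)
holdout.to_csv("holdout_set.csv", index=False)  # do not analyze before acceptance
```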
Central to ERs is the generation of hypotheses for confirmatory research. We therefore expect the discussions of ERs to include the following aspects:
- A hypothesis generated from the research
- The sample size needed to test the hypothesis generated from the ER (an illustrative calculation follows this list)
- A section constraining the generality of the authors’ hypothesis/hypotheses (see, e.g., Simons et al., 2018)
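As an illustration of the sample size point, the snippet below uses statsmodels to compute the per-group n needed to detect a hypothetical generated effect of d = 0.35 in a two-group design; the effect size, alpha, and power values are placeholders, not journal defaults.

```python
# Illustrative sample size calculation for a hypothesis generated from an ER.
# The effect size (d = 0.35), alpha, and power are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.35, alpha=0.05, power=0.90)
print(f"Required n per group: {n_per_group:.0f}")  # approximately 173 per group
```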
One of the ways we plan to reduce the workload for authors, editors, and reviewers is by having our editors create a project on the Open Science Framework (OSF) after a one-page overview is submitted to the journal. This will allow authors and reviewers to work more efficiently by adopting a transparent “research workflow”. All reviews and editorial letters will be stored and will be open to our readers. Initial submissions will be triaged by the editorial team for suitability in Stage 1. For this stage, authors are requested to e-mail (rips-irsp@ulb.ac.be) a one-page, bullet-pointed overview prior to submitting their ER for full review. We will invite authors of proposals that pass triage to submit a full manuscript for in-depth peer review (Stage 2).
Authors can collect their own data, but are also encouraged to use existing datasets (e.g., the ManyLabs datasets, see, e.g., 1 or 2; the Human Penguin Project; the European Social Survey; LISS Panel data; the Eurobarometer; the International Social Survey Programme; the British Household Panel Study; the World Values Survey; the American National Election Studies). A more comprehensive list of datasets can be found here; we are open to suggestions for other datasets to be advertised here.
Exploratory Reports Editors:
Hans IJzerman, Université Grenoble Alpes, France
Lorne Campbell, Western University, Canada
Exploratory Reports Editorial Review Board:
Thomas Pollet - Northumbria University, United Kingdom
Robert McIntosh - University of Edinburgh, Scotland
Samantha Joel - Western University, Canada
Rick Klein - Université Grenoble Alpes, France
Yizhar Lavner - Tel Hai College, Israel
Stage 1: One-page proposal of the ER
Authors are requested to e-mail (rips-irsp@ulb.ac.be) a one-page, bullet-point overview to the journal prior to submitting their ER for full review. One editor (Lorne Campbell or Hans IJzerman) will provide a quick turnaround on the suitability of the proposed ER for the journal. If the proposal is deemed suitable, one of the editors will create an Open Science Framework (OSF; www.osf.io) component where the author(s) can prepare their Stage 2 manuscript. Only after approval of Stage 1 can authors move on to Stage 2.
Page overview preparation guidelines – Stage 1
The one-page proposal should consist of (at most) ¾ of a page of bullet points about the research, plus one paragraph making a brief case for consideration of the ER. Authors should consider whether they will have the necessary resources to conduct the research, as this is a criterion at Stage 2. The proposal should be e-mailed to rips-irsp@ulb.ac.be.
Stage 2: Full manuscript submission and review
Stage 2 submissions will be prepared in manuscript form, together with the OSF project created by the editors after Stage 1 approval. The authors should prepare a brief cover letter for the journal submission system that includes the “view only” link to their project page. Once the study, or reanalysis, is complete, authors prepare and submit their manuscript for full review via https://www.rips-irsp.com/submit/start/, with the following additions:
Cover letter. The ER cover letter must confirm:
- That the manuscript includes a link to the public archive containing the anonymised study data, digital materials/code, and the laboratory log. The cover letter should state the page number in the manuscript that lists the URL.
- The page number that includes the hypothesis generated from the research.
- The page number that includes the necessary sample size to test the hypothesis generated from the research.
- The page number that includes the Constraints On Generality.
- Whether the manuscript is a machine learning or a cross-validation type of ER.
- In the case of cross-validation, confirmation that the authors have not yet analyzed the second partition of the data.
Manuscript:
The manuscript is comparable to most manuscripts written in APA style. It should include the following information:
Background, Rationale, and Methods:
- The background and rationale should provide justification for the proposed exploratory study, including the selection of variables, study design, and proposed analytic framework.
- The proposed methods and procedures should be sufficient to enable direct replication. Close replication is facilitated by creating an OSF project along the lines of our exploratory research template.
Exploratory Results & Discussion
- Authors should present their data fully, giving consideration to the most effective and comprehensive means of data visualisation.
- Authors should present their data openly, including apparent anomalies, and explore the robustness and limits of the main patterns of interest.
- Confidence or credibility intervals are encouraged, but should not be used to support binary existential claims. Such intervals can instead support estimates of roughly how much there is of something. The use of raw effect sizes, instead of or in addition to standardised effect sizes, is encouraged to inform interpretation (a brief estimation sketch follows this list).
- Exploratory results should be reported in one section, followed immediately by a section outlining the hypotheses generated from the research and a sample size calculation for follow-up research that takes into account the precision of the obtained effect size estimate.
- During cross-validation, hypotheses can be generated for testing in a second section entitled Confirmatory Results. To reduce the chance of publication bias, analyses of the hold-out set should not yet be conducted at this stage of the research process. The hold-out set should be described fully in the analyses section, with XX placeholders inserted in place of the results. Authors can choose to register their hypotheses on the Open Science Framework for the Confirmatory Results section. The results of the hold-out set will not be sent out for full review, but will be reviewed by the editor after the analyses are conducted, following acceptance of the manuscript with the blinded Confirmatory Results section.
- For cross-validation, authors will have to certify in their cover letter that they did not run any analyses on the second part of their data and that they only ran analyses specified in their pre-registration, similar to the process for our Confirmatory Reports.
- Parameter estimation rather than hypothesis testing, whether Bayesian or frequentist, is encouraged for the exploratory datasets. Nevertheless, if authors use null-hypothesis significance tests as heuristics, they are required to 1) report exact p-values to two decimal places and effect sizes, and 2) include a statement that p-values from their research cannot be used for meta-analyses.
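To make the estimation-focused reporting encouraged above concrete, here is a minimal sketch that reports a raw mean difference with a 95% confidence interval rather than a binary significance claim; the data are simulated and all condition and variable names are invented.

```python
# Hedged sketch of estimation-focused reporting: a raw mean difference
# with a 95% CI, computed on simulated placeholder data.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW

rng = np.random.default_rng(2018)
condition_a = rng.normal(loc=5.1, scale=1.2, size=80)  # simulated ratings
condition_b = rng.normal(loc=4.6, scale=1.3, size=80)

cm = CompareMeans(DescrStatsW(condition_a), DescrStatsW(condition_b))
low, high = cm.tconfint_diff(alpha=0.05, usevar="unequal")
diff = condition_a.mean() - condition_b.mean()
print(f"Raw mean difference = {diff:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```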
OSF Project
The OSF project created by the editors will include the following components:
Component 1: Laboratory log (if available) and digital study materials
- In the case of original data, authors are required to submit the digital study materials and a laboratory log.
- In the case of secondary data, authors are required to link to the codebooks and study materials made available with the original dataset.
Component 2: Submission of anonymised raw data or secondary data
- The default is that anonymised raw data and digital study materials are made freely available in a public repository/archive, with a link provided within the ER manuscript. Authors are free to connect any repository to the OSF that renders data and materials freely and publicly accessible and that provides a digital object identifier (DOI) to ensure that the data remain persistent, unique, and citable. Potential repositories include (but are not limited to) the OSF, Figshare, Harvard Dataverse, and Dryad. For a comprehensive list of available data repositories, see http://www.re3data.org/. Only in exceptional cases (and with proper justification and approval by the editors) may data be exempted from being made available; in such cases, some form of summary data should still be made available.
- Raw data must be accompanied by guidance notes (i.e., meta-data), where required, to assist other scientists in replicating the analysis pipeline. Authors are required to upload any relevant analysis scripts and other digital experimental materials that would assist in replication.
- Any supplementary figures, tables, or other text (such as supplementary methods) can either be included as standard supplementary information that accompanies the paper, or archived together with the data. Please note that the raw data themselves should be archived rather than submitted to the journal as supplementary material.
Component 3: Analysis Script
- The full analysis pipeline, including all preprocessing steps, and a precise description of all analyses. Analyses can be restricted to a well-written and well-commented script and do not have to be described separately in the Wiki of the component (a rough skeleton of such a script follows this list).
- Where appropriate, include proposed guidelines for how decisions will be made during the analytic process.
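As a rough sketch of what such a well-commented pipeline script might look like (the file name, variables, and exclusion rules below are hypothetical examples, not journal requirements):

```python
# Hypothetical skeleton of a Component 3 analysis script: every preprocessing
# decision is recorded inline so the pipeline can be rerun end to end.
import pandas as pd

df = pd.read_csv("exploratory_set.csv")

# Preprocessing decisions, documented where they happen:
df = df.dropna(subset=["wellbeing"])   # listwise deletion on the outcome
df = df[df["attention_check"] == 1]    # exclude failed attention checks
df["wellbeing_z"] = (df["wellbeing"] - df["wellbeing"].mean()) / df["wellbeing"].std()

# Exploratory analysis: zero-order correlations among candidate predictors.
print(df[["wellbeing_z", "social_network_size", "age"]].corr().round(2))
```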
Component 4: Planned Analyses for Hold-Out Dataset
- Hypotheses generated based on the exploratory dataset.
- Sensitivity analyses to determine the smallest effect size that can be detected given the sample size of the hold-out dataset, as well as the data-analytic technique proposed to test the hypotheses (see the sketch after this list).
- Machine learning often, but not always, requires cross-validation. For those kinds of machine learning where cross-validation is required, the same logic applies here. For some machine learning approaches (e.g., conditional random forests or autoencoding) we will not require a hold-out set. Authors will, however, need to justify why they do not have a hold-out set and, as in the cross-validation approach, they will still need to generate a hypothesis from their analyses.
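One way to run the sensitivity analysis described above is to solve for the smallest standardized effect detectable given the hold-out sample size; the numbers below (n = 150 per group, 90% power, alpha = .05, two-sample design) are hypothetical placeholders.

```python
# Illustrative sensitivity analysis: smallest standardized effect detectable
# with 90% power in a hypothetical hold-out set of n = 150 per group.
from statsmodels.stats.power import TTestIndPower

smallest_d = TTestIndPower().solve_power(nobs1=150, alpha=0.05, power=0.90)
print(f"Smallest detectable effect: d = {smallest_d:.2f}")
```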
Component 5: Reviewer Component
- Reviewers will be asked to use the commenting function on the OSF to give precise, concrete feedback on the individual components. This feedback will be used to address details that need to be fixed in the study. Reviewers will also be asked to upload a file to this component containing more general, higher-level feedback on the project as a whole. All reviews will be stored and open to our readers.
When evaluating the confirmatory hypotheses for the hold-out data in the cross-validation approach, reviewers will be asked to decide:
- Whether the stated hypotheses are reasonably generated based on the analyses.
- Whether the sample size calculation based on the exploratory analyses is reasonable.
- Whether the authors’ conclusions are justified given the data.
In contrast to Confirmatory Reports, reviewers will be informed that editorial decisions will be based on the perceived importance and novelty of the question being asked, but not on the conclusiveness of the results.