Rigor and reproducibility for data analysis and design in the behavioral sciences

Behav Res Ther. 2020 Mar:126:103552. doi: 10.1016/j.brat.2020.103552. Epub 2020 Jan 16.

Abstract

The rigor and reproducibility of scientific methods depend heavily on the appropriate use of statistical methods to answer research questions and to draw meaningful, accurate inferences from data. Increasing analytic complexity, and the growing value placed on novel statistical and methodological approaches, place greater emphasis on statistical review. We outline the controversies within the statistical sciences that threaten the rigor and reproducibility of research published in the behavioral sciences and discuss ongoing approaches to generating reliable and valid inferences from data. We outline nine major areas to consider when evaluating the rigor and reproducibility of published articles and apply this framework to the 116 Behaviour Research and Therapy (BRAT) articles published in 2018. The results of our analysis highlight a pattern of missing rigor and reproducibility elements, especially pre-registration of study hypotheses, links to statistical code and output, and explicit archiving or sharing of the data used in analyses. We recommend that reviewers assess these elements during peer review and that journals publish the results of these rigor and reproducibility ratings alongside manuscripts, to incentivize authors to include these elements in their work.

Keywords: Big data; P-hacking; Reliability; Reproducibility; Statistics.

MeSH terms

  • Behavioral Sciences*
  • Data Analysis*
  • Humans
  • Reproducibility of Results
  • Research Design*