Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals

J Appl Psychol. 2015 Jan;100(1):194-202. doi: 10.1037/a0036635. Epub 2014 Apr 14.

Abstract

Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers increasingly use bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that it tends to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit statistical power comparable to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials.
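
For orientation, the sketch below illustrates the kind of percentile-bootstrap test of the indirect effect (a*b) in a simple X -> M -> Y mediation model that the abstract discusses. It is an assumption-laden illustration, not the authors' supplemental R syntax: the sample size, effect sizes of 0.4 on both paths, and number of resamples are placeholders chosen for the example.

    ## Minimal sketch: percentile bootstrap CI for the indirect effect a*b
    ## in a simple X -> M -> Y model (illustrative values only).
    set.seed(1)
    n <- 50                                    # small sample in the 20-80 range
    X <- rnorm(n)
    M <- 0.4 * X + rnorm(n)                    # a-path (assumed effect size 0.4)
    Y <- 0.4 * M + rnorm(n)                    # b-path (assumed effect size 0.4)
    dat <- data.frame(X, M, Y)

    indirect <- function(d) {
      a <- coef(lm(M ~ X, data = d))["X"]      # a-path estimate
      b <- coef(lm(Y ~ M + X, data = d))["M"]  # b-path estimate, controlling for X
      unname(a * b)                            # indirect effect
    }

    B <- 5000                                  # number of bootstrap resamples
    boot_est <- replicate(B, indirect(dat[sample(n, replace = TRUE), ]))
    quantile(boot_est, c(0.025, 0.975))        # 95% percentile CI; the indirect
                                               # effect is declared significant
                                               # if the interval excludes 0

The decision rule in the last line (reject the null of no mediation when the percentile interval excludes zero) is the hypothesis test whose power and Type I error rate the article's simulations evaluate at small sample sizes.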

MeSH terms

  • Bayes Theorem*
  • Confidence Intervals*
  • Data Interpretation, Statistical*
  • Humans
  • Psychology, Applied / methods*
  • Sample Size