Being accurate about accuracy in verbal deception detection

PLoS One. 2019 Aug 8;14(8):e0220228. doi: 10.1371/journal.pone.0220228. eCollection 2019.

Abstract

Purpose: Verbal credibility assessments examine language differences to distinguish truthful from deceptive statements (e.g., allegations of child sexual abuse). The dominant approach to estimating a method's accuracy in psycholegal deception research to date (used in 81% of recent studies that report accuracy) is to find the optimal statistical separation between lies and truths within a single dataset. However, this procedure lacks safeguards against accuracy overestimation.

Method & results: A simulation study and empirical data show that this procedure produces overoptimistic accuracy rates that, especially in the small-sample studies typical of this field, yield misleading conclusions, to the point that a non-diagnostic tool can appear to be a valid one. Cross-validation is an easy remedy to this problem (see the sketch below).
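
The following is a minimal, hypothetical sketch (not the authors' code) of the problem described above: with a small sample and a truly non-diagnostic verbal cue, choosing the cutoff that best separates lies from truths in the same data yields an inflated accuracy rate, whereas a cross-validated estimate stays near chance. The sample size, cue distribution, and cutoff search are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 30                                   # small sample, typical of the field
    scores = rng.normal(size=n)              # verbal-cue scores; identical distribution for both groups
    labels = rng.integers(0, 2, size=n)      # 1 = truthful, 0 = deceptive (unrelated to scores)

    def best_cutoff_accuracy(train_scores, train_labels):
        """Return the cutoff and direction that maximise accuracy on the training data."""
        best = (None, 1, 0.0)                # (cutoff, direction, accuracy)
        for cutoff in train_scores:
            for direction in (1, -1):
                pred = (direction * train_scores >= direction * cutoff).astype(int)
                acc = (pred == train_labels).mean()
                if acc > best[2]:
                    best = (cutoff, direction, acc)
        return best

    # 1) Optimal in-sample separation (the dominant, over-optimistic procedure)
    _, _, in_sample_acc = best_cutoff_accuracy(scores, labels)

    # 2) Leave-one-out cross-validation: the cutoff is chosen without the held-out case
    hits = 0
    for i in range(n):
        train_scores, train_labels = np.delete(scores, i), np.delete(labels, i)
        cutoff, direction, _ = best_cutoff_accuracy(train_scores, train_labels)
        pred = int(direction * scores[i] >= direction * cutoff)
        hits += int(pred == labels[i])
    cv_acc = hits / n

    print(f"In-sample 'optimal' accuracy: {in_sample_acc:.2f}")  # typically well above .50
    print(f"Cross-validated accuracy:     {cv_acc:.2f}")         # hovers around chance (.50)

Because the cue carries no information about veracity, any accuracy above roughly .50 obtained by the in-sample search reflects overfitting of the cutoff to noise rather than diagnostic value; cross-validation exposes this.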

Conclusions: We caution psycholegal researchers to be more accurate about accuracy and propose guidelines for calculating and reporting accuracy rates.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Computer Simulation
  • Data Accuracy*
  • Deception*
  • Humans
  • Judgment
  • Language
  • Lie Detection / psychology*
  • Reproducibility of Results
  • Truth Disclosure
  • Verbal Behavior

Grants and funding

This work was supported by a grant from the Dutch Ministry of Security and Justice to BK. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.