Reported estimates of diagnostic accuracy in ophthalmology conference abstracts were not associated with full-text publication

J Clin Epidemiol. 2016 Nov;79:96-103. doi: 10.1016/j.jclinepi.2016.06.002. Epub 2016 Jun 14.

Abstract

Objective: To assess whether conference abstracts that report higher estimates of diagnostic accuracy are more likely to reach full-text publication in a peer-reviewed journal.

Study design and setting: We identified abstracts describing diagnostic accuracy studies presented between 2007 and 2010 at the Association for Research in Vision and Ophthalmology (ARVO) Annual Meeting. We extracted reported estimates of sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and diagnostic odds ratio (DOR). Between May and July 2015, we searched MEDLINE and EMBASE to identify corresponding full-text publications; if needed, we contacted abstract authors. Cox regression was used to estimate the association between each reported accuracy estimate and time to full-text publication; sensitivity, specificity, and AUC were logit transformed, and DOR was log transformed.
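For illustration, the sketch below shows how such an analysis could be set up: a Cox regression of time from presentation to full-text publication, with the logit-transformed sensitivity estimate as the predictor. It is a minimal sketch using the Python lifelines package and entirely hypothetical data and column names; the abstract does not state which software or covariates the authors used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical example data: one row per conference abstract.
# 'sensitivity' is the accuracy estimate reported in the abstract,
# 'months' is the time from presentation to full-text publication
# (or to the end of the search period, for censored abstracts), and
# 'published' flags whether a full-text publication was found.
df = pd.DataFrame({
    "sensitivity": [0.90, 0.95, 0.85, 0.70, 0.75, 0.60, 0.92, 0.80],
    "months":      [12,   8,    24,   36,   60,   60,   60,   60],
    "published":   [1,    1,    1,    1,    0,    0,    0,    0],
})

# Logit transform, as described in the abstract:
# logit(p) = log(p / (1 - p)).
df["logit_sens"] = np.log(df["sensitivity"] / (1 - df["sensitivity"]))

# Fit a Cox proportional hazards model. exp(coef) is the hazard
# ratio (HR) for full-text publication per one-unit increase in
# logit(sensitivity).
cph = CoxPHFitter()
cph.fit(df[["logit_sens", "months", "published"]],
        duration_col="months", event_col="published")
cph.print_summary()
```

Under this setup, an HR above 1 would indicate that abstracts reporting higher sensitivity reach full-text publication sooner; an HR close to 1, as in the Results below, indicates no such association.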

Results: A full-text publication was found for 226 of 399 (57%) included abstracts. There was no association between reported estimates of sensitivity and full-text publication (hazard ratio [HR] 1.09 [95% confidence interval {CI} 0.98, 1.22]). The same applied to specificity (HR 1.00 [95% CI 0.88, 1.14]), AUC (HR 0.91 [95% CI 0.75, 1.09]), and DOR (HR 1.01 [95% CI 0.94, 1.09]).

Conclusion: Almost half of the ARVO conference abstracts describing diagnostic accuracy studies did not reach full-text publication. Studies whose abstracts reported higher accuracy estimates were not more likely to be published in full text.

Keywords: Diagnostic accuracy studies; Meta-analyses; Ophthalmology; Publication bias; Reporting bias; Sensitivity and specificity; Systematic reviews; Time-lag bias.

MeSH terms

  • Abstracting and Indexing / statistics & numerical data*
  • Congresses as Topic
  • Humans
  • Ophthalmology*
  • Peer Review, Research*
  • Publication Bias / statistics & numerical data*
  • Reproducibility of Results
  • Sensitivity and Specificity