Screen failure data in clinical trials: Are screening logs worth it?

Clin Trials. 2014 Aug;11(4):467-472. doi: 10.1177/1740774514538706. Epub 2014 Jun 12.

Abstract

Background: Clinical trials frequently spend considerable effort to collect data on patients who were assessed for eligibility but not enrolled. The Consolidated Standards of Reporting Trials (CONSORT) guidelines' recommended flow diagram for randomized clinical trials reinforces the belief that the collection of screening data is a necessary and worthwhile endeavor. The rationale for collecting screening data includes scientific, trial management, and ethno-socio-cultural reasons.

Purpose: We posit that the cost of collecting screening data is not justified, in part because of the inability to centrally monitor and verify screening data in the same manner as other clinical trial data.

Methods: To illustrate the effort and site-to-site variability involved, we analyzed the screening data from POINT (Platelet-Oriented Inhibition in New Transient Ischemic Attack and Minor Ischemic Stroke), a multicenter, randomized clinical trial of patients with transient ischemic attack or minor ischemic stroke.

Results: Data were collected on more than 27,000 patients screened across 172 enrolling sites, 95% of whom were not enrolled. Although the overall rate of return of screen failure logs was high (95%), a considerable number of logs (23%) were returned with 'no data to report', often for administrative reasons rather than because no patients had been screened.

Conclusion: In spite of attempts to standardize the collection of screening data, differences in site processes make it challenging for multicenter clinical trials to collect those data completely and uniformly. The effort required to centrally collect high-quality data on a large number of screened patients may outweigh the scientific value of the data. Moreover, the lack of a standardized definition of 'screened' and the difficulty of collecting meaningful characteristics for patients who have not signed consent limit the ability to compare across studies and to assess generalizability and selection bias as intended.