Distinguishing between aldosterone-producing adenomas and non-functional adrenocortical adenomas using the YOLOv5 network

Acta Radiol. 2024 May 20:2841851241251446. doi: 10.1177/02841851241251446. Online ahead of print.

Abstract

Background: You Only Look Once version 5 (YOLOv5), a one-stage deep-learning (DL) algorithm for object detection and classification, offers high speed and accuracy for identifying targets.

Purpose: To investigate the feasibility of using the YOLOv5 algorithm to non-invasively distinguish between aldosterone-producing adenomas (APAs) and non-functional adrenocortical adenomas (NF-ACAs) on computed tomography (CT) images.

Material and methods: A total of 235 patients diagnosed with ACAs between January 2011 and July 2022 were included in this study. Of these, 215 patients, 81 (37.7%) with APAs and 134 (62.3%) with NF-ACAs, were randomly divided into either the training set or the validation set at a ratio of 9:1. The remaining 20 patients, including 8 (40.0%) with APAs and 12 (60.0%) with NF-ACAs, were collected for the testing set. Five submodels of YOLOv5 (YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) were trained and evaluated on these datasets.
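The 9:1 random division into training and validation sets could be sketched as follows. This is a minimal illustration, not the authors' code; whether the split was stratified by diagnosis is an assumption, and the patient identifiers and the `seed` value are hypothetical.

```python
import random

def stratified_split(patients, train_frac=0.9, seed=42):
    """Randomly split (patient_id, label) pairs into training and
    validation sets at a 9:1 ratio, shuffling within each diagnosis
    so the APA vs. NF-ACA proportions are roughly preserved."""
    by_label = {}
    for pid, label in patients:
        by_label.setdefault(label, []).append(pid)
    rng = random.Random(seed)
    train, val = [], []
    for label, ids in by_label.items():
        rng.shuffle(ids)
        cut = round(len(ids) * train_frac)
        train += [(pid, label) for pid in ids[:cut]]
        val += [(pid, label) for pid in ids[cut:]]
    return train, val

# Cohort sizes from the abstract: 81 APA + 134 NF-ACA = 215 patients.
cohort = [(f"pt{i}", "APA") for i in range(81)] + \
         [(f"pt{i}", "NF-ACA") for i in range(81, 215)]
train, val = stratified_split(cohort)
```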

Results: In the testing set, the mAP_0.5 value for YOLOv5x (0.988) was higher than the values for YOLOv5n (0.969), YOLOv5s (0.965), YOLOv5m (0.974), and YOLOv5l (0.983). The mAP_0.5:0.95 value for YOLOv5x (0.711) was also higher than the values for YOLOv5n (0.587), YOLOv5s (0.674), YOLOv5m (0.671), and YOLOv5l (0.698). The inference time of YOLOv5n was 2.4 ms per image in the testing set, the fastest among the five submodels.
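The two metrics differ only in the intersection-over-union (IoU) threshold: mAP_0.5 counts a detection as correct when its box overlaps the ground truth with IoU ≥ 0.5, while mAP_0.5:0.95 averages precision over the ten thresholds 0.50, 0.55, …, 0.95, so its values are necessarily lower. A toy sketch of the IoU computation and the threshold sweep (not the paper's evaluation code; the example boxes are made up):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# The ten IoU thresholds behind the "0.5:0.95" notation.
thresholds = [0.5 + 0.05 * k for k in range(10)]  # 0.50, 0.55, ..., 0.95

# A slightly offset prediction: it counts as a hit at the looser
# thresholds but fails the strictest ones, which is why mAP_0.5:0.95
# is lower than mAP_0.5 for every submodel reported above.
pred, gt = (10, 10, 50, 50), (12, 12, 52, 52)
v = iou(pred, gt)
hits = sum(v >= t for t in thresholds) / len(thresholds)
```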

Conclusion: The YOLOv5 algorithm can accurately and efficiently distinguish between APAs and NF-ACAs on CT images, with YOLOv5x showing the best identification performance.

Keywords: Non-functional adrenocortical adenoma; aldosterone-producing adenoma; deep learning; differential diagnosis.