NeoCLIP: a self-supervised foundation model for the interpretation of neonatal radiographs

NPJ Digit Med. 2025 Sep 24;8(1):570. doi: 10.1038/s41746-025-01922-6.

ABSTRACT

Deep learning has proven to be an excellent tool for interpreting medical images in adults and older children, but it remains underdeveloped for neonates. This study developed NeoCLIP, a novel deep contrastive learning model designed to detect pathologies and medical devices on neonatal radiographs. We retrospectively studied 4629 infants admitted to a neonatal intensive care unit in Boston, MA (2008-2023), compiling 20,154 radiographs and 15,795 corresponding reports. The cohort was randomly split into training (80%), validation (10%), and test (10%) sets. NeoCLIP was trained to identify 15 radiological features and 5 medical devices relevant to neonatal intensive care. NeoCLIP achieved a higher AUROC than control models for all labels except portal venous gas. Incorporating demographic data further improved performance, though the gain was not statistically significant. NeoCLIP is the first deep learning model tailored to the interpretation of neonatal radiographs, surpassing the efficacy of comparable models developed for adults.
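The abstract does not describe the training objective in detail, but CLIP-style models of this kind are typically pretrained with a symmetric contrastive (InfoNCE) loss that aligns each radiograph with its paired report in a shared embedding space. The sketch below illustrates that generic objective only; it is not the authors' implementation, and the function name, embedding dimension, and temperature value are illustrative assumptions.

```python
# Minimal sketch of a CLIP-style contrastive objective, as commonly used to
# pretrain image-text models on paired radiographs and reports. This is NOT
# the NeoCLIP implementation; all names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings."""
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(image_i, report_j).
    logits = image_emb @ text_emb.t() / temperature

    # Matched radiograph/report pairs lie on the diagonal.
    targets = torch.arange(image_emb.size(0), device=image_emb.device)

    # Contrast images against reports and reports against images.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example: a batch of 8 paired embeddings in a hypothetical 512-dim space.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt))
```

Under this kind of objective, auxiliary inputs such as the demographic data mentioned in the abstract are usually fused into one of the encoders before the embeddings are compared, rather than changing the loss itself.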

PMID:40993183 | DOI:10.1038/s41746-025-01922-6