Resources


Software Credentialed Access

Code for generating the HAIM multimodal dataset of MIMIC-IV clinical data and x-rays

Luis R Soenksen, Yu Ma, Cynthia Zeng, Leonard David Jean Boussioux, Kimberly Villalobos Carballo, Liangyuan Na, Holly Wiberg, Michael Li, Ignacio Fuentes, Dimitris Bertsimas

Code for generating the HAIM multimodal dataset of MIMIC-IV clinical data and x-rays

database code multimodality

Published: Aug. 23, 2022. Version: 1.0.1


Database Credentialed Access

Eye Gaze Data for Chest X-rays

Alexandros Karargyris, Satyananda Kashyap, Ismini Lourentzou, Joy Wu, Matthew Tong, Arjun Sharma, Shafiq Abedin, David Beymer, Vandana Mukherjee, Elizabeth Krupinski, Mehdi Moradi

This dataset was collected using an eye tracking system while a radiologist interpreted and read 1,083 public CXR images. The dataset contains the following aligned modalities: image, transcribed report text, dictation audio, and eye gaze data.

convolutional network heatmap eye tracking explainability audio chest cxr chest x-ray radiology machine learning multimodal deep learning

Published: Sept. 12, 2020. Version: 1.0.0
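
As a rough illustration of how the eye gaze data above can be turned into the fixation heatmaps referenced in the keywords, the sketch below accumulates duration-weighted fixation points on an image grid and smooths them with a Gaussian kernel. The (x, y, duration) layout is an assumption for illustration only; consult the dataset documentation for the actual file schema.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaze_heatmap(fixations, height, width, sigma=30):
        """Build a fixation-density heatmap for one chest X-ray.

        fixations: iterable of (x, y, duration) tuples in pixel coordinates.
        The (x, y, duration) layout is an assumption, not the dataset schema.
        """
        grid = np.zeros((height, width), dtype=np.float32)
        for x, y, duration in fixations:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < height and 0 <= xi < width:
                grid[yi, xi] += duration          # weight each fixation by dwell time
        heat = gaussian_filter(grid, sigma=sigma)  # smooth points into a heatmap
        if heat.max() > 0:
            heat /= heat.max()                     # normalize to [0, 1] for overlays
        return heat

    # Hypothetical fixations on a 2048 x 2048 image.
    demo = [(1024.0, 900.0, 0.35), (1100.0, 950.0, 0.20), (800.0, 1200.0, 0.50)]
    print(gaze_heatmap(demo, 2048, 2048).shape)  # (2048, 2048)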


Database Credentialed Access

CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays

Hyungyung Lee, Geon Choi, Jung Oh Lee, Hangyul Yoon, Hyuk Gi Hong, Edward Choi

CheXStruct is an automated pipeline that derives structured diagnostic reasoning steps from chest X-rays. CXReasonBench builds on this to evaluate whether models perform clinically grounded, multi-step reasoning beyond final diagnoses.

chest x-ray evaluation benchmark structured diagnostic pipeline structured chest x-ray qa diagnostic reasoning intermediate reasoning steps grounded reasoning structured reasoning

Published: Oct. 15, 2025. Version: 1.0.0


Database Restricted Access

LATTE-CXR: Locally Aligned TexT and imagE, Explainable dataset for Chest X-Rays

Elham Ghelichkhan, Tolga Tasdizen

This dataset provides bounding box-statement pairs for chest X-ray images, derived from radiologists' eye-tracking data (for explainability) and annotations, intended for training local visual-language models.

eye-tracking chest x-ray dataset automatically generated dataset caption-guided object detection image captioning with region-level description grounded radiology report generation phrase grounding xai multi-modal learning local visual-language models localization

Published: Feb. 4, 2025. Version: 1.0.0
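
As a minimal sketch of how bounding box-statement pairs such as these are typically consumed by a local visual-language model, the example below crops the referenced region so it can be paired with its statement. The record fields, coordinate convention, and placeholder values are assumptions, not the dataset's actual schema.

    from PIL import Image

    def crop_region(image, box):
        """Crop a chest X-ray to one annotated region.

        box is assumed to be (x_min, y_min, x_max, y_max) in pixel coordinates;
        check the dataset documentation for the real field names and ordering.
        """
        return image.crop(box)

    # Hypothetical record pairing a region with a report statement.
    record = {
        "box": (350, 420, 780, 910),                   # placeholder coordinates
        "statement": "Small left pleural effusion.",   # placeholder statement
    }
    cxr = Image.new("L", (1024, 1024))                 # stand-in for a real grayscale CXR
    region = crop_region(cxr, record["box"])
    print(region.size, "->", record["statement"])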


Database Credentialed Access

VinDr-CXR: An open dataset of chest X-rays with radiologist annotations

Ha Quy Nguyen, Hieu Huy Pham, Le Tuan Linh, Minh Dao, Lam Khanh

VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations

lesion detection chest x-ray interpretation computer vision disease classification deep learning

Published: June 22, 2021. Version: 1.0.0


Database Open Access

CheXmask Database: a large-scale dataset of anatomical segmentation masks for chest x-ray images

Nicolas Gaggion, Candelaria Mosquera, Martina Aineseder, Lucas Mansilla, Diego Milone, Enzo Ferrante

The CheXmask Database provides uniformly annotated segmentation masks for 657,566 chest radiographs. Images were segmented using HybridGNet, with automatic quality control indicated by RCA scores.

automatic quality assessment chest x-ray segmentation medical image segmentation

Published: Jan. 22, 2025. Version: 1.0.0
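
Because mask quality is flagged automatically via RCA scores, downstream users often filter to high-confidence masks before training. The sketch below shows that filtering step under an assumed column name and cutoff; check the released CSV files for the actual schema and score range.

    import pandas as pd

    def select_high_quality(metadata_csv, rca_column="Dice RCA (Mean)", threshold=0.9):
        """Keep only rows whose RCA-based quality estimate exceeds a threshold.

        The column name and the 0.9 cutoff are assumptions for illustration;
        adjust both to match the actual CheXmask files and your tolerance
        for lower-quality automatic segmentations.
        """
        df = pd.read_csv(metadata_csv)
        kept = df[df[rca_column] >= threshold]
        print(f"kept {len(kept)} of {len(df)} masks")
        return kept

    # Usage (path is a placeholder):
    # high_quality = select_high_quality("chexmask_metadata.csv")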


Database Credentialed Access

MIMIC-Ext-CXR-QBA: A Structured, Tagged, and Localized Visual Question Answering Dataset with Question-Box-Answer Triplets and Scene Graphs for Chest X-ray Images

Philip Müller, Friederike Jungmann, Georgios Kaissis, Daniel Rueckert

We present a large-scale CXR VQA dataset derived from MIMIC-CXR with 42M QA pairs, featuring hierarchical answers, bounding boxes, and structured tags. The QA pairs were generated using LLM-based extraction from radiology reports and localization models.

chest x-rays vqa localization scene graphs

Published: July 22, 2025. Version: 1.0.0
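
To make the question-box-answer structure concrete, the sketch below builds a small, hypothetical triplet and flattens it into a (question, answer, boxes) tuple for a VQA pipeline. The field names and layout are illustrative only and will not match the released files exactly.

    # Hypothetical question-box-answer triplet; field names are illustrative,
    # not the dataset's actual schema.
    triplet = {
        "question": "Is there an opacity in the left lower lung zone?",
        "boxes": [{"x": 120, "y": 610, "w": 240, "h": 200, "label": "left lower lung zone"}],
        "answer": {
            "short": "Yes",
            "detail": "Patchy opacity in the left lower lung zone, possibly atelectasis.",
        },
        "tags": ["opacity", "left lower lung zone"],
    }

    def flatten(t):
        """Turn one hierarchical triplet into (question, answer_text, box_list)."""
        answer_text = f'{t["answer"]["short"]}. {t["answer"]["detail"]}'
        boxes = [(b["x"], b["y"], b["w"], b["h"]) for b in t["boxes"]]
        return t["question"], answer_text, boxes

    print(flatten(triplet))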


Database Credentialed Access

Symile-MIMIC: a multimodal clinical dataset of chest X-rays, electrocardiograms, and blood labs from MIMIC-IV

Adriel Saporta, Aahlad Manas Puli, Mark Goldstein, Rajesh Ranganath

A multimodal clinical dataset consisting of CXRs, ECGs, and blood labs, designed to evaluate Symile, a simple contrastive loss that accommodates any number of modalities and allows any model to produce representations for each modality.

database cxr ecg chest x-ray mimic contrastive learning model multimodal electrocardiogram

Published: Jan. 28, 2025. Version: 1.0.0
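
The sketch below shows one way a contrastive loss over three modalities can be set up: each positive triple is scored with a multilinear inner product, and an InfoNCE-style cross-entropy is taken over in-batch negatives. It follows the general idea described for Symile but is not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def three_modality_contrastive_loss(cxr, ecg, labs, temperature=0.07):
        """Contrastive loss over three modality embeddings of shape (batch, dim).

        Each positive triple (i, i, i) is scored with a multilinear inner product
        sum_d cxr[i, d] * ecg[i, d] * labs[i, d]; negatives swap the third modality
        across the batch. A sketch in the spirit of Symile, not its reference code.
        """
        cxr, ecg, labs = (F.normalize(z, dim=-1) for z in (cxr, ecg, labs))
        pair = cxr * ecg                           # element-wise product, (batch, dim)
        logits = pair @ labs.t() / temperature     # multilinear scores, (batch, batch)
        targets = torch.arange(cxr.size(0), device=cxr.device)
        return F.cross_entropy(logits, targets)    # matched triples on the diagonal

    # Hypothetical usage with random 128-d embeddings for a batch of 8 studies.
    loss = three_modality_contrastive_loss(
        torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
    )
    print(loss.item())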


Database Credentialed Access

Chest X-ray segmentation images based on MIMIC-CXR

Li-Ching Chen, Po-Chih Kuo, Ryan Wang, Judy Gichoya, Leo Anthony Celi

A chest X-ray segmentation dataset derived from MIMIC-CXR using a deep learning algorithm and human examination.

segmentation chest x-rays cxr

Published: Aug. 18, 2022. Version: 1.0.0