
A Study on the Reliability of Visual XAI Methods for X-Ray Images

  • The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, has shown strong performance in various medical diagnostic tasks, surpassing human ability in some cases. However, the models' black-box nature has limited their adoption in medical applications that require trust in, and explainability of, model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight the regions of the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM, and non-gradient-based approaches, such as Eigen-CAM, are applicable to YOLO models and do not require implementing new layers. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDr-CXR Chest X-ray Abnormalities Detection dataset and discusses the limitations of these methods for explaining model decisions to data scientists.
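The Eigen-CAM heatmap mentioned in the abstract needs no gradients: it projects a convolutional layer's activations onto their first principal component. A minimal NumPy sketch of that idea (the function name `eigen_cam`, the centering step, and the ReLU/normalisation post-processing are illustrative assumptions, not the paper's exact implementation, which applies library tooling to YOLO feature maps):

```python
import numpy as np

def eigen_cam(activations: np.ndarray) -> np.ndarray:
    """Sketch of Eigen-CAM for a single image.

    activations: (C, H, W) feature map extracted from one
    convolutional layer of the detector.
    Returns a (H, W) heatmap normalised to [0, 1].
    """
    c, h, w = activations.shape
    a2d = activations.reshape(c, h * w).T        # (H*W, C): one row per pixel
    a2d = a2d - a2d.mean(axis=0)                 # center channel features
    # First right singular vector = first principal component of the features
    _, _, vt = np.linalg.svd(a2d, full_matrices=False)
    cam = (a2d @ vt[0]).reshape(h, w)            # project pixels onto the 1st PC
    cam = np.maximum(cam, 0)                     # keep positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()                    # scale heatmap to [0, 1]
    return cam
```

In practice the heatmap is then resized to the input resolution and overlaid on the X-ray image; because no gradients flow through the model, the same routine works for any layer of a YOLO backbone.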

Metadata
Authors:Jan Stodt, Manav Madan, Christoph Reich, Luka Filipovic, Tomi Ilijas
URN:urn:nbn:de:bsz:fn1-opus4-97341
DOI:https://doi.org/10.3233/SHTI230416
ISBN:978-1-64368-401-7
Parent Title (English):Healthcare Transformation with Informatics and Artificial Intelligence : ICIMTH 2023, the 21st International Conference on Informatics, Management, and Technology in Healthcare, 1-3 July 2023, Athens, Greece
Publisher:IOS Press
Place of publication:Amsterdam
Document Type:Conference Proceeding
Language:English
Year of Completion:2023
Release Date:2023/07/13
Tag:Unreliability; Visual XAI; YOLO
First Page:32
Last Page:35
Open-Access-Status:Open Access (Gold)
Licence:Creative Commons - CC BY-NC - Attribution-NonCommercial 4.0 International