XAI-Driven Explainability for Cardiovascular Diseases Prediction


Jacqueline Dike
Jarutas Andritsch

Abstract

The adoption of artificial intelligence (AI) in cardiovascular disease (CVD) prediction has significantly improved risk stratification, offering new avenues for early diagnosis and preventive care. With the growing availability of electronic health records and structured clinical datasets, machine learning (ML) and deep learning (DL) models have demonstrated strong predictive capabilities. Despite this performance, however, their adoption in healthcare is often constrained by the limited transparency and interpretability of many ML and DL models. This lack of explainability undermines clinical trust and raises ethical concerns. In high-stakes domains such as CVD prediction, clinicians require not only accurate outputs but also clear explanations of how those predictions are derived. This paper presents a comparative evaluation of explainable artificial intelligence (XAI) techniques applied to both conventional ML models (Logistic Regression, Support Vector Machine, Decision Tree, and Random Forest) and DL architectures (AutoInt, FT-Transformer, and Category Embedding). Using the Framingham Heart Study dataset, the study integrates SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to assess model interpretability and feature relevance. Results show that the conventional models offer superior explainability with comparable predictive accuracy, while the DL models, although slightly less interpretable, demonstrate potential when paired with advanced XAI techniques. The findings advocate hybrid approaches that balance accuracy and interpretability, supporting ethical and practical AI deployment in healthcare.
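To make the workflow concrete, the sketch below shows how SHAP and LIME are typically applied to one of the conventional models named in the abstract, a Random Forest trained on the Framingham data. This is a minimal illustration under stated assumptions, not the authors' code: the file name framingham.csv, the outcome column TenYearCHD, the drop-missing-rows preprocessing, and the model settings are assumptions based on the commonly distributed version of the dataset.

# Minimal sketch: SHAP (global) + LIME (local) explanations for a
# Random Forest CVD classifier. Assumptions (not from the paper): the
# data is a CSV named "framingham.csv" with a binary outcome column
# "TenYearCHD"; rows with missing values are simply dropped.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("framingham.csv").dropna()
X = df.drop(columns=["TenYearCHD"])
y = df["TenYearCHD"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Global interpretability: TreeExplainer computes exact Shapley values
# for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Older SHAP versions return one array per class (a list); newer ones
# return a single (samples, features, classes) array. Keep the positive
# (CHD) class either way.
positive = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(positive, X_test)  # beeswarm plot of feature relevance

# Local interpretability: LIME explains a single patient's prediction
# by fitting an interpretable surrogate around that instance.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["no CHD", "CHD"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top features driving this one prediction

TreeExplainer is exact and efficient for tree ensembles; for the DL architectures named in the abstract (AutoInt, FT-Transformer, Category Embedding), a model-agnostic alternative such as shap.KernelExplainer wrapped around the network's prediction function would be the analogous, though slower, choice.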

Article Details

How to Cite
Dike, J., & Andritsch, J. (2026). XAI-Driven Explainability for Cardiovascular Diseases Prediction. Journal of Informatics and Web Engineering, 5(1), 167–176. https://doi.org/10.33093/jiwe.2026.5.1.10
Section
Regular issue
