Radiology Report Generation Using Deep Learning and Web-Based Deployment for Chest X-Ray Analysis


David Agbolade
Peyman Heydarian
Shakeel Ahmad

Abstract

The rapid growth in the volume of medical images has placed radiology departments under considerable strain. Radiologists face heavier workloads than ever, which affects diagnostic quality and patient care. Writing a radiology report manually takes 15 to 30 minutes per case and is subject to inter-observer variability, and modern departments processing over 230 cases a week face long diagnostic delays. Existing automated report generation systems suffer from significant limitations, including poor clinical interpretability, inadequate Digital Imaging and Communications in Medicine (DICOM) integration, and unsuitable deployment architectures, all of which hinder the wider adoption of medical artificial intelligence in clinical settings. This work presents a novel automated web-based system for generating radiology reports from chest X-ray images using state-of-the-art deep learning methods. We propose a CheXNet-based convolutional neural network (CNN) with attention mechanisms and Gated Recurrent Units (GRU) to produce clinically useful diagnostic summaries. The system is fully DICOM compatible and is deployed with Streamlit, Docker, and AWS cloud services to integrate smoothly with clinical workflows. The Indiana University Chest X-ray dataset, comprising 7,491 images and 3,955 reports, was used for training and testing. The system outperformed state-of-the-art methods, achieving BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.685, 0.595, 0.533, and 0.482, respectively, together with a METEOR score of 0.392 and a ROUGE-L score of 0.718. The deployed web application provides real-time report generation with attention heatmap visualisations, enabling clinicians to understand the model's decision-making process. This interpretability feature addresses critical trust barriers in clinical AI adoption whilst supporting radiologists with diagnostic assistance for routine chest imaging cases.
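The encoder-decoder architecture described in the abstract can be summarised in a short sketch. The authors' implementation is not reproduced here; the following PyTorch code is a minimal illustration, assuming a DenseNet-121 (CheXNet-style) backbone as the visual encoder, additive (Bahdanau-style) attention over its spatial feature map, and a GRUCell decoder. All class names, dimensions, and hyperparameters are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of the described pipeline: a CheXNet-style DenseNet-121
# encoder, additive (Bahdanau-style) attention, and a GRU decoder.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        densenet = models.densenet121(weights="DEFAULT")
        self.features = densenet.features           # (B, 1024, 7, 7) for 224x224 input

    def forward(self, images):
        fmap = self.features(images)                # (B, 1024, 7, 7)
        return fmap.flatten(2).permute(0, 2, 1)     # (B, 49, 1024) spatial regions

class Attention(nn.Module):
    def __init__(self, feat_dim=1024, hid_dim=512, attn_dim=256):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hid_proj = nn.Linear(hid_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, 49, feat_dim); hidden: (B, hid_dim)
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.hid_proj(hidden).unsqueeze(1)))  # (B, 49, 1)
        alpha = torch.softmax(e, dim=1)             # attention weights over regions
        context = (alpha * feats).sum(dim=1)        # (B, feat_dim)
        return context, alpha.squeeze(-1)

class GRUDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=1024, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attention = Attention(feat_dim, hid_dim)
        self.gru = nn.GRUCell(emb_dim + feat_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, token, hidden, feats):
        # One decoding step: attend to image regions, then update the GRU state.
        context, alpha = self.attention(feats, hidden)
        hidden = self.gru(torch.cat([self.embed(token), context], dim=1), hidden)
        return self.out(hidden), hidden, alpha      # alpha drives the heatmaps
```

At each decoding step the attention weights alpha form a distribution over the 7x7 spatial grid; upsampled and overlaid on the input radiograph, such weights are what an attention heatmap visualisation of the kind described above renders.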
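The reported BLEU-1 to BLEU-4, METEOR, and ROUGE-L scores are standard text-overlap metrics for generated reports. The paper's exact evaluation tooling is not stated here, so the snippet below shows one common way to compute them with the open-source nltk and rouge-score packages, using a made-up reference/candidate pair.

```python
# One common way to compute the reported metrics (nltk, rouge-score);
# treat this as an illustrative reproduction, not the authors' code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score  # needs nltk's wordnet data
from rouge_score import rouge_scorer

reference = "the heart is normal in size and the lungs are clear".split()
candidate = "heart size is normal and lungs are clear".split()

smooth = SmoothingFunction().method1
bleu_weights = {
    "BLEU-1": (1.0, 0.0, 0.0, 0.0),
    "BLEU-2": (0.5, 0.5, 0.0, 0.0),
    "BLEU-3": (1 / 3, 1 / 3, 1 / 3, 0.0),
    "BLEU-4": (0.25, 0.25, 0.25, 0.25),
}
for name, w in bleu_weights.items():
    score = sentence_bleu([reference], candidate, weights=w, smoothing_function=smooth)
    print(f"{name}: {score:.3f}")

print(f"METEOR: {meteor_score([reference], candidate):.3f}")

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(" ".join(reference), " ".join(candidate))["rougeL"].fmeasure
print(f"ROUGE-L: {rouge_l:.3f}")
```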
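The DICOM-compatible Streamlit front end can likewise be sketched. Below, pydicom handles DICOM parsing and generate_report is a placeholder stub standing in for the trained encoder-decoder; this illustrates the deployment pattern only, not the authors' application code.

```python
# Sketch of a DICOM-aware Streamlit front end of the kind the abstract
# describes. generate_report is a placeholder stub, not the real model.
import numpy as np
import pydicom
import streamlit as st

def generate_report(image):
    """Placeholder for model inference; returns (report_text, heatmap)."""
    return "Heart size within normal limits. Lungs are clear.", np.full_like(image, 0.5)

st.title("Chest X-ray Report Generation")
uploaded = st.file_uploader("Upload a chest X-ray (DICOM)", type=["dcm"])

if uploaded is not None:
    ds = pydicom.dcmread(uploaded)                  # parse the uploaded DICOM file
    pixels = ds.pixel_array.astype(np.float32)
    pixels = (pixels - pixels.min()) / (np.ptp(pixels) + 1e-8)  # normalise to [0, 1]
    st.image(pixels, caption="Input study", clamp=True)

    report, heatmap = generate_report(pixels)       # real system: model inference here
    st.subheader("Generated report")
    st.write(report)
    st.image(heatmap, caption="Attention heatmap", clamp=True)
```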

Article Details

How to Cite
Agbolade, D., Heydarian, P., & Ahmad, S. (2026). Radiology Report Generation Using Deep Learning and Web-Based Deployment for Chest X-Ray Analysis. Journal of Informatics and Web Engineering, 5(1), 37–51. https://doi.org/10.33093/jiwe.2026.5.1.3
Section
Regular issue

References

S. K. Zhou, H. Greenspan, and D. Shen, "Deep learning for medical image analysis," J. Pathol. Inform., vol. 9, p. 7, 2018, doi: 10.4103/jpi.jpi_27_18.

N. Habib and M. Rahman, “Diagnosis of corona diseases from associated genes and X-ray images using machine learning algorithms and deep CNN,” Informatics in Medicine Unlocked, vol. 24, p. 100621, 2021, doi: 10.1016/j.imu.2021.100621.

O. Alfarghaly, R. Khaled, A. ElKorany, M. Helal, and A. Fahmy, “Automated radiology report generation using conditioned transformers,” Informatics in Medicine Unlocked, vol. 24, p. 100557, 2021, doi: 10.1016/j.imu.2021.100557.

S. Raminedi, S. Shridevi, and D. Won, “Multi-modal transformer architecture for medical image analysis and automated report generation,” Scientific Reports, vol. 14, no. 1, 2024, doi: 10.1038/s41598-024-69981-5.

Y. Zhang, D. Merck, E. Tsai, C. Manning, and C. Langlotz, “Optimizing the factual correctness of a summary: A study of summarizing radiology reports,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, doi: 10.18653/v1/2020.acl-main.458.

P. Sloan, P. Clatworthy, E. Simpson and M. Mirmehdi, “Automated radiology report generation: A review of recent advances,” in IEEE Reviews in Biomedical Engineering, vol. 18, pp. 368-387, 2025, doi: 10.1109/RBME.2024.3408456.

G. Litjens et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60-88, Dec. 2017, doi: 10.1016/j.media.2017.07.005.

K. Xu et al., “Show, attend and tell: Neural image caption generation with visual attention,” Proceedings of the 32nd International Conference on Machine Learning, Lille, France, Jul. 2015, pp. 2048-2057, doi: 10.48550/arXiv.1502.03044.

D. Parres, A. Albiol, and R. Paredes, “Improving radiology report generation quality and diversity through reinforcement learning and text augmentation,” Bioengineering, vol. 11, no. 4, p. 351, 2024, doi: 10.3390/bioengineering11040351.

K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: A method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), Philadelphia, PA, USA, Jul. 2002, pp. 311–318, doi: 10.3115/1073083.1073135.

S. Banerjee and A. Lavie, “METEOR: An automatic metric for MT evaluation with improved correlation with human judgments,” in Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Ann Arbor, MI, USA, Jun. 2005, pp. 65–72. [Online]. Available: https://aclanthology.org/W05-0909/

C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,” in Text Summarization Branches Out, Barcelona, Spain, Jul. 2004, pp. 74–81. [Online]. Available: https://aclanthology.org/W04-1013/

Singh and S. Singh, “ChestX-Transcribe: A multimodal transformer for automated radiology report generation from chest X-rays,” Frontiers in Digital Health, vol. 7, 2025, doi: 10.3389/fdgth.2025.1535168.

T. Jorg et al., “A novel reporting workflow for automated integration of artificial intelligence results into structured radiology reports,” Insights into Imaging, vol. 15, no. 1, 2024, doi: 10.1186/s13244-024-01660-5.

T. Nakaura et al., “Preliminary assessment of automated radiology report generation with generative pre-trained transformers: Comparing results to radiologist-generated reports,” Japanese Journal of Radiology, vol. 42, no. 2, pp. 190–200, 2023, doi: 10.1007/s11604-023-01487-y.

W. Akbar, M. Haq, A. Abdullah, S. Daudpota, A. Imran, and M. Ullah, “Automated report generation: A GRU based method for chest X-rays,” in Proceedings of the 2023 4th International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), 2023, pp. 1–6, doi: 10.1109/iCoMET57998.2023.10099311.

C. Y. Seek, S. Y. Ooi, Y. H. Pang, S. L. Lew, and X. Y. Heng, “Elderly and smartphone apps: Case study with lightweight MySejahtera,” Journal of Informatics and Web Engineering, vol. 2, no. 1, pp. 13–24, Mar. 2023, doi: 10.33093/jiwe.2023.2.1.2.

S. M. K. Loh and Z. Che Embi, “A systematic review on non-functional requirements documentation in Agile methodology,” Journal of Informatics and Web Engineering, vol. 1, no. 2, pp. 19–29, Sep. 2022, doi: 10.33093/jiwe.2022.1.2.2.

G. Magalhaes, R. Santos, L. Vogado, A. Paiva, and P. Neto, “XRaySwinGen: Automatic medical reporting for X-ray exams with multimodal model,” Heliyon, vol. 10, no. 7, p. e27516, 2024, doi: 10.1016/j.heliyon.2024.e27516.

S. Niksaz and F. Ghasemian, “Improving chest X-ray report generation by leveraging text of similar images,” SSRN Electronic Journal, 2022, doi: 10.2139/ssrn.4211036.
