Robust Medical Image Prediction via Adaptive Reconstruction: Bridging the Gap in Low-Quality Data


Prateek Singhal
Madan Singh

Abstract

Medical image prediction plays a significant role in clinical decision-making and in the early detection and diagnosis of disease. However, image quality strongly affects the accuracy of predictive models: poor-quality data, which typically arises from noise, artifacts, and low resolution, poses a major challenge for reliable medical image prediction. The proposed framework advances medical image analysis through three novel contributions. First, a hybrid architecture combines wavelet-based denoising with deep learning (DL) enhancement, unlike existing single-approach methods. Second, cross-modality robustness is validated on low-quality CT, MRI, and X-ray images from real clinics, in contrast to modality-specific solutions. Third, a closed-loop system, absent from current workflows, lets diagnostic predictions guide iterative image refinement. Benchmarks show 98.5% accuracy at 0.6 ms latency, with 19% fewer false positives than cascaded approaches, narrowing the gap posed by low-quality data. The method combines state-of-the-art image processing with machine learning algorithms to enhance the quality of medical images before feeding them into predictive models. The adaptive reconstruction model couples classic image-denoising techniques with DL-based approaches, selectively enhancing critical features while removing noise, and aims to recover lost or degraded information so that the reconstructed images are suitable for prediction tasks. Additionally, the work employs robust machine learning algorithms to improve prediction accuracy on the reconstructed images. Tested on various datasets, the framework showed significant improvements in predictive performance over traditional approaches that use low-quality images directly. The findings indicate that adaptive reconstruction improves both the visual quality of medical images and the overall performance of predictive models in clinical use cases.
The proposed adaptive reconstruction model thus represents a promising strategy for overcoming the constraints posed by low-quality data, improving accuracy and reliability and supporting clinically relevant outcomes in medical imaging.
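The abstract's hybrid pipeline pairs classical wavelet-based denoising with DL enhancement. As a minimal, NumPy-only sketch of the classical half (not the authors' implementation; the function name, one-level Haar transform, and threshold value are our illustrative assumptions), a soft-threshold wavelet denoiser can be written as:

```python
import numpy as np

def haar_denoise(img, threshold=0.1):
    """One-level 2-D Haar wavelet soft-threshold denoising (illustrative sketch).

    The detail subbands (LH, HL, HH) are soft-thresholded to suppress noise;
    the approximation subband (LL) is kept intact to preserve anatomy.
    Assumes an even-sized 2-D array.
    """
    img = img.astype(float)
    # Haar analysis: averages/differences along rows, then along columns
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    # Soft-threshold only the detail subbands
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    # Inverse Haar transform: columns first, then rows
    a2 = np.empty_like(a)
    a2[:, 0::2] = ll + lh
    a2[:, 1::2] = ll - lh
    d2 = np.empty_like(d)
    d2[:, 0::2] = hl + hh
    d2[:, 1::2] = hl - hh
    out = np.empty_like(img)
    out[0::2, :] = a2 + d2
    out[1::2, :] = a2 - d2
    return out
```

In a full pipeline of the kind the paper describes, the denoised output would then be passed to a learned enhancement network before prediction; with `threshold=0` the transform is exactly invertible, which makes the denoising step easy to unit-test.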

Article Details

How to Cite
Singhal, P., & Singh, M. (2026). Robust Medical Image Prediction via Adaptive Reconstruction: Bridging the Gap in Low-Quality Data. Journal of Informatics and Web Engineering, 5(1), 1–17. https://doi.org/10.33093/jiwe.2026.5.1.1
Section
Regular issue

References

L. Liu, A. I. Aviles-Rivero, and C.-B. Schönlieb, “Contrastive registration for unsupervised medical image segmentation,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 1, pp. 147-159, Jan. 2025, doi: 10.1109/TNNLS.2023.3332003.

C. Xue, L. Yu, P. Chen, Q. Dou, and P.-A. Heng, “Robust medical image classification from noisy labeled data with global and local representation guided co-training,” in IEEE Transactions on Medical Imaging, vol. 41, no. 6, pp. 1371-1382, June 2022, doi: 10.1109/TMI.2021.3140140.

S. Sai, A. Gaur, R. Sai, V. Chamola, M. Guizani, and J. J. P. C. Rodrigues, “Generative AI for transformative healthcare: A comprehensive study of emerging models, applications, case studies, and limitations,” in IEEE Access, vol. 12, pp. 31078-31106, 2024, doi: 10.1109/ACCESS.2024.3367715.

S. Umirzakova, S. Ahmad, L. Khan, and T. Whangbo, “Medical image super-resolution for smart healthcare applications: a comprehensive survey”, Information Fusion, vol. 103, pp. 102075, 2024, doi: 10.1016/j.inffus.2023.102075.

J. Mehta, and A. Majumdar, “Rodeo: Robust de-aliasing autoencoder for real-time medical image reconstruction”, Pattern Recognition, vol. 63, pp. 499-510, 2017, doi: 10.1016/j.patcog.2016.09.022.

S. K. Zhou et al., “A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises,” in Proceedings of the IEEE, vol. 109, no. 5, pp. 820-838, May 2021, doi: 10.1109/JPROC.2021.3054390.

A. Liso et al., “A review of deep learning-based anomaly detection strategies in industry 4.0 focused on application fields, sensing equipment, and algorithms,” in IEEE Access, vol. 12, pp. 93911-93923, 2024, doi: 10.1109/ACCESS.2024.3424488.

Y. Xu, R. Wang, R.-W. Zhao, X. Xiao, and R. Feng, “Semi-supervised and class-imbalanced open set medical image recognition,” in IEEE Access, vol. 12, pp. 122852-122877, 2024, doi: 10.1109/ACCESS.2024.3442569.

R. Heckel, M. Jacob, A. Chaudhari, O. Perlman, and E. Shimron, “Deep learning for accelerated and robust mri reconstruction”, Magnetic Resonance Materials in Physics, Biology and Medicine, vol. 37, no. 3, pp. 335-368, 2024, doi: 10.1007/s10334-024-01173-8.

G. Muoka, Y. Ding, C. Ukwuoma, A. Mutale, C. Ejiyi, A. Mzee et al., “A comprehensive review and analysis of deep learning-based medical image adversarial attack and defense”, Mathematics, vol. 11, no. 20, pp. 4272, 2023, doi: 10.3390/math11204272.

D. Shen, G. Wu, and H. Suk, “Deep learning in medical image analysis”, Annual Review of Biomedical Engineering, vol. 19, no. 1, pp. 221-248, 2017, doi: 10.1146/annurev-bioeng-071516-044442.

M. Sarmah, A. Neelima, and H. Singh, “Survey of methods and principles in three-dimensional reconstruction from two-dimensional medical images”, Visual Computing for Industry, Biomedicine, and Art, vol. 6, no. 1, 2023, doi: 10.1186/s42492-023-00142-7.

Y. Heng, M. Yinghua, F. Khan, A. Khan, F. Ali, A. AlZubi et al., “Survey: Application and analysis of generative adversarial networks in medical images”, Artificial Intelligence Review, vol. 58, no. 2, 2024, doi: 10.1007/s10462-024-10992-z.

E. Sizikova, A. Badal, J. Delfino, M. Lago, B. Nelson, N. Saharkhiz et al., “Synthetic data in radiological imaging: Current state and future outlook”, BJR|Artificial Intelligence, vol. 1, no. 1, 2024, doi: 10.1093/bjrai/ubae007.

A. Marey, P. Arjmand, A. Alerab, M. Eslami, A. Saad, N. Sanchez et al., “Explainability, transparency and black box challenges of AI in radiology: Impact on patient care in cardiovascular radiology”, Egyptian Journal of Radiology and Nuclear Medicine, vol. 55, no. 1, 2024, doi: 10.1186/s43055-024-01356-2.

M. Abdelsamea, U. Zidan, Z. Senousy, M. Gaber, E. Rakha, and M. Ilyas, “A survey on Artificial Intelligence in histopathology image analysis”, WIREs Data Mining and Knowledge Discovery, vol. 12, no. 6, 2022, doi: 10.1002/widm.1474.

Q. Pu, Z. Xi, S. Yin, Z. Zhao, and L. Zhao, “Advantages of transformer and its application for medical image segmentation: A survey”, BioMedical Engineering OnLine, vol. 23, no. 1, 2024, doi: 10.1186/s12938-024-01212-4.

N. Gaggion, C. Mosquera, L. Mansilla, J. Saidman, M. Aineseder, D. Milone et al., “CheXmask: A large-scale dataset of anatomical segmentation masks for multi-center chest x-ray images”, Scientific Data, vol. 11, no. 1, 2024, doi: 10.1038/s41597-024-03358-1.

J. Wolterink, A. Mukhopadhyay, T. Leiner, T. Vogl, A. Bucher, and I. Isgum, “Generative adversarial networks: A primer for radiologists”, RadioGraphics, vol. 41, no. 3, pp. 840-857, 2021, doi: 10.1148/rg.2021200151.

A. Armoundas, S. Narayan, D. Arnett, K. Spector-Bagdady, D. Bennett, L. Celi et al., “Use of Artificial Intelligence in improving outcomes in heart disease: A scientific statement from the American heart association”, Circulation, vol. 149, no. 14, 2024, doi: 10.1161/cir.0000000000001201.

V. Gavini, and G. Lakshmi, “CT image denoising model using image segmentation for image quality enhancement for liver tumor detection using CNN”, Traitement du Signal, vol. 39, no. 5, pp. 1807-1814, 2022, doi: 10.18280/ts.390540.

O. Pianykh, G. Langs, M. Dewey, D. Enzmann, C. Herold, S. Schoenberg et al., “Continuous learning AI in radiology: Implementation principles and early applications”, Radiology, vol. 297, no. 1, pp. 6-14, 2020, doi: 10.1148/radiol.2020200038.

J. Zou, B. Gao, Y. Song, and J. Qin, “A review of deep learning-based deformable medical image registration”, Frontiers in Oncology, vol. 12, 2022, doi: 10.3389/fonc.2022.1047215.

M. Al-Ayyoub, A. Abu-Dalo, Y. Jararweh, M. Jarrah, and M. Sa'd, “A GPU-based implementations of the fuzzy C-means algorithms for medical image segmentation”, The Journal of Supercomputing, vol. 71, no. 8, pp. 3149-3162, 2015, doi: 10.1007/s11227-015-1431-y.

B. Li, N. Liu, J. Bai, J. Xu, Y. Tang, and Y. Liu, “MTMU: Multi-domain transformation based Mamba-UNet designed for unruptured intracranial aneurysm segmentation”, BMC Medical Imaging, vol. 25, no. 1, 2025, doi: 10.1186/s12880-025-01611-6.

M. N. Nagib, R. Pervez, A. A. Nova, H. R. Nabil, Z. Aung, and M. F. Mridha, “TuSegNet: A transformer-based and attention-enhanced architecture for brain tumor segmentation,” in IEEE Open Journal of the Computer Society, vol. 6, pp. 750-761, 2025, doi: 10.1109/OJCS.2025.3569758.

S. M. Kolekar et al., “AttRes-UNet: A dual-model approach for brain tumor segmentation,” in 4th International Conference on Advancement in Electronics & Communication Engineering (AECE), pp. 1391-1395, 2024, doi: 10.1109/AECE59804.2024.10456789.

N. Mu, Z. Lyu, M. Rezaeitaleshmahalleh, J. Tang, and J. Jiang, “An attention residual u-net with differential preprocessing and geometric postprocessing: Learning how to segment vasculature including intracranial aneurysms”, Medical Image Analysis, vol. 84, pp. 102697, 2023, doi: 10.1016/j.media.2022.102697.

W. Yuan, Y. Peng, Y. Guo, Y. Ren, and Q. Xue, “DCAU-Net: dense convolutional attention U-Net for segmentation of intracranial aneurysm images”, Visual Computing for Industry, Biomedicine, and Art, vol. 5, no. 1, 2022, doi: 10.1186/s42492-022-00105-4.

L. Hou, J. Zhang, L. Zhao, K. Meng, and X. Feng, “CTA image segmentation method for intracranial aneurysms based on MGLIA net”, Scientific Reports, vol. 15, no. 1, 2025, doi: 10.1038/s41598-025-95143-2.

C. Chituru, S. Ho, and I. Chai, “Diabetes risk prediction using Shapley additive explanations for feature engineering”, Journal of Informatics and Web Engineering, vol. 4, no. 2, pp. 18-35, 2025, doi: 10.33093/jiwe.2025.4.2.2.

C. Pabitha, B. Vanathi, K. Revathi, and S. Benila, “Enhancing citrus plant health through the application of image processing techniques for disease detection”, Journal of Informatics and Web Engineering, vol. 4, no. 2, pp. 53-63, 2025, doi: 10.33093/jiwe.2025.4.2.4.

S. Palaniappan, R. Logeswaran, and Y. Yong, “Machine learning model for assessing human well-being using brain wave activities”, Journal of Informatics and Web Engineering, vol. 4, no. 2, pp. 93-113, 2025, doi: 10.33093/jiwe.2025.4.2.7.
