Lung Tumor Segmentation in Medical Imaging Using U-NET
Abstract
Tumors are a deadly condition often triggered by a range of abnormal cellular changes and genetic abnormalities. Because the disease is so serious, early diagnosis is essential, and early detection and treatment can significantly reduce mortality rates. This paper presents a tumor segmentation model for medical imaging based on the U-NET architecture, designed to increase segmentation precision. Skip connections link the encoding and decoding paths, boosting performance while simplifying model training. Images were cropped around the lower abdominal regions and resized to 256×256 pixels for standardization. The proposed model addresses class imbalance through data augmentation and oversampling. In the experiments, the model achieved a Dice score of 0.853±0.02, an F-score of 0.905±0.02, and a sensitivity of 0.897±0.02, compared against several existing models. The model is implemented with the pytorch-lightning library and successfully identifies tumors in lung cancer scans, proving to be a precise and efficient method of tumor identification. Accordingly, the study highlights the accuracy and speed of the proposed model as a useful tool for early tumor detection. The proposed approach yields more relevant and accurate segmentation and thus improves medical image analysis, provided that challenges such as imbalanced datasets are properly handled.
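The abstract describes the approach only at a high level. As a rough illustration of the ingredients it names (an encoder-decoder with skip connections, 256×256 single-channel inputs, Dice-based evaluation, and training through pytorch-lightning), the sketch below shows a minimal U-Net-style LightningModule. The layer widths, network depth, soft-Dice loss, learning rate, and (image, mask) batch format are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a U-Net-style segmentation model trained with
# pytorch-lightning. Assumes single-channel 256x256 CT slices and a
# binary tumour mask; widths, depth, and the soft-Dice loss are
# illustrative choices, not the paper's exact configuration.
import torch
import torch.nn as nn
import pytorch_lightning as pl


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


def soft_dice_loss(logits, target, eps=1e-6):
    """1 - Dice coefficient on sigmoid probabilities (binary masks)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return 1 - dice.mean()


class LitUNet(pl.LightningModule):
    def __init__(self, lr=1e-4):
        super().__init__()
        self.lr = lr
        # Encoder: conv blocks separated by max pooling, doubling channels.
        self.enc1 = conv_block(1, 64)
        self.enc2 = conv_block(64, 128)
        self.bottleneck = conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        # Decoder: upsample, then concatenate the matching encoder feature
        # map (skip connection) before each conv block.
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # per-pixel tumour logit

    def forward(self, x):
        e1 = self.enc1(x)                    # 256x256, 64 channels
        e2 = self.enc2(self.pool(e1))        # 128x128, 128 channels
        b = self.bottleneck(self.pool(e2))   # 64x64, 256 channels
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from e1
        return self.head(d1)

    def training_step(self, batch, batch_idx):
        image, mask = batch                  # assumed (image, mask) pairs
        loss = soft_dice_loss(self(image), mask)
        self.log("train_dice_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```

Such a module would typically be trained with pl.Trainer(...).fit(model, dataloader); the cropping, augmentation, and oversampling steps mentioned in the abstract belong to the data pipeline and are not shown here.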
Article Details

All articles published in JIWE are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License. Readers are allowed to
- Share — copy and redistribute the material in any medium or format under the following conditions:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use;
- NonCommercial — You may not use the material for commercial purposes;
- NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.