Robust Lane Detection under Varying Lighting Conditions Using Adaptive Vision-Based Techniques
Abstract
Reliable lane detection is crucial to autonomous driving but remains challenging under varying lighting conditions. Illumination fluctuations caused by bright sunlight, shadows, or low light at night degrade image quality and reduce the accuracy of lane detection. This research proposes an adaptive approach to lane detection across lighting scenarios. For daytime, Region of Interest (ROI) masking combined with line averaging stabilizes the detected lane markings and improves their visibility; for nighttime, a Probabilistic Hough Transform-based method improves detection in low-light environments. An evaluation tool assesses image parameters that correlate with day or night conditions and dynamically selects the most suitable detection technique. The proposed method improves image preprocessing and combines several computer vision algorithms for accurate lane tracing, handling shadowed regions and faded markings and improving precision on multi-lane roadways with varying lane widths. The approach increases the accuracy of real-time lane recognition for autonomous vehicles on multi-lane highways under different degrees of illumination, contributing to safer and more efficient autonomous driving.
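The day/night selection and line-averaging steps described above could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the brightness threshold, the routine names, and the frame representation (a 2D list of grayscale intensities) are all assumptions made for the example.

```python
from statistics import mean

def classify_lighting(gray_frame, threshold=70):
    """Classify a grayscale frame as 'day' or 'night' by mean intensity.
    `threshold` is a hypothetical value; the paper's actual parameters
    are not given. gray_frame is a 2D list of intensities in 0-255."""
    brightness = mean(mean(row) for row in gray_frame)
    return "day" if brightness >= threshold else "night"

def select_detector(gray_frame):
    """Dynamically pick a detection routine based on lighting.
    The routine names here are illustrative placeholders."""
    if classify_lighting(gray_frame) == "day":
        return "roi_mask_and_average"   # ROI masking + line averaging
    return "probabilistic_hough"        # Probabilistic Hough Transform

def average_lane(segments):
    """Average Hough line segments (x1, y1, x2, y2) for one lane
    boundary into a single (slope, intercept) pair, which is the
    usual form of a line-averaging stabilization step."""
    fits = []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        fits.append((m, b))
    if not fits:
        return None
    return (mean(f[0] for f in fits), mean(f[1] for f in fits))
```

In a full pipeline the segments would come from an edge detector followed by a probabilistic Hough transform; here they are supplied directly so the averaging logic stands alone.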
Article Details

This work, like all articles published in JIWE, is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License. Readers are allowed to:
- Share — copy and redistribute the material in any medium or format, under the following conditions:
  - Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use;
  - NonCommercial — You may not use the material for commercial purposes;
  - NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.