YCL-Based Smart Glasses for the Visually Impaired
Abstract
A key component of assistive technology that helps people with visual impairments access text in their daily lives is the text reader. This article presents an image-based text recognition tool for the blind, which captures the user's surroundings with a dual-camera module. To improve character detection and recognition in the developed smart glasses for the blind and visually impaired, a hybrid algorithm, the YCL Character Recognition Algorithm, is proposed; it combines You Only Look Once (YOLO), Convolutional Recurrent Neural Networks (CRNN), and Long Short-Term Memory (LSTM) networks so that the strengths of each method offset the drawbacks of the others. The YOLO-v8 model performs real-time text object detection, the CRNN extracts character features, and the LSTM enhances sequential character prediction. Once the visual data has been processed, the image-processing software delivers an audio output signal to the user. A benefit of these smart glasses is that they can read characters at both close and far distances. The proposed YCL technique was evaluated on a specially acquired dataset and shows notable gains in speed and accuracy over traditional homogeneous algorithms. Based on user surveys and experimental data, the proposed method successfully identifies words and characters and delivers audio output for the blind and visually impaired.
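The final step of the recognition stage described above, turning the CRNN/LSTM's per-timestep character predictions into a word string, can be illustrated with a greedy CTC-style decoder (a common choice for CRNN recognizers; the exact decoding scheme, charset, and scores below are illustrative assumptions, not the article's implementation):

```python
def ctc_greedy_decode(logits, charset, blank=0):
    """Collapse per-timestep class scores into text: take the argmax at
    each timestep, merge consecutive repeated labels, then drop blanks."""
    best = [max(range(len(step)), key=step.__getitem__) for step in logits]
    chars, prev = [], blank
    for idx in best:
        if idx != blank and idx != prev:
            chars.append(charset[idx - 1])  # index 0 is reserved for blank
        prev = idx
    return "".join(chars)

# Toy example: 5 classes (blank + "E", "X", "I", "T"), 8 timesteps.
scores = [
    [0.1, 0.8, 0.0, 0.0, 0.1],  # "E"
    [0.1, 0.7, 0.1, 0.0, 0.1],  # "E" (repeat, merged)
    [0.9, 0.0, 0.1, 0.0, 0.0],  # blank
    [0.1, 0.1, 0.7, 0.0, 0.1],  # "X"
    [0.8, 0.1, 0.1, 0.0, 0.0],  # blank
    [0.1, 0.0, 0.1, 0.7, 0.1],  # "I"
    [0.1, 0.0, 0.0, 0.1, 0.8],  # "T"
    [0.1, 0.0, 0.0, 0.1, 0.8],  # "T" (repeat, merged)
]
print(ctc_greedy_decode(scores, "EXIT"))  # -> EXIT
```

The blank label and the merge-then-drop rule let the sequence model emit a variable number of timesteps per character, which is what makes the LSTM's sequential predictions usable for words of arbitrary length.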
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
[1] M. J. Burton, J. Ramke, A. P. Marques, R. R. A. Bourne, N. Congdon and I. Jones, “The Lancet Global Health Commission on Global Eye Health: vision beyond 2020,” The Lancet Glob. Health, vol. 9, no. 4, pp. e489–e551, 2021.
[2] A. A. Diaz Toro, S. E. Campaña Bastidas and E. F. Caicedo Bravo, “Methodology to Build a Wearable System for Assisting Blind People in Purposeful Navigation,” in 3rd Int. Congr. Informat. and Commun. Technol., San Jose, CA, USA, pp. 205–212, 2020.
[3] S. Nazim, S. Firdous, V. K. Shukla and S. R. Pillai, “Smart Glasses: A Visual Assistant for the Blind,” in 2022 Int. Mobil. and Embed. Technol. Conf., Noida, India, pp. 621–626, 2022.
[4] D. D. Brilli, E. Georgaras, S. Tsilivaki, N. Melanitis and K. Nikita, “AIris: AI-powered Wearable Assistive Device for the Visually Impaired,” in IEEE 10th RAS/EMBS Int. Conf. Biomed. Robot. and Biomechatron., Heidelberg, Germany, pp. 1236–1241, 2024.
[5] V. Moram, S. Zahruddin and S. Kumar, “Multifunctional Assistive Smart Glasses for Visually Impaired,” SN Comput. Sci., vol. 6, p. 173, 2025.
[6] “Wearable AI System Helps Blind People Navigate,” Nature Publishing Group, 2025. [Online]. Available: https://techxplore.com/news/2025-04-wearable-ai-people.html. [Accessed: 14 April 2025].
[7] Envision, “Envision Glasses: Smart Glasses for the Blind and Visually Impaired.” [Online]. Available: https://www.letsenvision.com/glasses/home.
[8] NoorCam, “NoorCam MyEye,” 2025. [Online]. Available: https://www.noorcam.com/en-ae/noorcam-myeye?srsltid=AfmBOorhamlLIvgsO1iu2CbC11YD8LWDbc2riEhTZ5IwTM8OX4r-kYpi. [Accessed: 15 June 2025].
[9] P. Ramya, R. Ramya and U. Ponvizhi, “Optical Character Recognition using Python,” Int. J. Trend in Sci. Res. and Develop., vol. 5, no. 3, pp. 1052–1054, 2021.
[10] S. Reshmi, R. D. Salagar and S. S. Veni, “Text Detection in Image Based on the Morphology Method,” Int. J. Eng. Develop. and Res., vol. 7, no. 3, pp. 468–471, 2019.
[11] P. Swaroop and N. Sharma, “An Overview of Various Template Matching Methodologies in Image Processing,” Int. J. Comput. Appl., vol. 153, pp. 8–14, 2016.
[12] F. Ashraf and V. A. Nurjahan, “Connected Component Clustering Based Text Detection with Structure Based Partition and Grouping,” J. Comput. Eng., vol. 16, no. 5, pp. 50–56, 2014.
[13] T. Kumar, C. Khosla and K. Vashistha, “Character Recognition Techniques using Machine Learning: A Comprehensive Study,” in Int. Conf. Appl. Intellig. and Sustain. Comput., pp. 1–6, 2023.
[14] J. G. Dy and C. E. Brodley, “Feature Selection for Unsupervised Learning,” J. Mach. Learn. Res., vol. 5, pp. 845–889, 2004.
[15] J. Gui, T. Chen, J. Zhang, Q. Cao, Z. Sun, H. Luo and D. Tao, “A Survey on Self-Supervised Learning: Algorithms, Applications, and Future Trends,” IEEE Trans. Patt. Analy. and Mach. Intellig., vol. 46, no. 12, pp. 9052–9071, 2024.
[16] X. Peng, Z. Huang, K. Chen, J. Guo and W. Qiu, “RLST: A Reinforcement Learning Approach to Scene Text Detection Refinement,” in 2020 25th Int. Conf. Patt. Recogn., Milan, Italy, pp. 1521–1528, 2021.
[17] R. Kajale, S. Das and P. Medhekar, “Supervised Machine Learning in Intelligent Character Recognition of Handwritten and Printed Nameplate,” in 2017 Int. Conf. Adv. Comput., Commun. and Contr., Mumbai, India, pp. 1–5, 2017.
[18] A. Rowlands, Physics of Digital Photography. IOP Publishing, 2017.
[19] Raspberry Pi, “Raspberry Pi Zero 2 W.” [Online]. Available: https://www.raspberrypi.com/products/raspberry-pi-zero-2-w/. [Accessed: 16 June 2025].
[20] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conf. Comput. Vis. and Patt. Recogn., Las Vegas, NV, USA, pp. 779–788, 2016.
[21] K. O'Shea and R. Nash, “An Introduction to Convolutional Neural Networks,” arXiv, abs/1511.08458, 2015.
[22] I. D. Mienye, T. G. Swart and G. Obaido, “Recurrent Neural Networks: A Comprehensive Review of Architectures, Variants, and Applications,” Information, vol. 15, no. 9, p. 517, 2024.
[23] R. C. Staudemeyer and E. R. Morris, “Understanding LSTM — A Tutorial into Long Short-Term Memory Recurrent Neural Networks,” arXiv, abs/1909.09586, 2019.