Mind Care Solution Through Human Facial Expression
Manuscript Received: 17 January 2024, Accepted: 16 February 2024, Published: 15 September 2024, ORCiD: 0009-0006-3101-524X, https://doi.org/10.33093/jetap.2024.6.2.2
Abstract
The proposed system lets psychologists use technology to support decision-making, providing convenience for both patients and psychologists. Psychologists can monitor a patient's progress by analysing the patient's emotion reports over time, and can make more accurate decisions by combining historical data with emotion detection technology. With the proposed system, neither the patient nor the psychologist has to travel anywhere; each needs only a device and an internet connection. Based on the characteristics of the patient's emotions, the psychologist needs only the report generated by the system to prescribe medicine in an emergency situation. The proposed system improves the consultation process through a machine-learning emotion detection algorithm: it detects the patient's facial emotions using a CNN with a Haar cascade classifier. We train our model on the FER-2013 dataset, adopt the VGG-19 architecture, and use the ReLU activation function to enhance the model's accuracy. We use the Django framework for integration with the frontend. The model initially achieved 82.3% accuracy on the dataset; after fine-tuning, accuracy improved from 82.3% to 92%. We use recall and the F1 score to evaluate model performance. The model was evaluated on a testing set of 48×48-pixel grayscale images. To achieve our accuracy goal, we split the dataset into training, validation and testing sets. Using the CNN, our system achieves 93% accuracy, helping patients receive feedback on the selected questions from the chosen psychologist. The patient selects a psychologist and answers the psychologist's questions; the system stores the patient's emotion against every question and generates an emotion report, which the psychologist can analyse to provide a better prescription.
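The evaluation and reporting steps described above can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the function names, the 80/10/10 split ratio, and the session format are assumptions introduced here, and the actual system trains a VGG-19 CNN rather than computing metrics on label lists.

```python
from collections import Counter
import random

# The seven FER-2013 emotion classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle and split samples into training, validation and testing sets.
    The 80/10/10 ratio is an assumed default, not stated in the paper."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

def recall_f1(y_true, y_pred, label):
    """Per-class recall and F1 score from true vs. predicted emotion labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f1

def emotion_report(session):
    """Aggregate the emotion stored against every question in a session
    into the per-emotion frequency report the psychologist reviews."""
    counts = Counter(emotion for _question, emotion in session)
    total = sum(counts.values())
    return {emotion: count / total for emotion, count in counts.items()}
```

For example, a four-question session recorded as `[("Q1", "sad"), ("Q2", "sad"), ("Q3", "neutral"), ("Q4", "happy")]` would yield a report of 50% sad, 25% neutral and 25% happy, which the psychologist can read alongside the per-class recall and F1 figures.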
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.