https://journals.mmupress.com/index.php/jiwe/issue/feed Journal of Informatics and Web Engineering 2025-10-14T05:31:37+08:00 Prof. Dr. Su-Cheng Haw sucheng@mmu.edu.my Open Journal Systems <p>The <strong>Journal of Informatics and Web Engineering (JIWE)</strong> is an international, peer-reviewed journal that advances the engineering of user-centric, web-native information systems. We publish original research, reviews, and case studies that unite informatics with rigorous web-engineering methods across the full lifecycle, from requirements and design to deployment and evolution.</p> <p>eISSN: <strong>2821-370X</strong> | Publisher: <a href="https://journals.mmupress.com/"><strong>MMU Press</strong></a> | Access: <strong>Open</strong> | Frequency: <strong>Triannual (Feb, June &amp; October)</strong> effective from 2024; <strong>Quarterly (Jan, Apr, July &amp; October)</strong> effective from 2026 | Website: <strong><a href="https://journals.mmupress.com/jiwe">https://journals.mmupress.com/jiwe</a></strong></p> <p>Indexed in:<br /><a style="margin-right: 10px;" href="https://myjurnal.mohe.gov.my/public/browse-journal-view.php?id=1038" target="_blank" rel="noopener"><img style="width: 112px; display: inline;" src="https://journals.mmupress.com/resources/myjurnal-logo.png" alt="" width="200" height="26" /></a> <a style="margin-right: 10px;" href="https://journals.mmupress.com/index.php/jiwe/management/settings/context/#" target="_blank" rel="noopener"><img style="width: 95px; display: inline;" src="https://journals.mmupress.com/resources/mycite-logo.jpg" alt="" width="200" height="34" /></a><a style="margin-right: 10px;" href="https://search.crossref.org/search/works?q=2821-370X&amp;from_ui=yes"><img style="display: inline;" src="https://assets.crossref.org/logo/crossref-logo-landscape-100.png" /></a><a style="margin-right: 10px;" href="https://scholar.google.com/scholar?hl=en&amp;as_sdt=0%2C5&amp;q=2821-370X&amp;btnG="><img style="display: inline; width: 137px;" src="https://journals.mmupress.com/resources/google-scholar-logo.png" /></a><a style="margin-right: 10px;" href="https://www.ebsco.com/"><img style="display: inline; width: 100px;" src="https://journals.mmupress.com/resources/ebscohost-logo.png" /></a> <a style="margin-right: 10px;" href="https://www.doaj.org/toc/2821-370X"><img style="width: 89px; display: inline;" src="https://journals.mmupress.com/resources/doaj-logo.jpg" alt="" width="200" height="22" /></a><a style="margin-right: 10px;" href="https://openalex.org/works?page=1&amp;filter=primary_location.source.id:s4387278993"><img style="display: inline; width: 100px;" src="https://journals.mmupress.com/resources/openalex-logo.png" /></a></p> https://journals.mmupress.com/index.php/jiwe/article/view/2460 Editorial Preview for October 2025 Issue 2025-10-05T13:50:22+08:00 Su-Cheng Haw sucheng@mmu.edu.my Hui-Ngo Goh hngoh@mmu.edu.my <p>The October 2025 issue of the Journal of Informatics and Web Engineering (JIWE) concludes the journal’s triannual publication cycle and precedes its transition to a quarterly schedule commencing in 2026. This issue presents 21 research papers within its regular section, covering a wide range of areas including Artificial Intelligence (AI), Machine Learning (ML), Software Engineering, Recommender Systems, Cybersecurity, and Web Technologies. The collection reflects JIWE’s ongoing effort to advance research in the informatics and web engineering domain. A special thematic section, guest-edited by Prof. Ts. Dr.
Hairulnizam Bin Mahdin, centers on “AI-Enhanced Computing and Digital Transformation”. The featured studies examine the role of AI in driving organizational efficiency, innovation, and sustainable digital growth. Several papers in this issue also align with the United Nations Sustainable Development Goals (SDGs)—notably SDG 8 (Decent Work and Economic Growth) through studies on intelligent automation, SDG 9 (Industry, Innovation, and Infrastructure) via research on digital platforms and system design, and SDG 11 (Sustainable Cities and Communities) through innovations supporting smart and sustainable urban development. Beginning in January 2026, JIWE will adopt a quarterly publication schedule (January, April, July, and October), reflecting the journal’s steady expansion and continued commitment to disseminating rigorous, high-impact research in informatics and web engineering.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1722 Pavement Distress Analysis in Malaysia: A Novel DeepSeg-CrackNet Model for Crack Detection and Characterization Using Real-World Data 2025-04-13T11:41:34+08:00 Arselan Ashraf arselan@mmu.edu.my Ali Sophian ali_sophian@iium.edu.my Teddy Surya Gunawan tsgunawan@iium.edu.my Syed Asif Ahmad Qadri syedasif@m110.nthu.edu.tw <p>Pavement distress analysis plays a vital role in maintaining road infrastructure, especially in high-traffic areas such as Selangor and Kuala Lumpur, where heavy traffic and tropical weather accelerate pavement deterioration. This work introduces DeepSeg-CrackNet, a novel hybrid deep learning model that uses a Deep Gradient ResNet to detect cracks and a residual block with a Modified Attention Mechanism to classify them into types, simplifying the detection and labelling of pavement damage. The model was trained on real-world data collected from Malaysian roads, captured using a GoPro Hero 8 mounted on a vehicle with GPS mapping for clear traceability, and supplemented with the CRACK500 dataset to cover a wider range of conditions. DeepSeg-CrackNet performs strongly, achieving a Mean IoU of 0.8388889 for segmentation and 85% accuracy in classifying cracks such as alligator, longitudinal, and transverse types, with precision ranging from 0.84 to 0.89 and recall between 0.80 and 0.96. It also measures cracks in meters or square meters, which supports informed repair planning, such as replacing large alligator-cracked sections or sealing smaller longitudinal cracks to conserve resources. Compared to models such as CrackNet, DeepSeg-CrackNet stands out, especially for alligator cracks, with a precision of 0.84 and recall of 0.96, surpassing CrackNet’s 0.778 and 0.772. Overall, DeepSeg-CrackNet enables data-driven management of Malaysia’s roads, improving safety and ensuring longer-lasting infrastructure through smarter, proactive repair approaches that enhance urban travel.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1579 Enhancing Zero Trust Cybersecurity using Machine Learning and Deep Learning Approaches 2025-03-01T18:41:30+08:00 Danial Haider danial.haider@au.edu.pk Shougfta Mushtaq shouguftawajid@yahoo.com Hasnat Ali hasnat.ali@riphah.edu.pk Mazliham Mohd Su’ud mazliham@mmu.edu.my <p>Zero-Trust Architecture (ZTA) is progressively being adopted to strengthen network security by assuming no implicit trust within or outside an organization’s boundary.
However, ZTA faces substantial challenges in detecting sophisticated and evolving cyber threats, particularly due to its reliance on traditional security mechanisms that struggle to manage internal threats and advanced attack techniques. To address these shortcomings, the proposed study explores the combination of advanced machine learning (ML) and deep learning (DL) techniques to improve anomaly detection capabilities within ZTA environments. The study employs the CICIDS2017 dataset, which contains diverse and realistic network traffic patterns, to assess the effectiveness of nine different models: Naïve Bayes, Logistic Regression, Random Forest, Decision Tree, Gated Recurrent Unit (GRU), Multi-layer Perceptron (MLP), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), and Convolutional Neural Network (CNN). Through comprehensive investigation and performance evaluation, the study shows that tree-based methods such as Random Forest and Decision Tree, together with deep learning models such as LSTM and GRU, significantly outperform conventional models in terms of accuracy and detection ability. The best-performing models attained up to 99.99% accuracy in recognizing malicious network activity. This exceptional performance demonstrates the strong potential of integrating intelligent learning-based methods into ZTA to create scalable, dynamic, and highly accurate security solutions. These findings illustrate the value of ML/DL in enhancing the threat detection layer of ZTA, ultimately providing stronger resistance to advanced cyber threats.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1636 The Role of Generative AI in e-Commerce Recommender Systems: Methods, Trends and Insights 2025-05-12T11:16:46+08:00 Kai-Ze Liau liaukaize@gmail.com Heru Agus Santoso heru.agus.santoso@dsn.dinus.ac.id <p>Recommender systems have existed for decades, shaping how people consume digital content, receive information, and engage in day-to-day activities. Undoubtedly, recommender systems also play a crucial role in e-commerce applications, with industry players such as Amazon, Alibaba, and eBay using them within their ecosystems to deliver suitable and value-driven insights. However, recommender systems face persistent challenges such as data sparsity and cold-start problems. As a result, research is ongoing to solve these issues and provide high-quality recommendations to consumers. This review aims to identify prevailing gaps surrounding these issues by analysing existing research on generative Artificial Intelligence (AI) recommender systems within an e-commerce context. It explores the underlying frameworks of common generative AI techniques such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Transformers, and diffusion models. VAEs and Transformers hold great potential within e-commerce, as noted by most researchers, due to their ease of training and the quality of their generations. This review is intended to help improve recommender systems, enhancing the quality of life of digital users by providing better recommendations in e-commerce and maximizing stakeholder value.
It also includes potential future work for researchers to advance existing knowledge in this sector.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1467 Training the Brain: A Machine Learning Approach to Predicting Wellbeing Through Intentional Thought Pattern Modification 2025-02-02T10:15:27+08:00 Sellappan Palaniappan sellappan.p@help.edu.my Rajasvaran Logeswaran logeswaran.nr@help.edu.my Kasthuri Subaramaniam s_kasthuri@um.edu.my Oras Baker o.alhassani@rave.ac.uk Bui Ngoc Dung dnbui@utc.edu.vn <p>This study provides a quantitative framework for predicting wellbeing outcomes through intentional cognitive pattern alteration. We demonstrate 81.67% accuracy in predicting wellbeing states in a three-level classification (Low, Medium, High), using a Random Forest classifier with 16 features drawn from psychological, physiological, and behavioural metrics. Our model singles out gratitude cultivation (21.3%) and peace duration (23.7%) as the strongest predictors of positive wellbeing outcomes, providing empirical support for traditional cognitive training approaches. Analysis of 1,000 synthetic cases shows that consistent practice of positive thought patterns over 3-6 months can strongly shift wellbeing states, with key behavioural markers showing progressive improvement, including increased joy moments, reduced anxiety episodes, and enhanced sleep quality. Our results establish that cognitive training outcomes can be quantitatively tracked and predicted with meaningful accuracy, hence providing a data-driven approach to mental health intervention design. Additionally, the research shows that machine learning for mental health analysis presents a scalable method for wellbeing prediction. By integrating multiple data modalities, our model presents an integrative view of cognitive transformation that bridges the gap between qualitative opinion and quantitative prediction. The contribution of this research is in demonstrating the viability of applying artificial intelligence (AI) models to facilitate enhanced mental health interventions through adaptive and personalized cognitive training programs. More generally, our results add to the emerging science of neuroplasticity-based cognitive training by delivering an evidence-based method for evaluating and predicting wellbeing improvement. The findings have implications that reach beyond the research setting to clinical interventions, self-help programs, and mobile health applications, offering a new mechanism for improving mental resilience and real-world life satisfaction through rigorous cognitive training.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1694 Hybrid-Based Movie Recommender System: Techniques, Case Studies, Evaluation Metrics, and Future Trends 2025-04-21T09:24:09+08:00 Cheng-Yung Lai laichengyuny@gmail.com Noramiza Hashim noramiza.hashim@mmu.edu.my Lucia Dwi Krisnawati krisna@staff.ukdw.ac.id <p>The necessity for sophisticated recommender systems in the movie recommendation sphere has become particularly pronounced, as people nowadays prefer to watch movies online and expect more personalized recommendations. Efficient recommender systems make use of advanced machine learning (ML) techniques in the pursuit of accurate and meaningful recommendations.
This paper endeavours to give a comprehensive overview of recommender system technologies, concentrating on the ML methods that underpin them. Different strategies are covered in this work, including collaborative filtering (CF), content-based filtering (CB), hybrid approaches, and Generative AI. The merits and demerits of each technique are listed and explained briefly. In addition, results from real-world applications are also presented in this paper. To evaluate the performance of the techniques, some of the important datasets used in evaluating recommender systems are discussed, along with measurement metrics, such as Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), used to determine the effectiveness of each technique. This paper synthesizes existing research to evaluate the advantages and limitations of diverse recommendation techniques, and aims to offer suggestions for improving the design and performance of movie recommendation systems.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1596 Hybrid Filtering for Personalized and Health-Conscious Recipe Recommendations in UniEats 2025-03-31T03:18:14+08:00 Hui Lek Liew huilekliew@student.usm.my XinYing Chew xinying@usm.my Khai Wah Khaw khaiwah@usm.my Shiuh Tong Lim shiuhtong1997@gmail.com <p>This research paper introduces UniEats, a recipe recommendation website designed to help university students better organize their meals and adopt healthier eating habits. The system aims to address common challenges students face, including time constraints, limited cooking skills, and insufficient nutritional awareness, which often lead to unhealthy food choices. At the core of UniEats is its recommendation engine, which employs a hybrid filtering approach, combining content-based and collaborative filtering techniques to provide personalized recipe suggestions based on users' dietary preferences, rating history, and recipe attributes. UniEats offers a range of features, including recipe search, weekly meal planning, nutritional analysis through dashboards, and automatic grocery list generation. By enabling students to explore diverse culinary options, create balanced meal plans, and understand the nutritional content of their meals, UniEats empowers them to make informed dietary decisions. This research paper discusses the project's background, motivation, and objectives, emphasizing the importance of addressing students' dietary challenges. It also reviews existing recommendation systems and algorithms, justifying the choice of hybrid filtering for personalized meal suggestions. Additionally, the research paper details the system's design, implementation, and testing procedures, highlighting the development process. UniEats is a practical solution that leverages machine learning and data-driven methods to enhance students' culinary experiences, support skill development, and promote nutritional awareness.
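As a rough illustration of how such a weighted hybrid can be composed, the following is a minimal sketch in Python with NumPy; the matrices, blending weight, and rescaling are illustrative assumptions, not the UniEats implementation:

```python
import numpy as np

# Toy user-recipe rating matrix (rows: users, columns: recipes); 0 = unrated.
ratings = np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 2],
    [0, 3, 4, 0],
], dtype=float)

# Toy recipe feature matrix (e.g., one-hot diet tags or coarse nutrition bins).
features = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
], dtype=float)

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def content_scores(user):
    """Score each recipe by similarity to the user's highly rated recipes."""
    liked = np.where(ratings[user] >= 4)[0]
    profile = features[liked].mean(axis=0) if len(liked) else features.mean(axis=0)
    return np.array([cosine(profile, f) for f in features])

def collaborative_scores(user):
    """Estimate preferences from similar users' ratings (user-based CF)."""
    sims = np.array([cosine(ratings[user], ratings[u]) for u in range(len(ratings))])
    sims[user] = 0.0                      # exclude the user's own row
    est = sims @ ratings / (np.abs(sims).sum() + 1e-9)
    return est / 5.0                      # rescale to [0, 1] like content scores

def hybrid_scores(user, alpha=0.5):
    """Weighted blend of content-based and collaborative signals."""
    return alpha * content_scores(user) + (1 - alpha) * collaborative_scores(user)

print(hybrid_scores(user=0).round(3))
```

A weighted blend like this is only one way to hybridize; switching or cascading between the two signals (e.g., content-based for cold-start users) is an equally common design.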
By tackling key challenges in meal planning and healthy eating, UniEats aims to improve the overall well-being of university students.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1808 Implementation of Lightweight Machine Learning Models for Real-time Text Classification on Resource-Constrained Devices 2025-04-21T16:57:07+08:00 Marwah Zaid Mohammed Al-Helali marwahalhelali@gmail.com Naveen Palanichamy p.naveen@mmu.edu.my K. Revathi neyadharshini@gmail.com <p>This paper addresses the growing need for implementing intelligent Natural Language Processing (NLP) systems on low-power, memory-limited devices such as Raspberry Pi, mobile phones, and IoT edge hardware. As edge computing and smart devices proliferate, there is an urgent need for advanced NLP technology that does not require constant cloud access, is computationally efficient, and provides results in real time. While deep learning and cloud-based models typically offer high text-classification accuracy and have demonstrated exceptional performance across a range of NLP tasks, they are often too resource-intensive for real-time deployment in constrained environments. To overcome these limitations, we explore a set of lightweight machine learning (ML) models—Multinomial Naive Bayes, Logistic Regression, and Decision Tree—to perform sentiment classification on a subset of the Amazon Reviews Polarity dataset. Following thorough data preprocessing and Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, two optimization techniques are employed: feature selection via Chi-Squared tests and simulated post-training quantization. Our experimental results show that resource consumption can be substantially reduced, with minimal accuracy loss, thereby demonstrating feasibility for edge-based text analytics and offline functionality. We provide a detailed comparative analysis that highlights how classical ML models remain viable in scenarios where modern deep learning architectures cannot be efficiently deployed.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1657 Review on Secure and Efficient IoT-based Healthcare System with the Integration of Machine Learning and Firewalls 2025-04-13T22:17:40+08:00 Muhammad Awais mawaiskhan1808@gmail.com Syeda Samar Fatima samar.fatima@cust.edu.pk Jawaid Iqbal jawaid.iqbal@riphah.edu.pk <p>The integration of the Internet of Things (IoT) into healthcare promises a revolutionized approach to patient monitoring, diagnosis, and treatment, marking a significant development in healthcare delivery. This review focuses on how the integration of IoT with Machine Learning (ML) and stringent security measures tackles the challenges of data privacy and cyber threats in healthcare. Current methodologies highlight how essential advanced sensors, cloud computing, and wireless technologies are for IoT-based healthcare systems to secure patient data. Patient records, once kept in files, are now forwarded to cloud database systems so that they can be accessed in any emergency while being kept safe from cyber-attacks, with data security preserved because only authorized users can access them. To achieve this security, firewalls and encryption technologies are used.
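For the encryption side, the following is a minimal sketch assuming Python's cryptography package; the record fields and in-process key handling are hypothetical, and a real deployment would use managed keys and TLS-protected channels:

```python
# Sketch: encrypting a patient record before uploading it to a cloud database,
# using authenticated symmetric encryption (Fernet). Fields are hypothetical.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, issued and stored by a key vault
cipher = Fernet(key)

record = {"patient_id": "P-001", "heart_rate": 72, "spo2": 98}
token = cipher.encrypt(json.dumps(record).encode())  # ciphertext sent to the cloud

# Only holders of the key (authorized users) can recover the record.
restored = json.loads(cipher.decrypt(token).decode())
assert restored == record
```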
These protection systems are applied to block unauthorized access, protect data communication channels, and keep private patient information confidential at all times. IoT-based, ML-enabled systems perform far better in real-time monitoring, predictive analysis, and personalized treatment than conventional healthcare strategies. This discussion delineates the need for implementing firewalls and encryption techniques for data security and patient privacy. This critical review underlines that while IoT truly has enormous potential to change healthcare, continuous innovation and rigorous security protocols will be required to maximize these benefits.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1705 Addressing IoT Security Challenges through Advanced Machine Learning and Encryption 2025-05-05T09:21:49+08:00 Ahmad Aziem Khushairi Anuar Khushairi.anuar@student.iium.edu.my Ahmad Anwar Zainuddin mr.anwarzain@gmail.com Ahmad Adlan Abdul Halim a.adlan@live.iium.edu.my Dek Rina dekrina@mhs.usk.ac.id <p>The rapid growth of Internet of Things (IoT) devices, including smartwatches, home assistants, and connected appliances, has brought significant convenience to daily life, but it has also introduced serious security challenges. These devices often transmit sensitive data, making them vulnerable to theft, misuse, and unauthorized access. Current security measures are insufficient to address the complex and evolving nature of IoT systems, leaving many of them exposed to potential breaches and cyberattacks. This review explores recent developments in IoT security, focusing on how advanced technologies, such as machine learning, can be utilized to enhance the protection of IoT systems. The main objective of this paper is to examine potential solutions to the security problems that arise in IoT environments. It includes a thorough analysis of recent research and technological innovations in the field, with a particular emphasis on how different security methods are applied across IoT systems. By identifying the most common security vulnerabilities and outlining their impact on IoT networks, the review suggests improved methods to safeguard IoT data and ensure privacy. The findings aim to support researchers, developers, and businesses in designing more secure IoT solutions, and contribute to the establishment of stronger data protection policies. Ultimately, the review serves as a resource for those seeking to enhance the security of IoT devices and systems in an increasingly interconnected world.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1654 Research Article Recommender Systems: A Comprehensive Review of Models, Approaches and Evaluation Metrics 2025-05-19T20:04:29+08:00 Sir-Yuean Lim limsiryuean000@gmail.com Noramiza Hashim noramiza.hashim@mmu.edu.my Lanh Le Thanh lethanhlanh@dntu.edu.vn <p>With the advent of the current digital era, individuals across the developed world are commonly equipped with devices that can access vast amounts of information at their fingertips. What was once considered an impossible feat has been realized through remarkable technological advancements. This positive transformation has had a profound impact on education, where traditional knowledge sources, such as libraries, are no longer a primary determinant of a student’s academic success.
Instead, the internet has become the medium for learning, practicing, and topic exploration. However, the sheer volume of the ever-increasing information available online can easily overwhelm a user, particularly when conducting detailed research on a specific topic. Therefore, the need for a reliable research article recommender system cannot be overstated, as such systems help students and researchers navigate the expansive knowledge space and achieve their learning and research objectives. This review paper aims to study the most common types of recommendation techniques used in research article recommender systems (RS). A total of ten related works by other researchers, together with relevant evaluation metrics, are studied and assessed rigorously using comparative analysis, granting further insight into current work in or related to the domain of this paper. Finally, this paper identifies and elaborates on current trends and gaps in the discussion section.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1544 Optimised Heterogeneous Handover in Mobile IPV6 Using Enhanced Predictive Fast Proxy with Media Independent Handover (MIH) Support 2025-05-20T12:20:08+08:00 Aliyu Aminu Abdulhadi aliyuabdulhadi@gmail.com Nura Mukhtar nura.mukhtar@umyu.edu.ng <p>In wireless networks, handover performance is essential for enabling real-time traffic applications. Long handover delays make it impossible for a Mobile Node (MN) to send and receive data packets, which is very undesirable for real-time applications like video conferencing and VoIP. Therefore, in order to guarantee better handover performance, decreasing handover duration is crucial. The Internet Engineering Task Force (IETF) has standardized Fast Proxy Mobile IPv6 (FPMIPv6) as an enhancement to the original Proxy Mobile IPv6 (PMIPv6) to attain better handover performance. FPMIPv6 functions in two modes, predictive and reactive, using a link-layer triggering mechanism. The predictive mode uses link-layer triggers to improve FPMIPv6's handover performance. However, FPMIPv6 suffers from packet loss and signalling overhead, and it is unable to manage heterogeneous handovers effectively because it lacks a unified Layer 2 triggering mechanism, which can result in handovers completing either too early or too late. Consequently, this research integrates FPMIPv6 with MIH by expanding its current standard services. In addition, a new predictive handover architecture that generates timely link triggers using information from adjacent networks was implemented, enabling crucial handover operations to finish before the present link deteriorates, and a piggyback technique was used to reduce signalling overhead.
Performance analysis using simulations indicates that the pro-FPMIPv6-MIH achieved improved handover performance, particularly in decreasing handover delay, packet loss, and signalling overhead.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1760 Developing A Predictive Model for Football Players’ Market Value Using Machine Learning 2025-04-11T21:07:07+08:00 Muhammad Afif Jazimin Idris 1211103419@student.mmu.edu.my Sew Lai Ng slng@mmu.edu.my <p>Football is the world’s most popular sport, and evaluating the market value of players is crucial for clubs and managers in making informed decisions regarding transfers, contracts, and financial planning. This study aims to develop a predictive model to estimate the market value of football players using machine learning (ML) algorithms and real-life performance statistics from the top five European leagues, namely the English Premier League, Italian Serie A, Spanish La Liga, German Bundesliga, and French Ligue 1, between the 2017/18 and 2019/20 seasons. Informed by a review of past research, various ML methods, including Random Forest, LightGBM, XGBoost, and Gradient Boosting Decision Tree (GBDT), are developed. Data preprocessing techniques, including data cleaning, feature selection, feature encoding, splitting, and standardization, are applied to ensure data quality and consistency. To tune the hyperparameters of the models, RandomizedSearchCV is applied alongside cross-validation. The model evaluation is conducted using regression metrics such as mean absolute error (MAE), root mean squared error (RMSE), and coefficient of determination (R²) to determine the most accurate model. The best-performing model is further utilised to analyse the correlation between the features and market value, offering insights into the key features that significantly impact the market value for each position.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1710 Deep Learning Approaches to Autocorrelation Function and Signal-to-Noise Ratio Estimation in Noisy Images 2025-04-18T19:16:17+08:00 Kai Liang Lew 1132703002@student.mmu.edu.my Kok Swee Sim sksbg2022@gmail.com Shing Chiang Tan sctan@mmu.edu.my <p>Accurate estimation of the signal-to-noise ratio (SNR) in Scanning Electron Microscopy (SEM) is crucial because it quantifies image quality. SEM images commonly suffer from Gaussian noise, and researchers have developed several methods to estimate the SNR value. With the introduction of deep learning, most of the limitations of the classical methods can be addressed. This paper proposes a novel deep learning model, the CNN-based Calibration Map Network (CalibNet), to estimate the SNR value of SEM images using a calibration map between the classical SNR and the autocorrelation-function SNR. The architecture consists of convolutional layers, rectified linear unit (ReLU) activations, max-pooling layers, adaptive pooling, and a regression head to predict the SNR value accurately. The proposed model is trained, validated, and tested on two SEM image datasets, the Biofilm SEM dataset (67 images) and the NFFA-EUROPE SEM dataset (961 images). Each image was artificially corrupted with Gaussian noise variance ranging from 0.001 to 0.01 to simulate realistic SEM imaging conditions.
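The corruption step and a reference SNR computation can be sketched as follows, using NumPy only; this is an illustrative stand-in, not the paper's CalibNet pipeline or its ACF-based estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(image, variance):
    """Add zero-mean Gaussian noise of a given variance to a [0, 1] image."""
    noisy = image + rng.normal(0.0, np.sqrt(variance), image.shape)
    return np.clip(noisy, 0.0, 1.0)

def reference_snr_db(clean, noisy):
    """Simple reference SNR (signal power over noise power), in dB."""
    noise = noisy - clean
    return 10.0 * np.log10(clean.var() / noise.var())

clean = rng.random((256, 256))       # stand-in for a normalized SEM image
for v in (0.001, 0.005, 0.01):       # the variance range used in the paper
    print(v, round(reference_snr_db(clean, corrupt(clean, v)), 2))
```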
The proposed model was compared with the Classical SNR, Autocorrelation Function (ACF), Nearest Neighbour (NN)-ACF, First-Order Linear Interpolation (LI)-ACF, and Quadratic-Sigmoid (Quarsig)-ACF methods. The results show that CalibNet outperformed all the classical methods in terms of mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and R-squared (R²). Statistical analyses further confirmed that CalibNet predictions closely align with the Classical SNR values. Future work includes exploring more advanced model architectures, alternative calibration techniques, and real-time SNR estimation applications.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1680 Loan Default Prediction Using Machine Learning Algorithms 2025-05-12T10:50:46+08:00 Zhi Zheng Kang kzheng975@gmail.com Sin Yin Teh tehsyin@usm.my Samuel Yong Guang Tan ygtan@tarc.edu.my Wei Chien Ng ngweichien@usm.my <p>Financial institutions constantly face the risk of default by borrowers, which can result in significant financial losses. It is essential to develop an appropriate predictive model for loan default to reduce these risks and minimise financial losses. The objective of this study is to identify the most suitable machine learning model to predict loan default by comparing four models: Random Forest, Decision Tree, Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). It also examines the key features influencing loan default prediction. The dataset used in this study is sourced from Kaggle and consists of 148,670 rows with 34 features. As class imbalance is common in such prediction tasks, the Synthetic Minority Over-sampling Technique (SMOTE) is applied during model training to enhance predictive performance. Model performance is evaluated using five significant assessment metrics: accuracy, precision, F1-score, recall, and the area under the receiver operating characteristic curve (ROC AUC). The outcomes indicate that LightGBM performs best among the models, with the highest accuracy (0.9764), in addition to precision (0.9747) and recall (0.9503) scores. Feature importance analysis is conducted using permutation importance. It identifies interest, credit type, interest rate spread, and upfront charges as the four most significant features of loan default. These findings provide useful information for financial institutions, aiding risk assessment and decision-making to mitigate potential losses.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1541 Indonesian Language Sign Detection using Mediapipe with Long Short-Term Memory (LSTM) Algorithm 2025-03-29T09:50:27+08:00 Wargijono Utomo wargiono@unkris.ac.id Yogasetya Suhanda yogasetyas@swadharma.ac.id Harun Ar-Rasyid harun@swadharma.ac.id Andy Dharmalau andy.d@swadharma.ac.id <p>People with disabilities mostly communicate using sign language, but the public still has little understanding of the Indonesian Sign Language System (ISLS), which causes obstacles in daily interactions. Advances in artificial intelligence technology, especially artificial neural networks, open opportunities in sign language recognition, but these are still at the development stage. This study aims to build an ISLS sign language recognition model using the LSTM approach and MediaPipe Hands.
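A minimal sketch of an LSTM classifier over MediaPipe Hands keypoints (21 landmarks × 3 coordinates = 63 features per frame), assuming TensorFlow/Keras; the sequence length and layer sizes here are illustrative assumptions, not the paper's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

FRAMES, FEATURES, CLASSES = 30, 63, 36   # 21 landmarks x (x, y, z); 36 gestures

model = tf.keras.Sequential([
    layers.Input(shape=(FRAMES, FEATURES)),   # one keypoint vector per frame
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```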
Hand keypoint data were collected for 36 alphabetic and numeric gestures, with 25 sequences per gesture. The dataset is divided into three subsets: 80% training, 10% validation, and 10% testing. The model was developed to handle sequential hand gesture data using the LSTM architecture. The study achieved a model accuracy of 97.1%, with macro-average precision of 97%, recall of 96.6%, and F1-score of 96.4%, and weighted-average precision of 97.4%, recall of 97.1%, and F1-score of 97%. The results show that the combination of LSTM and MediaPipe can detect ISLS gestures with high accuracy, offering a potential solution for automatic sign language translation that can improve the inclusiveness of communication for people with disabilities. Further research could use a more accurate hand recognition framework, improve data pre-processing, and explore deep learning (DL) methods such as SSD, YOLO, or Faster-RCNN. In addition, pose and facial recognition could be added to make gesture recognition more comprehensive.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1635 Mapping Relational Database to Full-Text XML for Open Journal System Cross-Platform Article Distribution 2025-04-24T19:28:40+08:00 Chee-Xiang Ling cheexiangling@gmail.com Kok-Why Ng kwng@mmu.edu.my Heru Agus Santoso heru.agus.santoso@dsn.dinus.ac.id <p>In academic publishing, the automation of full-text eXtensible Markup Language (XML) is increasingly essential, as generating full-text XML for article distribution is a complex and time-consuming process that requires metadata extraction from a relational database and transformation into hierarchical structures such as the Journal Article Tag Suite (JATS). The lack of automation in this transformation process may introduce inconsistencies, inaccuracies, and human error. The primary aim is to develop an automated system for transforming metadata from a relational database to full-text XML, reducing errors and speeding up the generation process. This is crucial since the demand for automation has been increasing year by year. Furthermore, the motivation behind this research is the growing adoption of the Open Journal System (OJS), one of the most popular platforms for managing scholarly journals, which uses a relational database to store metadata and article information. Therefore, developing an automated system is essential for transforming this structured metadata to full-text XML. To address this issue, various mapping techniques will be explored to enable the transformation of relational database structures into full-text XML formats. The proposed method involves metadata extraction, mapping logic, and various validation mechanisms to ensure the XML is well-structured and accurate. The preliminary result indicates that the metadata has been successfully mapped from a relational database to XML. However, the JATS-specific tagging has not yet been implemented and will be addressed in future work.
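To illustrate the kind of mapping involved, here is a minimal sketch using Python's standard xml.etree.ElementTree to render one metadata row as a JATS-style fragment; the row's column names and the chosen tag subset are illustrative, not the system's actual mapping logic:

```python
import xml.etree.ElementTree as ET

# A flat row as it might come from an OJS-style relational query (illustrative).
row = {"title": "Sample Article", "doi": "10.0000/example",
       "authors": [("Jane", "Doe"), ("John", "Smith")]}

article = ET.Element("article")
front = ET.SubElement(article, "front")
meta = ET.SubElement(front, "article-meta")
ET.SubElement(meta, "article-id", {"pub-id-type": "doi"}).text = row["doi"]
title_group = ET.SubElement(meta, "title-group")
ET.SubElement(title_group, "article-title").text = row["title"]

contrib_group = ET.SubElement(meta, "contrib-group")
for given, surname in row["authors"]:
    contrib = ET.SubElement(contrib_group, "contrib", {"contrib-type": "author"})
    name = ET.SubElement(contrib, "name")
    ET.SubElement(name, "surname").text = surname
    ET.SubElement(name, "given-names").text = given

print(ET.tostring(article, encoding="unicode"))
```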
This research is significant to the publishing community, as it brings convenience by reducing manual work and ensuring metadata standardization.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1765 Exploring Generative AI Recommender Systems in E-Commerce: Model, Evaluation Metric, and Comparative Review 2025-04-11T09:57:02+08:00 Wan-Er Kong kong.wan.er@student.mmu.edu.my Tong-Ern Tai TAI.TONG.ERN@student.mmu.edu.my Palanichamy Naveen p.naveen@mmu.edu.my Kok-Why Ng kwng@mmu.edu.my Lucia Dwi Krisnawati krisna@staff.ukdw.ac.id <p>Generative Artificial Intelligence (GAI) is changing what can be done with Recommender Systems (RS) in e-commerce by allowing much more interactive, situationally aware, and highly tailored experiences for users. The purpose of this paper is to provide overall insight into how GAI, including Large Language Models (LLMs), Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and other emerging methods, is affecting the building and running of modern e-commerce RS. This paper classifies generative models into groups based on the type of models used, data modality, and specific domain of application. Their involvement in tasks such as personalized product ranking, content generation, and cold-start problem avoidance is discussed comprehensively as well. In addition, we also analyse innovations in design trends, practical challenges, such as explainability, real-time adaptability, computational scalability, and possible trade-offs, as well as pathways ahead through the lens of current literature and empirical systems. By contrasting GAI-RS with traditional RS, we highlight their advantages in handling several problems, such as data sparsity, generating diverse recommendations, and enabling dynamic user interaction. This paper should serve to broaden awareness among scholars and practitioners about the ever-changing convergence of GAI and intelligent recommendation structures within e-commerce, emphasizing both their transformative potential and operational complexities in practice.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1638 User Behaviour Prediction in E-Commerce Using Logistic Regression 2025-05-26T13:52:48+08:00 Wei-Wen Lee 1211103858@student.mmu.edu.my Noramiza Hashim noramiza.hashim@mmu.edu.my Shaymaa Al-Juboori shaymaa.al-juboori@plymouth.ac.uk <p>From a psychological perspective, human behaviour reflects underlying thoughts and decision-making patterns; for example, consumer behaviour may correlate with purchase decisions. In the fast-evolving e-commerce industry, predicting user behaviour is essential for enhancing marketing strategies, improving customer experiences, and increasing sales. However, traditional heuristic approaches to analysing buyer behaviour (e.g., market basket analysis) are often rigid and fail to adapt to complex consumer interactions. This research work develops a predictive model that analyses user behaviour based on data such as historical purchasing patterns and demographic attributes. Based on a review of previous studies, Logistic Regression (LR) is utilized as the primary machine learning algorithm to estimate the likelihood of a user performing specific actions, including churn and conversion.
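A minimal sketch of such a logistic-regression churn model, assuming scikit-learn; the synthetic features stand in for the purchasing and demographic attributes described and are not the study's dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix

# Synthetic stand-in features: [purchases_90d, days_since_last_order, sessions_30d]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = churn

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(confusion_matrix(y_te, pred))
print(classification_report(y_te, pred))      # precision, recall, F1-score
churn_prob = model.predict_proba(X_te)[:, 1]  # estimated likelihood of churning
```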
The dataset undergoes preprocessing steps, including data cleaning, feature selection, and normalization, to enhance model accuracy. Evaluation metrics, including accuracy, confusion matrix, precision, recall, and F1-score, are used to ensure the model’s performance is reliable and effective. Unlike traditional heuristic approaches, this data-driven method offers a scalable and adaptable solution for behaviour prediction. The findings of this research have the potential to revolutionize e-commerce by providing businesses with actionable insights into consumer behaviour. By leveraging predictive analytics, companies can implement targeted marketing campaigns, personalize recommendations, and improve customer retention strategies. Additionally, this study highlights the significance of behavioural modelling in detecting early signs of customer churn, allowing businesses to take proactive measures. Ultimately, this research contributes to the growing field of data-driven decision-making, offering a scalable and adaptable solution for understanding and predicting user behaviour in online shopping environments.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1770 Identifying the Barriers to Digital Financial Inclusion in The Most Financially Excluded Country Using Machine Learning Algorithm 2025-04-24T07:41:19+08:00 Yin Ting Chin yinting0221@gmail.com Hui Shan Lee hslee@utar.edu.my <p>Despite the promise of digital financial services (DFS) to improve inclusive growth and reduce poverty, the adoption of DFS remains low in Nigeria. The objective of this study is to examine the barriers to ability, access, and usage of DFS in Nigeria. This study uses secondary data from the Global Findex for 2017 and 2021 to assess how socioeconomic factors predict the target variables of DFS (ability, access, and usage). Using a machine learning (ML) algorithm, namely the J48 decision tree in the Waikato Environment for Knowledge Analysis (WEKA) software, this study analyses the predictive strength of variables such as gender, education, income quintile, employment status, and urbanicity in determining ability, access to, and usage of DFS. The main findings show that the J48 decision tree improves in correctly classifying instances from the 2017 data to the 2021 data. The root nodes for all sets of data show that education is the main predictor of DFS. The first-level split is gender when the target variables are ability and usage, but age when the target variable is access. The results show that education is the main barrier to DFS, whereas gender and age are secondary impediments to its adoption. Policymakers can benefit from the findings of this study to design targeted interventions, such as raising education levels and organizing more digital financial literacy programs, to accelerate DFS adoption among marginalized groups. The novelty of this study lies in utilizing a ML algorithm to identify the barriers to DFS, with classification accuracy improving from the 2017 data to the 2021 data.
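The study runs J48 (WEKA's C4.5 implementation) in WEKA itself; as a rough Python analogue, a CART decision tree from scikit-learn can show how an education-dominated root split surfaces. The toy data below is hypothetical and not the Global Findex data, and CART is a stand-in for C4.5:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical encoded respondents.
# Columns: education level, gender, age, income quintile, employed, urban
X = [[1, 0, 25, 2, 1, 1], [3, 1, 40, 5, 1, 1], [1, 1, 60, 1, 0, 0],
     [2, 0, 35, 3, 1, 0], [3, 0, 30, 4, 1, 1], [1, 1, 50, 1, 0, 0]]
y = [0, 1, 0, 1, 1, 0]   # 1 = uses digital financial services

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "education", "gender", "age", "income_q", "employed", "urban"]))
```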
By exploring key determinants through ML, this study contributes to the broader agenda of financial inclusion and promotes the accomplishment of the sustainable development goals.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1665 Robust Lane Detection under Varying Lighting Conditions Using Adaptive Vision-Based Techniques 2025-04-20T21:26:01+08:00 Habibatelrahman Mosaad Abdelfattah Mohamed Abdelaal 1211300789@student.mmu.edu.my Tee Connie tee.connie@mmu.edu.my Michael Kah Ong Goh michael.goh@mmu.edu.my <p>Reliable lane detection is crucial to autonomous driving but continues to be challenging under varying lighting conditions. Fluctuations in illumination due to bright sunlight, shadows, or low light at night can degrade visual quality and adversely affect the accuracy of lane detection results. This research proposes an adaptive approach for lane detection under different lighting scenarios. For daytime, a Region of Interest (ROI) masking and line averaging technique improves the stability and visibility of lane markings. For nighttime conditions, a Probabilistic Hough Transform-based method improves lane detection in low-light environments. An evaluation tool has been developed to check whether certain parameters correlate with day or night, enabling dynamic selection of the most suitable detection technique. The proposed method improves image preprocessing and combines several computer vision algorithms for accurate lane tracing. This solution aids detection in shadowed regions and areas with faded markings, and improves precision on multi-lane roadways with varying lane widths. The approach improves the accuracy of real-time lane recognition for autonomous vehicles on multi-lane highways with different degrees of illumination. This research also contributes toward improving safety and efficiency in autonomous driving by providing more effective methods of ensuring safe driving.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2363 Editorial: AI-Enhanced Computing and Digital Transformation 2025-09-03T11:39:56+08:00 Hairulnizam Mahdin hairuln@uthm.edu.my <p>Artificial intelligence (AI), machine learning (ML), and advanced computing techniques are now central to the digital transformation journey, reshaping industries, academia, and society. This special issue of the Journal of Informatics and Web Engineering on AI-Enhanced Computing and Digital Transformation brings together contributions that reflect both technical innovation and societal applications. The featured articles span optimization methods, software quality improvements, data augmentation techniques, intelligent mobile applications, blockchain-based governance systems, and disaster management platforms. Together, they illustrate how computational advances not only strengthen efficiency and accuracy but also enable resilience in the face of global challenges. From a technical standpoint, metaheuristic algorithms, hybrid learning models, and refactoring strategies are pushing the boundaries of optimization and software reliability. At the data level, challenges such as imbalance, redundancy, and scalability are being addressed through novel augmentation and hybridization techniques, ensuring more robust predictions.
Beyond computation, AI-powered applications are transforming healthcare, education, agriculture, and financial governance, while blockchain-based systems enhance transparency and accountability. In addition, disaster management frameworks integrating big data, hydro-informatics, and real-time monitoring are redefining preparedness in flood-prone regions. Collectively, these works showcase the breadth of AI-enhanced computing as a catalyst for systemic digital transformation, shaping a smarter, more sustainable, and interconnected future.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1851 Real-time Read and Analysis of Air Pollution Produced from Private Electrical Generators in Mosul City using LoRaWAN 2025-05-28T15:24:07+08:00 Mahmood Alfathe mahmood.alfathe@uoninevah.edu.iq Awfa Aladwani awfa.aladwani@uomosul.edu.iq Ali Abduljabbar ali.abduljabbar@uoninevah.edu.iq <p>This study presents a novel, site-specific deployment of a Long Range Wide Area Network (LoRaWAN)-driven air pollution monitoring network for the Iraqi city of Mosul, which is beset by widespread power outages and extensive utilization of decentralized diesel generators. While these generators mitigate electricity shortages, they are enormous contributors to urban air pollution, emitting high levels of CO<sub>2</sub> and particulates. As opposed to previous studies, which concentrate on affluent urban areas, this research addresses a deprived locale using an extensible, low-power, low-cost LoRaWAN network, high-precision CO<sub>2</sub> sensors (Sensirion SCD30 and MH-Z19), and The Things Network (TTN) for real-time data aggregation. With geo-referenced generator mapping integrated into the system, systematically distributed sensor nodes, and spatial interpolation via a Geographic Information System (GIS), the system captures seasonally varying emissions and identifies pollution hotspots. Temperature and humidity data are incorporated to calibrate the sensors, improving the emission models. Furthermore, the study conducts an operational evaluation of the LoRaWAN network across Mosul's dense urban environment, investigating link stability, RSSI, latency, and packet loss to verify network performance in real conditions. The results highlight a strong seasonal correlation between generator operation and CO<sub>2</sub> flux, reinforcing the climate-energy-emission nexus. Practically, LoRaWAN's infrastructure-independent, long-range design is particularly well suited to Mosul's connectivity-deficient terrain, serving as a robust platform for environmental monitoring and planning regulation.
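The paper does not name its GIS interpolation method, but a common choice for such sensor networks is inverse-distance weighting; a generic NumPy sketch with hypothetical node positions and CO2 readings, offered only as an illustrative stand-in:

```python
import numpy as np

def idw(stations, values, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from sensor readings."""
    d = np.linalg.norm(stations - query, axis=1)
    if np.any(d < 1e-9):                 # query point sits on a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                   # nearer nodes weigh more
    return float(w @ values / w.sum())

# Hypothetical node coordinates (km grid) and CO2 readings (ppm).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5], [2.0, 2.0]])
co2 = np.array([450.0, 510.0, 480.0, 620.0])
print(round(idw(nodes, co2, np.array([0.8, 0.9])), 1))
```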
This research contributes significantly to the field by proposing an open, reproducible IoT-based framework for urban air quality monitoring in energy-constrained regions and outlines future directions encompassing multi-pollutant sensing, mobile sensor nodes, and blockchain-secured data communication for enhanced trust and system reliability.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/1903 A Scoping Review of Artificial Intelligence Research Trends in Mobile Applications 2025-05-24T18:26:27+08:00 Asmau Usman asmee08@gmail.com Ibrahim Anka Salihu ibrahim.salihu@nileuniversity.edu.ng Abdulrahman Aminu Ghali aminu@utar.edu.my Bilkisu Larai Muhammad Bello bilkisu.muhammad-bel@nileuniversity.edu.ng Aminu Aminu Mu'azu aminu.aminu@umyu.edu.ng Amina Sambo Magaji amina.magaji@gmail.com <p>Over the past decade, mobile devices have become an integral part of our daily routines, offering a broad spectrum of applications that enhance everyday tasks. As more people adopt smartphones, developers are increasingly focusing on improving app quality, particularly by incorporating artificial intelligence (AI) features. This growing trend has led to a surge of interest from both researchers and industry experts, who aim to explore AI integration in sectors such as healthcare, education, agriculture, and e-commerce. This study conducts a thorough review of AI applications on mobile platforms by analysing 98 scholarly articles published between 2014 and 2024 from databases including Scopus, IEEE Xplore, and Science Direct. After screening for relevance, 50 articles were selected for in-depth evaluation. The findings show a significant emphasis on healthcare, which accounted for 38% of the reviewed studies, followed by agriculture at 30% and education at 18%. This advancement is in line with societal demands, because AI-powered mobile apps improve vital industries like healthcare, agriculture, education, and corporate operations by offering predictive analytics. Notably, machine learning (ML) techniques were prominent, used in 66% of the articles, while deep learning (DL) appeared in 16%. The review also highlights convolutional neural networks (CNN) as a key algorithm, present in 56% of the studies. These insights demonstrate the profound influence of AI on mobile app development and point to emerging trends and future research opportunities in this field. The need for cross-platform AI development has increased dramatically as AI continues to transform mobile technology. This strategy is essential to the scalability, accessibility, and effectiveness of the larger mobile app ecosystem, since AI-enabled apps are designed to function flawlessly across a variety of mobile operating systems (iOS, Android, etc.).</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2144 Impact of Data Quality Types on Computational Time in Data Source Selection Using Ant Colony Optimization 2025-07-11T16:52:51+08:00 Nor Amalina Mohd Sabri noramalina@uthm.edu.my Abd Samad Hasan Basari abdsamad@uthm.edu.my Nurul Akmar Emran nurulakmar@utem.edu.my <p>Data quality varies dramatically from source to source, even within the same domain. Given these challenges, data source selection has emerged as a crucial step in information integration.
It demands efficient and scalable approaches that can handle massive data volumes while ensuring the quality of results. Adapting the Ant Colony Optimization (ACO) algorithm to solve data source selection problems may lead to inconsistent computational time when the data sources provided vary in quality, making the selection of the required data sources time-consuming. How much computational time is needed to solve the data source selection problem, however, depends on the type of data quality. Hence, this article examines the impact of data quality type on computational time in solving data source selection problems. The methodology follows five steps: collecting the data set, importing the data sources into the data source selection model, implementing the ACO algorithm, obtaining the computational time, and comparing the results. The experiment shows that the low-quality data set incurs a higher computational time than the high-quality data set, which achieves the minimum computational time and is 3.38% faster. The results show that the quality type of data impacts the computational time of the ACO algorithm and clearly demonstrate the contribution of a high-quality data set to minimizing computational time in the selection process. This validation of data quality type against computational time clarifies the importance of selecting good-quality data to save computational time.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2107 Lightweight String Similarity Approaches for Duplicate Detection in Academic Titles 2025-07-19T11:00:07+08:00 Fahrudin Mukti Wibowo fahrudinw@telkomuniversity.ac.id Muhammad Zidny Nafan muhammadn@telkomuniversity.ac.id Muhamad Azrino Gustalika azrino@telkomunivesity.ac.id Harinda Fernando harinda.f@sliit.lk Muhammad Hussain mh.pirzada@usindh.edu.pk Nur Afiqah Sahadun nurafiqah@uthm.edu.my <p>This study addresses the critical challenge of detecting duplicate final year project (FYP) titles in academic institutions, where minor variations like reordering, synonyms, and paraphrasing often obscure plagiarism. We systematically evaluate four string similarity algorithms - Jaro-Winkler, Levenshtein Edit Distance, TF-IDF with Cosine Similarity, and Jaccard Similarity - using a synthetic dataset of 250 title pairs representing common duplication patterns. Our experiments reveal that character-based methods (Jaro-Winkler and Edit Distance) achieve perfect detection (F1-score=1.0) for literal matches, including typographical variations and phrase reordering, while TF-IDF demonstrates strong semantic capability (F1-score=0.95), albeit with some false positives. Jaccard Similarity performs poorly (Recall=0.40) due to its inability to handle paraphrased content. The analysis of score distributions shows a clear separation between duplicates and non-duplicates for character-based approaches, compared to significant overlap for set-based methods. Based on these findings, we propose a practical two-stage screening framework: initial high-confidence filtering using Jaro-Winkler (threshold&gt;0.9) followed by semantic validation with TF-IDF (threshold&gt;0.8). This hybrid approach offers institutions an effective balance between accuracy and computational efficiency for title screening.
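A minimal sketch of the proposed two-stage screen, assuming the jellyfish and scikit-learn packages; the thresholds follow the paper, while the wrapper function itself is illustrative:

```python
import jellyfish
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def screen(title_a, title_b, jw_threshold=0.9, tfidf_threshold=0.8):
    """Two-stage duplicate screen: cheap literal match first, semantic second."""
    # Stage 1: high-confidence literal filtering via Jaro-Winkler.
    if jellyfish.jaro_winkler_similarity(title_a.lower(), title_b.lower()) > jw_threshold:
        return "duplicate (literal)"
    # Stage 2: semantic validation via TF-IDF cosine similarity.
    tfidf = TfidfVectorizer().fit_transform([title_a, title_b])
    if cosine_similarity(tfidf[0], tfidf[1])[0, 0] > tfidf_threshold:
        return "duplicate (semantic)"
    return "distinct"

print(screen("Loan Default Prediction Using Machine Learning",
             "Machine Learning for Loan Default Prediction"))
```

Running the cheap character-level check first means the costlier vectorization only fires on pairs that survive the literal filter, which is what keeps the framework lightweight.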
This study contributes by demonstrating how existing string similarity techniques can be orchestrated into a lightweight, two-stage screening framework tailored for academic title duplication, balancing accuracy with deployment feasibility in institutional settings. Future work should explore multilingual extensions and validation with real-world title datasets to further enhance the practical applicability of these findings.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2169 Improving Code Effectiveness Through Refactoring: A Case Study 2025-07-15T16:48:11+08:00 Abdullah Almogahed abdullahm@uthm.edu.my Manal Othman manal.oshari@taiz.edu.ye Mazni Omar mazni@uum.edu.my Samera Obaid Barraood sammorahobaid@gmail.com Abdul Rehman Gilal rehman_gilal33@yahoo.com <p>Software refactoring is a crucial practice in modern software development methodologies, such as Agile and DevOps, as it enables teams to iteratively improve and evolve their codebases while minimizing the risk of introducing bugs or regressions. It fosters a culture of continuous improvement and code hygiene, ultimately leading to more robust, maintainable, and scalable software systems. However, research examining the impact of refactoring on code effectiveness is scarce. This study, therefore, seeks to investigate the impact of refactoring methods on code effectiveness. The study was carried out in four distinct phases: refactoring method selection, case study selection, software metric selection for evaluating the effectiveness of the code, and refactoring method implementation. The five most prevalent refactoring methods (Extract Subclass, Extract Class, Introduce Parameter Object, Extract Method, and Move Method) were chosen and applied 86 times across five experiments in the jHotDraw case study. The results indicate that Extract Subclass, Extract Class, and Introduce Parameter Object have a significant positive impact on code effectiveness, while Extract Method and Move Method do not affect code effectiveness. Practitioners and software designers can utilize this knowledge to make informed assessments regarding refactoring methods and produce software systems that are more reliable and effective.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2201 Societies’ Funds Management System Using Blockchain 2025-08-20T05:56:56+08:00 Jun Xiang Lau jxlau10@gmail.com Nur Ziadah Harun nurziadah@uthm.edu.my Suziyanti Marjudi suziyanti@uthm.edu.my Abd Samad Hasan Basari abdsamad@uthm.edu.my Nur Amlya Abd Majid amlya@usim.edu.my Roziyani Setik roziyani@unisel.edu.my <p>Lack of transparency and lack of immutability in fund records are major problems today, especially for societies’ funds. Leveraging the Ethereum blockchain, the proposed system ensures complete transparency and security in recording and accessing financial transactions for any society. Advanced encryption and blockchain consensus protocols guarantee data privacy and resilience against fraud or tampering. The information entered in the system helps the treasurer to manage funds effectively and to process financial transactions accurately and transparently.
https://journals.mmupress.com/index.php/jiwe/article/view/2169 Improving Code Effectiveness Through Refactoring: A Case Study 2025-07-15T16:48:11+08:00 Abdullah Almogahed abdullahm@uthm.edu.my Manal Othman manal.oshari@taiz.edu.ye Mazni Omar mazni@uum.edu.my Samera Obaid Barraood sammorahobaid@gmail.com Abdul Rehman Gilal rehman_gilal33@yahoo.com <p>Software refactoring is a crucial practice in modern software development methodologies, such as Agile and DevOps, as it enables teams to iteratively improve and evolve their codebases while minimizing the risk of introducing bugs or regressions. It fosters a culture of continuous improvement and code hygiene, ultimately leading to more robust, maintainable, and scalable software systems. However, research examining the impact of refactoring on code effectiveness is scarce. This study therefore investigates the impact of refactoring methods on code effectiveness. The study was carried out in four distinct phases: selection of refactoring methods, selection of the case study, selection of software metrics for evaluating code effectiveness, and implementation of the refactoring methods. The five most prevalent refactoring methods (Extract Subclass, Extract Class, Introduce Parameter Object, Extract Method, and Move Method) were chosen and implemented 86 times across five experiments in the jHotDraw case study. The results indicate that Extract Subclass, Extract Class, and Introduce Parameter Object have a significant positive impact on code effectiveness, while Extract Method and Move Method do not affect it. Practitioners and software designers can use this knowledge to make informed assessments regarding refactoring methods and produce software systems that are more reliable and effective.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2201 Societies’ Funds Management System Using Blockchain 2025-08-20T05:56:56+08:00 Jun Xiang Lau jxlau10@gmail.com Nur Ziadah Harun nurziadah@uthm.edu.my Suziyanti Marjudi suziyanti@uthm.edu.my Abd Samad Hasan Basari abdsamad@uthm.edu.my Nur Amlya Abd Majid amlya@usim.edu.my Roziyani Setik roziyani@unisel.edu.my <p>Lack of transparency in funds and lack of immutability in fund records are major problems today, especially for societies’ funds. Leveraging the Ethereum blockchain, the system ensures complete transparency and security in recording and accessing financial transactions for any society. Advanced encryption and blockchain consensus protocols guarantee data privacy and resilience against fraud or tampering. The information entered into the system helps the treasurer manage funds effectively and pass appropriate financial transactions accurately and transparently. This project addresses the challenges of transparency and immutability in societies’ fund records, providing assurance and strengthening the community’s financial integrity. Built following an iterative development model, the system’s major modules are user login, user management, payment status tracking, and funds history. It offers a live, transparent platform through which society members can access financial data, building trust and encouraging involvement. The system streamlines the administrative tasks of bookkeeping, enabling the treasurer to keep funds secure while maintaining traceable transactions. Overall, this project solves the stated problems by providing a safe path toward growth and financial transparency.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2003 Implementation of Conjugate Gradient Method for Estimating Inflation Rate in Malaysia 2025-06-21T23:01:13+08:00 Shin Yi Wong wong.yi@graduate.utm.my Nur Syarafina Mohamed nursyarafina@utm.my Norhaslinda Zullpakkal lindazullpakkal@uitm.edu.my <p>Optimization methods are valuable for making decisions and identifying the most suitable alternative based on a given objective function. One such method is the Conjugate Gradient (CG) method, which is commonly used to solve large-scale unconstrained optimization problems with little storage space. Recently, various optimization methods have been studied and applied to economic estimation, but only a few studies have predicted the inflation rate using modified CG methods. Random initial points are tested on the New Three-Term (NTT) CG methods, namely the modified Rivaie-Mustafa-Ismail-Leong (RMIL+) and Umar Mustapha Waziri (UMW) CG methods, with ten optimization test functions suggested by Andrei, using MATLAB. The number of iterations (NOI) and CPU time obtained are compared using the performance ratio of Dolan and Moré, and the NTT CG method stands out with the best performance. A data set covering 2010 to 2022 from the Department of Statistics Malaysia (DOSM) is transformed into optimization problems to be solved. Estimates from the Least Square Conjugate Gradient (LSCG) approach are based on the NTT CG method and Least Square (LS), for both linear and quadratic models. Relative errors for LSCG, LS, and the Trendline Method are calculated. Linear LS is shown to be the most suitable estimator of the inflation rate in Malaysia, as it yields the smallest relative error, consistent with the linear LSCG and Trendline Methods, which produce similar relative errors.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering
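<p>For readers unfamiliar with the approach, the following NumPy sketch shows how a least-squares fitting problem of the kind described above can be solved with a nonlinear conjugate gradient loop. It uses the textbook Hestenes-Stiefel beta and a simple backtracking line search; the NTT, RMIL+, and UMW formulas and the DOSM inflation series are not reproduced here, so the data below is made up for illustration.</p>
<pre>
# Illustrative nonlinear CG fit of a linear model y ~ w0 + w1*x (made-up data).
import numpy as np

x = np.arange(13, dtype=float)                  # stand-in for years 2010-2022
y = np.array([1.7, 3.2, 1.6, 2.1, 3.1, 2.1, 2.1, 3.7, 1.0, 0.7, -1.1, 2.5, 3.4])

def f(w):       # least-squares objective
    r = w[0] + w[1] * x - y
    return 0.5 * np.dot(r, r)

def grad(w):    # gradient of f with respect to (w0, w1)
    r = w[0] + w[1] * x - y
    return np.array([r.sum(), np.dot(r, x)])

def cg_fit(w, tol=1e-8, max_iter=500):
    g = grad(w)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if np.dot(g, d) >= 0:                   # safeguard: restart if d is
            d = -g                              # not a descent direction
        t = 1.0                                 # backtracking (Armijo) search
        while f(w + t * d) > f(w) + 1e-4 * t * np.dot(g, d):
            t *= 0.5
        w_new = w + t * d
        g_new = grad(w_new)
        yk = g_new - g
        beta = np.dot(g_new, yk) / np.dot(d, yk)   # Hestenes-Stiefel beta
        d = -g_new + beta * d
        w, g = w_new, g_new
    return w

print("fitted linear trend (intercept, slope):", cg_fit(np.zeros(2)))
</pre>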
https://journals.mmupress.com/index.php/jiwe/article/view/2002 Conjugate Gradient Methods in Fitting Precipitation of Rainfall Data in Malaysia 2025-07-07T20:43:28+08:00 Hua Ru Tang tanghuaru@graduate.utm.my Nur Syarafina Mohamed nursyarafina@utm.my Nurul Hajar Mohd Yusoff nurulhajar@utem.edu.my Norhaslinda Zullpakkal lindazullpakkal@uitm.edu.my <p>The conjugate gradient method (CGM) is one of the most efficient numerical methods for solving unconstrained optimization problems and is known as an iterative method with a simple formulation. Improving the formulation of the classical CGM has long interested researchers, with the resulting variants categorized into three-term (TTCGM), spectral (SCGM), hybrid, and scaled CGMs. Although many variations of the CGM are available, choosing the most efficient and effective one for a particular problem can be time-consuming. In this study, the spectral Hestenes-Stiefel (sHS) CGM, with the best number of iterations (NOI) and central processing unit (CPU) time, is selected as the efficient method to apply to a real-life regression analysis problem. A data set of rainfall precipitation in Malaysia from 2009 to 2019 is collected for data fitting. The data set is transformed into a test function, also defined as the objective function. Approximate functions are generated using the CG, Least Square, and Trendline methods for relative error comparison, and estimates for the year 2020 are predicted from these approximate functions. The relative errors of the linear and quadratic models for each method are calculated from the 2020 estimates and the corresponding actual data. The numerical results show that the sHS CGM is a suitable and good alternative for solving the Least Square models.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering https://journals.mmupress.com/index.php/jiwe/article/view/2167 Enhancing Imbalanced Data Augmentation: A Comparative Study of GANified-SMOTE and Latent Factor Integration 2025-08-17T21:12:47+08:00 Rusma Anieza Ruslan rusmaanieza17@gmail.com Nureize Arbaiy nureize@uthm.edu.my Pei-Chun Lin peiclin@fcu.edu.tw <p>Imbalanced datasets are a serious problem in machine learning (ML). Minority class samples are usually sparse yet hold significant meaning, and an unbalanced class distribution can bias the model toward the majority class. This results in deceptively high accuracy while the model fails to detect minority cases, a bias that is most perilous in critical applications, where missing minority cases can be highly damaging. The Synthetic Minority Oversampling Technique (SMOTE) is one of the most widely used remedies: it creates a balanced class distribution by interpolating between existing minority samples. However, it can create samples that are too close to one another, which may lead to overfitting and limit the generalization of the model. Recent advancements in generative modeling, especially Generative Adversarial Networks (GANs), offer a more effective way to handle class imbalance. GANs use a generator-discriminator structure to produce synthetic data highly similar to real data. A hybrid technique called GANified-SMOTE combines the interpolation of SMOTE with the generative power of GANs to produce more diverse and realistic minority class samples, improving model robustness and mitigating the limitations of traditional oversampling. This paper presents the incorporation of latent factors into the architecture of the GANified-SMOTE framework. Latent variables reveal hidden structures and relations in the data, yielding synthetic samples closer to the real distribution and improving classification accuracy. By incorporating latent factors, this research aims to build a better oversampling method for imbalanced classification datasets.</p> 2025-10-14T00:00:00+08:00 Copyright (c) 2025 Journal of Informatics and Web Engineering
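<p>As background for the hybrid approach above, the following minimal NumPy sketch shows the plain SMOTE interpolation step that GANified-SMOTE builds on: each synthetic point lies on the segment between a minority sample and one of its k nearest minority neighbours. The GAN refinement stage and the paper's latent-factor extension are not shown, and the function name and parameters are illustrative assumptions.</p>
<pre>
# Plain SMOTE interpolation over minority samples (illustrative sketch only).
import numpy as np

def smote(X_min, n_synth, k=5, seed=0):
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_synth):
        i = rng.integers(len(X_min))
        # k nearest minority neighbours of X_min[i], excluding itself
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]
        j = rng.choice(neighbours)
        lam = rng.random()                     # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

X_minority = np.random.default_rng(1).normal(size=(20, 2))
print(smote(X_minority, n_synth=5).shape)      # (5, 2)
</pre>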