A New BW Index for Quantifying Scholars' Research Influence
Abstract
The growing importance of measuring and evaluating academic performance in academic hiring, promotions, funding allocation, and resource distribution has fueled the demand for better metrics. Traditional ranking indicators, such as publication count and citation-based indices, often fail to capture the interdisciplinary influence and qualitative dimensions of research impact. These shortcomings highlight the need for more comprehensive evaluation metrics. The current study introduces a novel BW Index that integrates both quantitative and qualitative aspects of researcher contributions, aiming to provide a more balanced and comprehensive evaluation of scholarly impact. To evaluate the effectiveness of the proposed index, a comparative analysis was conducted on the profiles of 200 researchers at Monash University, Australia, calculating both the h-index and the proposed BW Index for each. The results indicate that researchers with identical h-index values exhibit significant variation in BW Index values, ranging from 10 to 55, demonstrating the index's ability to distinguish research impact beyond citation counts. Furthermore, for researchers with an h-index of 20, the BW Index ranges from 20 to 82, reflecting greater differentiation than the traditional h-index provides. These findings position the BW Index as a more nuanced and equitable measure of academic influence, offering a refined approach to researcher evaluation that addresses the limitations of traditional metrics.
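Since the abstract does not spell out the BW Index formula, the sketch below illustrates only the baseline it is compared against: the standard Hirsch h-index (Hirsch, 2005), cited in the references. It is a minimal Python example with invented citation lists, showing how two researchers with very different citation totals can share the same h-index, which is the kind of tie the BW Index is intended to break.

```python
def h_index(citations: list[int]) -> int:
    """Standard Hirsch h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation records (invented for illustration only):
# very different citation totals, yet the same h-index of 10.
profile_a = [120, 95, 80, 60, 45, 30, 25, 20, 15, 12, 3, 1]   # ~506 citations
profile_b = [11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 2]       # ~105 citations

print(h_index(profile_a))  # 10
print(h_index(profile_b))  # 10
```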
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All articles published in JIWE are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License. Readers are allowed to:
- Share — copy and redistribute the material in any medium or format, under the following conditions:
  - Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use;
  - NonCommercial — You may not use the material for commercial purposes;
  - NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.
References
J. Stallings et al., “Determining scientific impact using a collaboration index,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 24, pp. 9680–9685, 2013. DOI: 10.1073/pnas.1220184110.
B. Ahmed, L. Wang, W. Hussain, G. Mustafa, and M. T. Afzal, “Investigating scholarly indices and their contribution to recognition patterns among awarded and non-awarded researchers,” International Journal of Data Science and Analytics, pp. 1–18, 2025.
V. Vavrycuk, “Fair ranking of researchers and research teams,” PLoS One, vol. 13, no. 4, p. e0195509, 2018. DOI: 10.1371/journal.pone.0195509.
J. Ding, C. Liu, and G. A. Kandonga, “Exploring the limitations of the h-index and h-type indexes in measuring the research performance of authors,” Scientometrics, vol. 122, pp. 1303–1322, 2020. DOI: 10.1007/s11192-020-03364-1.
J. E. Hirsch, “An index to quantify an individual’s scientific research output,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 46, pp. 16569–16572, 2005. DOI: 10.1073/pnas.0507655102.
Y. Tzitzikas and G. Dovas, “How co-authorship affects the H-index?,” Scientometrics, vol. 129, no. 7, pp. 4437–4469, 2024. DOI: 10.1007/s11192-024-05088-y.
A. Norouzi, P. Parsaei-Mohammadi, F. Zare-Farashbandi, E. Zare-Farashbandi, and E. Geraei, “H-index and research evaluation: A suggested set of components for developing a comprehensive author-level index,” Journal of Information Science, p. 01655515241293761, 2024. DOI: 10.1177/01655515241293761.
G. J. Rainone, J. G. Nugent, M. Yeradi, S. Ramanathan, and B. C. Lega, “Bibliometric analysis and applications of a modified H-index examining the research productivity of neurosurgery faculty at high-ranking academic institutions,” World Neurosurgery, vol. 181, pp. e925–e937, 2024. DOI: 10.1016/j.wneu.2023.11.015.
B. Ahmed et al., “Machine learning approach for effective ranking of researcher assessment parameters,” 2023.
B. Ahmed, W. Li, G. Mustafa, M. T. Afzal, S. Z. Alharthi, and A. Akhunzada, “Evaluating the effectiveness of author-count based metrics in measuring scientific contributions,” IEEE Access, 2023.
G. Mustafa, A. Rauf, B. Ahmed, M. T. Afzal, A. Akhunzada, and S. Z. Alharthi, “Comprehensive evaluation of publication and citation metrics for quantifying scholarly influence,” IEEE Access, 2023.
G. Mustafa et al., “Exploring the significance of publication-age-based parameters for evaluating researcher impact,” IEEE Access, 2023.
A. Bihari, S. Tripathi, and A. Deepak, “A review on h-index and its alternative indices,” Journal of Information Science, vol. 49, no. 3, pp. 624–665, 2023. DOI: 10.1177/01655515211014478.
J. Mingers, K. Watson, and M. P. Scaparra, “Estimating business and management journal quality from the 2008 Research Assessment Exercise in the UK,” Information Processing & Management, vol. 48, no. 6, pp. 1078–1093, 2012. DOI: 10.1016/j.ipm.2012.01.008.
R. Tol, “The h-index and its alternatives: An application to the 100 most prolific economists,” Scientometrics, vol. 80, no. 2, pp. 317–324, 2009. DOI: 10.1007/s11192-008-2079-7.
J. Panaretos and C. Malesios, “Assessing scientific research performance and impact with single indices,” Scientometrics, vol. 81, pp. 635–670, 2009. DOI: 10.1007/s11192-008-2174-9.
B. Ahmed and L. Wang, “Discretization based framework to improve the recommendation quality,” International Arab Journal of Information Technology, vol. 18, no. 3, pp. 365–371, 2021.
A. Cortegiani, G. Catalisano, and A. Manca, “Predatory Journals and Conferences,” Integrative Science in Research Fraud, Misconduct, and Fake News in Academic Medicine and Social Environment, pp. 501–508, 2022.
B. Ahmed, L. Wang, M. Amjad, W. Hussain, S. Badar-ud-Duja, and M. Abdul, “Deep learning innovations in recommender systems,” International Journal of Computer Applications, vol. 178, no. 12, pp. 57–59, 2019. DOI: 10.5120/ijca2019918882.
R. Lukman, D. Krajnc, and P. Glavic, “University ranking using research, educational and environmental indicators,” Journal of Cleaner Production, vol. 18, no. 7, pp. 619–628, 2010. DOI: 10.1016/j.jclepro.2009.09.015.
A. Hussain, Q. Yan, M. A. Q. Bilal, K. Wu, Z. Zhao, and B. Ahmed, “Region-wise ranking for one-day international (ODI) cricket teams,” International Journal of Advanced Computer Science and Applications, vol. 10, no. 10, 2019.
C. V. Fry, J. Lynham, and S. Tran, “Ranking researchers: Evidence from Indonesia,” Research Policy, vol. 52, no. 5, p. 104753, 2023. DOI: 10.1002/leap.1561.
J. Knight, “Academic impact rankings of neurosurgical units in the UK and Ireland, as assessed with the h-index,” British Journal of Neurosurgery, vol. 29, no. 5, pp. 637–643, 2015. DOI: 10.1016/j.wneu.2021.06.115.
A. A. Bidgoli, S. Rahnamayan, S. Mahdavi, and K. Deb, “A novel pareto-vikor index for ranking scientists’ publication impacts: a case study on evolutionary computation researchers,” in 2019 IEEE Congress on Evolutionary Computation (CEC), 2019, pp. 2458–2465. DOI: 10.1109/cec.2019.8790104.
A. Agarwal et al., “Bibliometrics: tracking research impact by selecting the appropriate metrics,” Asian Journal of Andrology, vol. 18, no. 2, p. 296, 2016. DOI: 10.4103/1008-682x.171582.
Q. L. Burrell, “On the h-index, the size of the Hirsch core and Jin’s A-index,” Journal of Informetrics, vol. 1, no. 2, pp. 170–177, 2007. DOI: 10.1016/j.joi.2007.01.003.
D. K. Sanyal, S. Dey, and P. P. Das, “gm-index: A new mentorship index for researchers,” Scientometrics, vol. 123, no. 1, pp. 71–102, 2020. DOI: 10.1007/s11192-020-03384-x.
C. H. Sekercioglu, “Quantifying coauthor contributions,” Science, vol. 322, no. 5900, p. 371, 2008. DOI: 10.1126/science.322.5900.371a.
L. Bornmann, R. Mutz, S. E. Hug, and H.-D. Daniel, “A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants,” Journal of Informetrics, vol. 5, no. 3, pp. 346–359, 2011. DOI: 10.1016/j.joi.2011.01.006.
H. H. Lathabai and T. Prabhakaran, “Contextual Psi index and its estimate for contextual productivity assessment,” Scientometrics, pp. 1–12, 2023. DOI: 10.1177/01655515241293789.
H. H. Bi, “Four problems of the h-index for assessing the research productivity and impact of individual authors,” Scientometrics, vol. 128, no. 5, pp. 2677–2691, 2023. DOI: 10.1007/s11192-022-04323-8.
P. Khurana and K. Sharma, “Impact of h-index on author’s rankings: an improvement to the h-index for lower-ranked authors,” Scientometrics, vol. 127, no. 8, pp. 4483–4498, 2022. DOI: 10.1007/s11192-022-04464-w.
F. L. da Silva and L. C. Brandao, “Stability discussions on some h-type indexes,” Journal of Scientometric Research, vol. 10, no. 1, 2021. DOI: 10.5530/jscires.10.1.2.
S. Ayaz and N. Masood, “Comparison of researchers’ impact indices,” PLoS One, vol. 15, no. 5, p. e0233765, 2020. DOI: 10.1371/journal.pone.0233765.
K. Sharma and Z. Uddin, “Measuring the continuous research impact of a researcher: The Kz index,” arXiv preprint arXiv:2306.15677, 2023. DOI: 10.48550/arXiv.2306.15677.
J. Testa, “The Thomson Reuters journal selection process,” Transnational Corporations Review, vol. 1, no. 4, pp. 59–66, 2009. DOI: 10.1080/19186444.2009.11658213.
A. Krampl, “Journal citation reports,” Journal of the Medical Library Association, vol. 107, no. 2, p. 280, 2019.
I. Vlase and T. Lahdesmaki, “A bibliometric analysis of cultural heritage research in the humanities: The Web of Science as a tool of knowledge management,” Humanities and Social Sciences Communications, vol. 10, no. 1, pp. 1–14, 2023. DOI: 10.1057/s41599-023-01582-5.
L. Bornmann and R. Williams, “Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data,” Journal of Informetrics, vol. 11, no. 3, pp. 788–799, 2017. DOI: 10.1016/j.joi.2017.06.001.
W. Liu, G. Hu, and M. Gu, “The probability of publishing in first-quartile journals,” Scientometrics, vol. 106, pp. 1273–1276, 2016. DOI: 10.1007/s11192-015-1821-1.
I. Shehatta, A. M. Al-Rubaish, and I. U. Qureshi, “Coronavirus research performance across journal quartiles. Advantages of Q1 publications,” Global Knowledge, Memory and Communication, no. ahead-of-print, 2022. DOI: 10.7759/cureus.7357.
P. Kungas, S. Karus, S. Vakulenko, M. Dumas, C. Parra, and F. Casati, “Reverse-engineering conference rankings: what does it take to make a reputable conference?,” Scientometrics, vol. 96, pp. 651–665, 2013.
W. Martins, M. Goncalves, A. Laender, and N. Ziviani, “Assessing the quality of scientific conferences based on bibliographic citations,” Scientometrics, vol. 83, no. 1, pp. 133–155, 2010. DOI: 10.1007/s11192-009-0078-y.
A. A. Goodrum, K. W. McCain, S. Lawrence, and C. L. Giles, “Scholarly publishing in the Internet age: a citation analysis of computer science literature,” Information Processing & Management, vol. 37, no. 5, pp. 661–675, 2001. DOI: 10.1016/s0306-4573(00)00047-9.
X. Li, W. Rong, H. Shi, J. Tang, and Z. Xiong, “The impact of conference ranking systems in computer science: A comparative regression analysis,” Scientometrics, vol. 116, pp. 879–907, 2018. DOI: 10.1007/s11192-018-2763-1.
R. Lister and I. Box, “A citation analysis of the ACE2005-2007 proceedings, with reference to the June 2007 CORE conference and journal rankings,” in Conferences in Research and Practice in Information Technology Series, 2008.