Cyber Security Threats of Using Generative Artificial Intelligence in Source Code Management
Abstract
Generative Artificial Intelligence (Generative AI) models are now widely used in academic writing and software development to improve productivity and efficiency. At the same time, concerns about the impact of Artificial Intelligence (AI) tools on academic integrity and cybersecurity continue to grow. Generative AI is increasingly used for code generation, editing, and review, raising both ethical and security challenges. A major concern is the inadvertent introduction of vulnerabilities into codebases: because these models are trained on large public datasets, they can reproduce known bugs or malicious patterns that compromise software integrity. The tools may also amplify security threats already common in software development. Models trained on public data may generate code that resembles copyrighted content, creating legal grey areas around ownership. Delegating coding to AI also widens the attack surface for adversarial attacks and model poisoning. Addressing these challenges therefore calls for a balanced approach to integrating AI into software development, combining secure coding practices, thorough testing, continuous monitoring, and collaboration between developers, security professionals, and AI researchers. Strong governance, regular audits, transparency in AI development, and the embedding of ethical standards in AI usage will help ensure that Generative AI is used safely and effectively. Generative AI should be seen as a tool to enhance, not replace, human expertise in software development. While automation can streamline workflows, developers must remain vigilant in detecting and mitigating AI-induced vulnerabilities. A proactive approach that combines human oversight with AI-driven efficiency will be key to securing the future of software development.
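To make the first threat concrete, the sketch below shows the kind of known-vulnerable pattern a model trained on public code can reproduce, alongside its fix. This is a minimal illustration, not an example taken from the article: the function names and the `users` table are hypothetical, and SQL injection stands in for the broader class of AI-reproduced vulnerabilities.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern widely present in public training data:
    # attacker-controlled input is interpolated directly into SQL,
    # so username = "x' OR '1'='1" dumps every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately
    # from the statement text, closing the injection vector.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```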
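On the mitigation side, one way to combine the thorough testing and continuous monitoring the abstract recommends is to gate every change, whether human- or AI-authored, behind a static analysis step in continuous integration. The sketch below assumes Bandit as the Python security linter; the `security_gate` function and its wiring are illustrative assumptions, not an implementation prescribed by the article.

```python
import subprocess
import sys

def security_gate(paths: list[str]) -> int:
    """Return a nonzero exit code if the analyzer flags any finding.

    Bandit is used here as one example of a Python security linter;
    any SAST tool invoked from CI plays the same role.
    """
    # -q suppresses informational output; -r scans directories
    # recursively. Bandit exits nonzero when issues are reported,
    # which fails the CI job and blocks the merge.
    result = subprocess.run(["bandit", "-q", "-r", *paths])
    return result.returncode

if __name__ == "__main__":
    # Usage: python security_gate.py src/ generated/
    sys.exit(security_gate(sys.argv[1:] or ["."]))
```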
Article Details

All articles published in JIWE are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License. Readers are allowed to
- Share — copy and redistribute the material in any medium or format under the following conditions:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use;
- NonCommercial — You may not use the material for commercial purposes;
- NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.