Enhancing LLM Efficiency: A Literature Review of Emerging Prompt Optimization Strategies
Abstract
This study focuses on enhancing the performance of Large Language Models (LLMs) through innovative prompt engineering techniques aimed at optimizing outputs without the high computational costs of model fine-tuning or retraining. The primary objective is to investigate efficient alternatives, such as black-box prompt optimization and ontology-based prompt refinement, which improve LLM performance by refining prompts externally while leaving the model's internal parameters unchanged. The study explores various prompt optimization techniques, including instruction-based, role-based, question-answering, and contextual prompting, alongside advanced methods such as Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) prompting. Methodologically, the research involves a comprehensive literature review that benchmarks prompt optimization techniques against existing models on standard datasets such as BIG-Bench Hard and GSM8K. The study evaluates approaches including Automatic Prompt Engineer (APE), PromptAgent, and self-consistency prompting, among others. The results demonstrate that these techniques significantly enhance LLM performance, particularly in tasks requiring complex reasoning, multi-step problem-solving, and domain-specific knowledge integration. The findings suggest that prompt engineering is crucial for improving LLM efficiency without excessive resource demands. However, challenges remain in ensuring prompt scalability, transferability, and generalization across different models and tasks. The study highlights the need for further research on integrating ontologies and automated prompt generation to refine LLM precision and adaptability, particularly in low-resource settings. These advancements will be vital for maximizing the utility of LLMs in increasingly complex and diverse applications.
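To make one of the surveyed techniques concrete, the minimal Python sketch below illustrates self-consistency prompting: several chain-of-thought completions are sampled at non-zero temperature and the most frequent final answer is returned. This is an illustrative sketch rather than code from any of the reviewed papers; sample_completion is a hypothetical placeholder for whatever LLM API is available, and the answer extraction assumes GSM8K-style tasks whose final answer is a number.

import re
from collections import Counter
from typing import Optional

def sample_completion(prompt: str) -> str:
    """Hypothetical LLM call; replace with any provider's API, sampling at temperature > 0."""
    raise NotImplementedError("Wire this to a real LLM API.")

def extract_answer(completion: str) -> Optional[str]:
    """Take the last number in a completion as its final answer (GSM8K-style tasks)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else None

def self_consistency(question: str, n_samples: int = 10) -> Optional[str]:
    """Sample several independent reasoning paths and majority-vote their final answers."""
    prompt = f"Q: {question}\nA: Let's think step by step."
    answers = [extract_answer(sample_completion(prompt)) for _ in range(n_samples)]
    answers = [a for a in answers if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None

Relative to single-path chain-of-thought decoding, the only added cost is the extra samples, which is why the early-stopping and adaptive-consistency variants cited in the references concentrate on reducing the number of samples needed.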
Manuscript received: 3 Oct 2024 | Revised: 13 Dec 2024 | Accepted: 25 Dec 2024 | Published: 31 Mar 2025
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
J. He, M. Rungta, D. Koleczek, A. Sekhon, F. X. Wang, and S. Hasan, “Does Prompt Formatting Have Any Impact on LLM Performance?,” arXiv preprint arXiv:2411.10541, 2024.
DOI: https://doi.org/10.48550/arXiv.2411.10541
C. Tang, Z. Wang, and Y. Wu, “Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions,” arXiv preprint arXiv:2408.08780, 2024.
DOI: https://doi.org/10.48550/arXiv.2408.08780
D. Sulimov, “Prompt-Efficient Fine-Tuning for GPT-like Deep Models to Reduce Hallucination and to Improve Reproducibility in Scientific Text Generation Using Stochastic Optimisation Techniques,” arXiv preprint arXiv:2411.06445, 2024.
DOI: https://doi.org/10.48550/arXiv.2411.06445
T. Alhindi, T. Chakrabarty, E. Musi and S. Muresan, “Multitask Instruction-based Prompting for Fallacy Recognition,” Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 8172–8187, 2022.
DOI: https://doi.org/10.48550/arXiv.2301.09992
J. Wei et al., “Finetuned Language Models Are Zero-Shot Learners,” Tenth International Conference on Learning Representations, 2022.
DOI: https://doi.org/10.48550/arXiv.2109.01652
A. Kong et al., “Better Zero-Shot Reasoning with Role-Play Prompting,” Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4099–4113, 2024.
DOI: https://doi.org/10.48550/arXiv.2308.07702
R. Wang, F. Mi, Y. Chen, B. Xue, H. Wang, Q. Zhu, K. Wong and R. Xu, "Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models," arXiv, 2024.
DOI: https://doi.org/10.48550/arXiv.2403.02756
W. Zhou and T.H. Ngo, “Using Pretrained Large Language Model with Prompt Engineering to Answer Biomedical Questions,” Conference and Labs of the Evaluation Forum, 2024.
DOI: https://doi.org/10.48550/arXiv.2407.06779
X. Chen, Y. Zhang, J. Deng, J.-Y. Jiang and W. Wang, “Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data Augmentation,” Proceedings of the 2023 SIAM International Conference on Data Mining, pp. 909-917, 2023.
DOI: https://doi.org/10.48550/arXiv.2306.04101
W. Zhong, Y. Gao, N. Ding, Y. Qin, Z. Liu, M. Zhou, J. Wang, J. Yin and N. Duan, "ProQA: Structural Prompt-based Pre-training for Unified Question Answering," arXiv, 2022.
DOI: https://doi.org/10.48550/arXiv.2205.04040
S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang and X. Wu, “Unifying Large Language Models and Knowledge Graphs: A Roadmap,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, pp. 3580-3599, 2024.
DOI: https://doi.org/10.1109/TKDE.2024.3352100
K. Vasisht, B. Ganesan, V. Kumar and V. Bhatnagar, “Infusing Knowledge into Large Language Models with Contextual Prompts,” Proceedings of the 20th International Conference on Natural Language Processing, pp. 657–662, 2024.
DOI: https://doi.org/10.48550/arXiv.2403.01481
S. Swamy, N. Tabari, C. Chen and R. Gangadharaiah, “Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems,” Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, vol. 17, pp. 3102–3111, 2023.
DOI: https://doi.org/10.48550/arXiv.2301.13268
T. Kojima, S.S. Gu, M. Reid, Y. Matsuo and Y. Iwasawa, “Large Language Models are Zero-Shot Reasoners,” Conference on Neural Information Processing Systems, 2022.
DOI: https://doi.org/10.48550/arXiv.2205.11916
T.B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D.M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever and D. Amodei, "Language Models are Few-Shot Learners," arXiv, 2020.
DOI: https://doi.org/10.48550/arXiv.2005.14165
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H.W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A.M. Dai, T.S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov and N. Fiedel, "PaLM: Scaling Language Modeling with Pathways," arXiv, 2022.
DOI: https://doi.org/10.48550/arXiv.2204.02311
H.W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S.S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E.H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q.V. Le and J. Wei, "Scaling Instruction-Finetuned Language Models," arXiv, 2022.
DOI: https://doi.org/10.48550/arXiv.2210.11416
OpenAI et al., “GPT-4 Technical Report,” arXiv, vol. abs/2303.08774, 2023.
DOI: https://doi.org/10.48550/arXiv.2303.08774
S. Utpala, S. Hooker and P.-Y. Chen, “Locally Differentially Private Document Generation Using Zero Shot Prompting,” Findings of the Association for Computational Linguistics: EMNLP 2023, 2023.
DOI: https://doi.org/10.18653/v1/2023.findings-emnlp.566
S. Sivarajkumar, M. Kelley, A. Samolyk-Mazzanti, S. Visweswaran and Y. Wang, “An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study,” JMIR Medical Informatics, vol. 12, 2024.
DOI: https://doi.org/10.2196/55318
S. Sivarajkumar and Y. Wang, “HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural Language Processing,” AMIA Annual Symposium, vol. 2022, pp. 972-981, 2022.
DOI: https://doi.org/10.48550/arXiv.2203.05061
L. Kovriguina, R. Teucher, D. Radyush and D. Mouromtsev, “SPARQLGEN: One-Shot Prompt-based Approach for SPARQL Query Generation,” 2023.
S.-Y. Yoon, “Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model,” arXiv preprint arXiv:2305.18638, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.18638
A. Marasović, I. Beltagy, D. Downey and M.E. Peters, “Few-Shot Self-Rationalization with Natural Language Prompts,” Findings of the Association for Computational Linguistics: NAACL 2022, pp. 410-424, 2022.
DOI: https://doi.org/10.48550/arXiv.2111.08284
H. Ma, C. Zhang, Y. Bian, L. Liu, Z. Zhang, P. Zhao, S. Zhang, H. Fu, Q. Hu and B. Wu, "Fairness-guided Few-shot Prompting for Large Language Models," arXiv, 2023.
DOI: https://doi.org/10.48550/arXiv.2303.13217
X. Yu, Y. Fang, Z. Liu and X. Zhang, “HGPROMPT: Bridging Homogeneous and Heterogeneous Graphs for Few-shot Prompt Learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, 2024.
DOI: https://doi.org/10.1609/aaai.v38i15.29596
A.V.Y. Lee, C.L. Teo, and S.C. Tan, “Prompt Engineering for Knowledge Creation: Using Chain-of-Thought to Support Students’ Improvable Ideas,” AI, vol. 5, no. 3, pp. 1446–1461, 2024.
DOI: https://doi.org/10.3390/ai5030069
K. Hebenstreit, R. Praas, L.P. Kiesewetter and M. Samwald, “An automatically discovered chain-of-thought prompt generalizes to novel models and datasets,” arXiv preprint arXiv:2305.02897, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.02897
G. Feng, B. Zhang, Y. Gu, H. Ye, D. He and L. Wang, “Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective,” 37th Annual Conference on Neural Information Processing Systems, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.15408
J. Long, “Large Language Model Guided Tree-of-Thought,” arXiv preprint arXiv:2305.08291, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.08291
S. Yao, D. Yu, J. Zhao, I. Shafran, T.L. Griffiths, Y. Cao and K. Narasimhan, "Tree of Thoughts: Deliberate Problem Solving with Large Language Models," arXiv, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.10601
Z. Bi, D. Hajialigol, Z. Sun, J. Hao and X. Wang, “STOC-TOT: Stochastic Tree-of-Thought with Constrained Decoding for Complex Reasoning in Multi-Hop Question Answering,” arXiv preprint arXiv:2407.03687, 2024.
DOI: https://doi.org/10.48550/arXiv.2407.03687
X. Wang et al., “Self-Consistency Improves Chain of Thought Reasoning in Language Models,” arXiv preprint arXiv:2203.11171, 2023.
DOI: https://doi.org/10.48550/arXiv.2203.11171
G. Wan, Y. Wu, J. Chen and S. Li, “Dynamic Self-Consistency: Leveraging Reasoning Paths for Efficient LLM Sampling,” arXiv preprint arXiv:2408.17017, 2024.
DOI: https://doi.org/10.48550/arXiv.2408.17017
Y. Li, P. Yuan, S. Feng, B. Pan, X. Wang, B. Sun, H. Wang and K. Li, "Escape Sky-high Cost: Early-stopping Self-Consistency for Multi-step Reasoning," arXiv, 2024.
DOI: https://doi.org/10.48550/arXiv.2401.10480
P. Aggarwal, A. Madaan, Y. Yang and Mausam, “Let’s Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs,” Conference on Empirical Methods in Natural Language Processing, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.11860
J. Liu, A. Liu, X. Lu, S. Welleck, P. West, R.L. Bras, Y. Choi and H. Hajishirzi, "Generated Knowledge Prompting for Commonsense Reasoning," arXiv, 2021.
DOI: https://doi.org/10.48550/arXiv.2110.08387
R. Xu, H. Cui, Y. Yu, X. Kan, W. Shi, Y. Zhuang, M.D. Wang, W. Jin, J. Ho and C. Yang, "Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models," Findings of the Association for Computational Linguistics ACL 2024, pp. 15496-15523, 2024.
DOI: https://doi.org/10.18653/v1/2024.findings-acl.916
Y. Meng, J. Huang, Y. Zhang and J. Han, “Generating Training Data with Language Models: Towards Zero-Shot Language Understanding,” Conference on Neural Information Processing Systems, vol. 35, pp. 462-477, 2022.
DOI: https://doi.org/10.48550/arXiv.2202.04538
Y. Meng, M. Michalski, J. Huang, Y. Zhang, T. Abdelzaher and J. Han, “Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning,” Proceedings of the 40th International Conference on Machine Learning, 2023.
DOI: https://doi.org/10.48550/arXiv.2211.03044
K.M. Yoo, D. Park, J. Kang, S.-W. Lee and W. Park, “GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation,” Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, vol. EMNLP 2021, pp. 2225–2239, 2021.
DOI: https://doi.org/10.48550/arXiv.2104.08826
H. Zeng, B. Wei, J. Liu and W. Fu, “Synthesize, Prompt and Transfer: Zero-shot Conversational Question Generation with Pre-trained Language Model,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 8989–9010, 2023.
DOI: https://doi.org/10.18653/v1/2023.acl-long.500
B.M. Lake and M. Baroni, “Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks,” Proceedings of the 35th International Conference on Machine Learning, 2018.
DOI: https://doi.org/10.48550/arXiv.1711.00350
J. Ouyang, S. Liang, S. Chen, S. Li, Y. Zhou and Q. Liwen, "Design and Realization of Data Application Architecture Oriented to the Requirements of Distribution Network," 2020 IEEE Sustainable Power and Energy Conference (iSPEC), pp. 2354-2359, 2020.
DOI: https://doi.org/10.1109/iSPEC50848.2020.9351123
D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le and E. Chi, “Least-to-Most Prompting Enables Complex Reasoning in Large Language Models,” The Eleventh International Conference on Learning Representations, 2023.
DOI: https://doi.org/10.48550/arXiv.2205.10625
Y. Yao, Z. Li and H. Zhao, “Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Language Models,” arXiv preprint arXiv:2305.16582, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.16582
B. Lei, P.-H. Lin, C. Liao and C. Ding, “Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought,” arXiv preprint arXiv:2308.08614, 2023.
DOI: https://doi.org/10.48550/arXiv.2308.08614
L. Zheng, H. Fei, F. Li, B. Li, L. Liao, D. Ji and C. Teng, “Reverse Multi-Choice Dialogue Commonsense Inference with Graph-of-Thought,” Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024.
DOI: https://doi.org/10.1609/aaai.v38i17.29942
O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown and Y. Shoham, “In-Context Retrieval-Augmented Language Models,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 1316-1331, 2023.
DOI: https://doi.org/10.1162/tacl_a_00605
S. Lin, J. Hilton and O. Evans, “TruthfulQA: Measuring How Models Mimic Human Falsehoods,” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 3214-3252, 2022.
DOI: https://doi.org/10.48550/arXiv.2109.07958
Z. Guo, S. Cheng, Y. Wang, P. Li and Y. Liu, “Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pp. 10896–10912, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.17653
G. Izacard and E. Grave, “Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering,” Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pp. 874–880, 2021.
DOI: https://doi.org/10.48550/arXiv.2007.01282
P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. Yih, T. Rocktäschel, S. Riedel and D. Kiela, "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks," arXiv, 2020.
DOI: https://doi.org/10.48550/arXiv.2005.11401
R. Socher et al., “Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank,” Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, 2013.
URL: https://aclanthology.org/D13-1170/
A. Warstadt, A. Singh and S.R. Bowman, “Neural Network Acceptability Judgments,” Transactions of the Association for Computational Linguistics, vol. 7, pp. 625–641, 2019.
DOI: https://doi.org/10.48550/arXiv.1805.12471
H. Tan et al., “Prompt-based Code Completion via Multi-Retrieval Augmented Generation,” arXiv preprint arXiv:2405.07530, 2024.
DOI: https://doi.org/10.48550/arXiv.2405.07530
J. Cheng et al., “Black-Box Prompt Optimization: Aligning Large Language Models without Model Training,” Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 3201–3219, 2024.
DOI: https://doi.org/10.18653/v1/2024.acl-long.176
M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H.W. Chung, A. Chowdhery, Q. Le, E. Chi, D. Zhou and J. Wei, "Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them," Findings of the Association for Computational Linguistics: ACL 2023, 2023.
DOI: https://doi.org/10.18653/v1/2023.findings-acl.824
K. Cobbe et al., “Training Verifiers to Solve Math Word Problems,” arXiv preprint arXiv:2110.14168, 2021.
DOI: https://doi.org/10.48550/arXiv.2110.14168
Y. Zhou et al., “Large Language Models Are Human-Level Prompt Engineers,” The Eleventh International Conference on Learning Representations, 2023.
DOI: https://doi.org/10.48550/arXiv.2211.01910
C. Yang, X. Wang, Y. Lu, H. Liu, Q.V. Le, D. Zhou and X. Chen, "Large Language Models as Optimizers," arXiv, 2023.
DOI: https://doi.org/10.48550/arXiv.2309.03409
H. Sun, X. Li, Y. Xu, Y. Homma, Q. Cao, M. Wu, J. Jiao and D. Charles, "AutoHint: Automatic Prompt Optimization with Hint Generation," arXiv, 2023.
DOI: https://doi.org/10.48550/arXiv.2307.07415
Z. Zhang, A. Zhang, M. Li, and A. Smola, “Automatic Chain of Thought Prompting in Large Language Models,” The Eleventh International Conference on Learning Representations, 2023.
DOI: https://doi.org/10.48550/arXiv.2210.03493
W. Xu, A. Banburski-Fahey and N. Jojic, “Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling,” 41st International Conference on Machine Learning, vol. 235, pp. 54852-54865, 2024.
DOI: https://doi.org/10.48550/arXiv.2305.09993
S. Roy and D. Roth, “Solving General Arithmetic Word Problems,” Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1743–1752, 2015.
DOI: https://doi.org/10.18653/v1/D15-1202
R. Pryzant, D. Iter, J. Li, Y.T. Lee, C. Zhu, and M. Zeng, “Automatic Prompt Optimization with ‘Gradient Descent’ and Beam Search,” Conference on Empirical Methods in Natural Language Processing, pp. 7957–7968, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.03495
Y. Ishimizu, J. Li, J. Xu, J. Cai, H. Iba and K. Tei, "Automatic Adaptation Rule Optimization via Large Language Models," 2024 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), pp. 180-181, 2024.
DOI: https://doi.org/10.1109/ACSOS-C63493.2024.00057
R. Anil et al., “PaLM 2 Technical Report,” arXiv preprint arXiv:2305.10403, 2023.
DOI: https://doi.org/10.48550/arXiv.2305.10403
S. Lin and B.W. Kernighan, “An Effective Heuristic Algorithm for the Traveling-Salesman Problem,” Operations Research, vol. 21, pp. 498-516, 1973.
URL: https://www.jstor.org/stable/169020
N. P, S. Eliyas, S. K. M and B. Balusamy, "Individualized Mastery Quest: Crafting Customized Question Papers and Dynamic Hint Generation for Personalized Learning Journeys Using Cutting-Edge Rank-Based Algorithm," 2024 International Conference on Electrical Electronics and Computing Technologies (ICEECT), pp. 1-6, 2024.
DOI: https://doi.org/10.1109/ICEECT61758.2024.10739213
H. Ye et al., “Ontology-enhanced Prompt-tuning for Few-shot Learning,” Proceedings of the ACM Web Conference 2022 (WWW ’22), pp. 778–787, 2022.
DOI: https://doi.org/10.48550/arXiv.2201.11332
O. Palagin, V. Kaverinsky, A. Litvin and K. Malakhov, “OntoChatGPT Information System: Ontology-Driven Structured Prompts for ChatGPT Meta-Learning,” International Journal of Computing, vol. 22, no. 2, pp. 170–183, 2023.
DOI: https://doi.org/10.48550/arXiv.2307.05082
X. Liu, Y. Zheng, Z. Du, M. Ding, Y. Qian, Z. Yang and J. Tang, "GPT understands, too," AI Open, vol. 5, pp. 208-215, 2024.
DOI: https://doi.org/10.1016/j.aiopen.2023.08.012
Y. Hao, Z. Chi, L. Dong, and F. Wei, “Optimizing Prompts for Text-to-Image Generation,” Proceedings of the 36th International Conference on Neural Information Processing Systems, 2022.
DOI: https://doi.org/10.48550/arXiv.2212.09611
M. Too, S. H. Lau, and C. K. Tan, "Validity and reliability of a conceptual framework on enhancing learning for students via Kinect: A pilot test," International Journal on Robotics, Automation and Sciences, vol. 6, no. 1, pp. 59–63, 2024.
DOI: https://doi.org/10.33093/ijoras.2024.6.1.8
T.-E. Tai, S.-C. Haw, W.-E. Kong, and K.-W. Ng, "Performance evaluation of machine learning techniques on resolution time prediction in helpdesk support system," International Journal on Robotics, Automation and Sciences, vol. 6, no. 2, pp. 59–68, 2024.
DOI: https://doi.org/10.33093/ijoras.2024.6.2.9
M. Too and R. Chang, "A fundamental study of an alternative learning framework utilizing natural user interface (NUI) for physically disabled students," International Journal on Robotics, Automation and Sciences, vol. 5, no. 1, pp. 1–5, 2023.