FACTORS INFLUENCING THE USAGE BEHAVIOR OF LARGE LANGUAGE MODELS TO SUPPORT LEARNING AMONG UNDERGRADUATE STUDENTS IN BANGKOK, THAILAND
Abstract
This study investigates the causal factors that influence university students’ intention to use, and actual usage of, large language models (LLMs) as learning support tools in Bangkok, Thailand. The research integrates three prominent frameworks, namely the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT), into a single comprehensive model. Data were collected from 400 undergraduate students in Bangkok, Thailand, selected through stratified random sampling. A structured questionnaire using Likert-type rating scales was administered, and the data were analyzed using descriptive statistics, multiple regression analysis, and analysis of variance. The findings indicate that perceived usefulness, perceived ease of use, social influence, perceived behavioral control, and hedonic motivation have significant positive effects on behavioral intention to use LLMs, with perceived usefulness emerging as the strongest predictor. Actual usage behavior is determined mainly by facilitating conditions and habit, with habit exerting the greatest predictive power. Ethical awareness functions as a negative moderator that weakens the relationship between perceived usefulness and behavioral intention, whereas artificial intelligence knowledge serves as a positive moderator that strengthens the relationship between perceived ease of use and behavioral intention. The results also show significant differences across years of study: third- and fourth-year students report more intensive LLM use than first- and second-year students. Overall, the study suggests that LLMs have become embedded in undergraduate learning routines. It recommends that higher education institutions develop policies, curricula, and support systems that foster digital intelligence, improve facilitating conditions, and provide guidance for responsible and ethical LLM use, protecting academic integrity while maximizing educational benefits.
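As a rough illustration of the kind of analysis the abstract describes, the sketch below fits a moderated multiple regression of behavioral intention on the named predictors, with interaction terms for the two moderators. It is a minimal example under stated assumptions: the file name survey_responses.csv and the column names (PU, PEOU, SI, PBC, HM, EA, AIK, BI) are hypothetical composites invented for illustration and are not the authors' actual variables or analysis script.

```python
# Hypothetical sketch of a moderated multiple regression, not the study's own code.
# Assumes per-respondent Likert composites with illustrative column names:
# PU (perceived usefulness), PEOU (perceived ease of use), SI (social influence),
# PBC (perceived behavioral control), HM (hedonic motivation),
# EA (ethical awareness), AIK (AI knowledge), BI (behavioral intention).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

# Mean-center predictors and moderators before forming interaction terms,
# a common step in moderation analysis to ease interpretation of main effects.
for col in ["PU", "PEOU", "SI", "PBC", "HM", "EA", "AIK"]:
    df[col] = df[col] - df[col].mean()

# PU:EA and PEOU:AIK are the interaction (moderation) terms.
model = smf.ols(
    "BI ~ PU + PEOU + SI + PBC + HM + EA + AIK + PU:EA + PEOU:AIK",
    data=df,
).fit()
print(model.summary())
```

In a model of this form, a significant negative coefficient on PU:EA would correspond to the reported weakening effect of ethical awareness on the perceived usefulness to intention path, and a significant positive coefficient on PEOU:AIK to the strengthening effect of AI knowledge on the ease of use to intention path.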
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
บัวพรรณ คำเฉลา et al. (2025). Using the ChatGPT artificial intelligence tool to promote participatory learning in a computer programming course. วารสารกว๊านพะเยา [Kwan Phayao Journal], 2(4), 1-16. (in Thai)
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 1-15.
Cochran, W. G. (1977). Sampling techniques. (3rd ed.). New York: John Wiley & Sons.
Cotton, D. R. E. et al. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Grassini, S. et al. (2024). Understanding university students’ acceptance of ChatGPT: Insights from the UTAUT2 model. Applied Artificial Intelligence. https://doi.org/10.1080/08839514.2024.2371168
Hair, J. F. et al. (2019). Multivariate data analysis. (8th ed.). Boston: Cengage Learning.
Lemke, C. et al. (2023). Exploring the student perspective: Assessing technology readiness and acceptance for adopting large language models in higher education. In 22nd European Conference on e-Learning: ECEL 2023. United Kingdom: Academic Conferences and Publishing Limited.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 1-55.
McKinsey & Company. (2024). The state of AI in 2023: Generative AI’s breakout year. Retrieved November 27, 2025, from https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
Ministry of Higher Education, Science, Research and Innovation. (2024). Minister Supamas announces "MHESI for AI" policy. Retrieved November 27, 2025, from https://www.mhesi.go.th/index.php/en/news-and-announce-all/news-all/106-minister-supamas/10698-ai-ai-university-education-6-0-90-ai.html
National Science and Technology Development Agency. (2022). Thailand National AI Strategy and Action Plan (2022-2027). Retrieved November 27, 2025, from https://www.nectec.or.th/en/about/news/cabinet-national-ai-strategy.html
Rovinelli, R. J. & Hambleton, R. K. (1977). On the use of content specialists in the assessment of criterion-referenced test item validity. Dutch Journal of Educational Research, 2(2), 49-60.
Stanford Institute for Human-Centered AI. (2025). The AI Index 2025 Annual Report. Retrieved November 27, 2025, from https://hai.stanford.edu/ai-index/2025-ai-index-report
UNESCO. (2023). Guidance for generative AI in education and research. Retrieved November 27, 2025, from https://unesdoc.unesco.org/ark:/48223/pf0000386693
Venkatesh, V. et al. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178.
Yamane, T. (1967). Statistics: An introductory analysis. (2nd ed.). New York: Harper and Row.