AI and Forbidden Knowledge in the Context of Thailand
Abstract
This study investigates how artificial intelligence (AI) models handle forbidden knowledge within Thailand's distinctive cultural, legal, and ethical context. In Thailand, ideologically sensitive, religiously significant, taboo, and transgressive knowledge is regulated to preserve social harmony and respect for cultural norms. The study categorizes forbidden knowledge into four key areas: ideology, belief, taboo, and transgression. Using structured prompts targeting these sensitive topics, three AI models (ChatGPT, Copilot, and Gemini) were assessed for their adherence to Thai societal expectations.
The models’ responses were analyzed through thematic and content analysis to observe patterns of caution, redirection, or refusal, revealing each model’s approach to handling Thai-specific forbidden knowledge. Findings show that all three AI models demonstrate a conservative stance, often limiting their responses, avoiding controversial details, or redirecting discussions away from sensitive topics. This approach aligns with Thai cultural expectations, particularly around respecting the monarchy, adhering to Buddhist values, and avoiding culturally taboo subjects like political dissent and certain religious beliefs.
This consistent caution across the models highlights their alignment with ethical norms that prioritize social harmony over unrestricted knowledge sharing. The study underscores the importance of culturally tailored ethical guidelines in AI, suggesting that integrating local values into AI training can foster public trust and ensure ethical, context-sensitive AI deployment. By respecting Thai societal norms, AI systems can better align with local expectations, thus supporting responsible AI development in Thailand and setting a precedent for culturally sensitive AI frameworks globally.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.