Journal of Integrative and Innovative Humanities https://so07.tci-thaijo.org/index.php/DJIIH <p><strong><em>Journal of Integrative and Innovative Humanities</em></strong> aims to promote the importance of interdisciplinary studies and the coalescence between the humanities and other areas such as science (natural, social, or applied), economics, and business administration. The journal publishes interdisciplinary papers, bridging the gap between the humanities and other disciplines, and emphasizing the critical role of the humanities in the discussion and innovation of any field of study. Papers are double-blind reviewed by at least two reviewers and are selected on the basis of their quality, originality, soundness of argument, and contribution. The journal is open-access and publishes two issues each year, in <em>May</em> and <em>November</em>.</p> en-US pasoot.lasuka@cmu.ac.th (Assistant Professor Dr. Pasoot Lasuka) jiih-human@cmu.ac.th (Nattakarn Sanit-in) Tue, 27 May 2025 17:19:32 +0700 OJS 3.3.0.8 http://blogs.law.harvard.edu/tech/rss 60
Restraining Cultural Stereotyping in Computational Linguistics through Computational Ethics https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6286 <p>Computational linguistics is one of the achievements of science and technology in the 21st century. It enables machines to understand, analyze, and process human language with the aid of algorithms. Computational linguistics can inadvertently perpetuate cultural stereotypes if such biases are not carefully considered in the development of language-processing algorithms and models. It is important for computational linguists to be aware of the potential biases in their work and to strive to create inclusive and culturally sensitive tools and resources. By using computational ethics to promote diversity and inclusivity in computational linguistics, we can help mitigate the impact of cultural stereotypes and contribute to a more equitable and respectful society. Using a philosophical method of analysis, this study finds that cultural stereotypes can result from the misrepresentation and misunderstanding of cultural nuances, privacy violations, and other factors. How can these moral issues be addressed? The study concludes that the implementation of computational ethics in the development of algorithms that recognize linguistic diversity can promote fairness, transparency, and respect for human rights.</p> Ikechukwu Monday Osebor, Carol C. Ohen Copyright (c) 2025 Ikechukwu Monday Osebor, Carol C. Ohen https://creativecommons.org/licenses/by-nc-nd/4.0 https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6286 Tue, 27 May 2025 00:00:00 +0700
Editorial Article https://so07.tci-thaijo.org/index.php/DJIIH/article/view/7962 Copyright (c) 2025 https://creativecommons.org/licenses/by-nc-nd/4.0 https://so07.tci-thaijo.org/index.php/DJIIH/article/view/7962 Tue, 27 May 2025 00:00:00 +0700
AI and Forbidden Knowledge in the Context of Thailand https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6201 <p>This study investigates how artificial intelligence (AI) models address forbidden knowledge within Thailand’s distinctive cultural, legal, and ethical context. In Thailand, ideologically sensitive, religiously significant, taboo, and transgressive knowledge is regulated to preserve social harmony and respect for cultural norms. The study categorizes forbidden knowledge into four key areas: ideology, belief, taboo, and transgression.
Using structured prompts targeting these sensitive topics, three AI models (ChatGPT, Copilot, and Gemini) were assessed to determine their adherence to Thai societal expectations. <br />The models’ responses were analyzed through thematic and content analysis to observe patterns of caution, redirection, or refusal, revealing each model’s approach to handling Thai-specific forbidden knowledge. Findings show that all three AI models demonstrate a conservative stance, often limiting their responses, avoiding controversial details, or redirecting discussions away from sensitive topics. This approach aligns with Thai cultural expectations, particularly around respecting the monarchy, adhering to Buddhist values, and avoiding culturally taboo subjects such as political dissent and certain religious beliefs. <br />This consistent caution across the models highlights their alignment with ethical norms that prioritize social harmony over unrestricted knowledge sharing. The study underscores the importance of culturally tailored ethical guidelines in AI, suggesting that integrating local values into AI training can foster public trust and ensure ethical, context-sensitive AI deployment. By respecting Thai societal norms, AI systems can better align with local expectations, thus supporting responsible AI development in Thailand and setting a precedent for culturally sensitive AI frameworks globally.</p> Chananya Prasartthai Copyright (c) 2025 Chananya Prasartthai https://creativecommons.org/licenses/by-nc-nd/4.0 https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6201 Tue, 27 May 2025 00:00:00 +0700
AI Ethics: Should you trust AI with your medical diagnosis? https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6207 <p>The use of AI in the medical field leads to one main ethical question: should you trust AI with your medical diagnosis? To answer this question, one must understand how AI makes decisions, which involves two main problems: the black box and validation data. The first is a problem of transparency, in which we cannot see how AI reaches its decisions. The second is the problem of how to set the validation data for training AI. This paper aims to analyze both problems and clarify how AI makes decisions in medical diagnoses. I will then show that AI used for medical image processing is one model that does not face these problems, since it can provide evidence for its diagnosis. Next, I will address the question of whether you should trust AI in medical diagnoses, and I will show that the answer depends on comparing the functions of humans and AI.</p> Weerawut Rainmanee Copyright (c) 2025 Weerawut Rainmanee https://creativecommons.org/licenses/by-nc-nd/4.0 https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6207 Tue, 27 May 2025 00:00:00 +0700
Meaningful Human Control and Responsibility Gaps in AI: No Culpability Gap, but Accountability and Active Responsibility Gap https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6206 <p>At the current stage of technological development, the rapid advancement of Artificial Intelligence (AI) has given rise to various ethical concerns. Among these, the notion of a “Responsibility Gap” has emerged as a prominent issue. Within the scholarly literature, ethicists primarily focus on culpability (or blameworthiness). The central question is: when the development or use of AI results in morally harmful outcomes, who bears moral responsibility?
This article argues that moral responsibility encompasses multiple distinct forms, each fulfilling specific functions within society, especially in the context of AI development and application. Three forms of responsibility are then considered: culpability, accountability, and active responsibility. Each carries unique social and ethical implications. Drawing on the concept of “meaningful human control,” which serves as a foundational framework, this article contends that the gap in culpability is not as significant or troubling as often suggested in existing research. Instead, the more pressing ethical challenges are associated with gaps in accountability and active responsibility. To address these challenges, this article elaborates on the “tracing condition,” a key element of meaningful human control, as a way to mitigate and prevent morally harmful outcomes and the absence of human responsibility in the age of AI.</p> Tatdanai Khomkhunsorn Copyright (c) 2025 Tatdanai Khomkhunsorn https://creativecommons.org/licenses/by-nc-nd/4.0 https://so07.tci-thaijo.org/index.php/DJIIH/article/view/6206 Tue, 27 May 2025 00:00:00 +0700