Meaningful Human Control and Responsibility Gaps in AI: No Culpability Gap, but Accountability and Active Responsibility Gap
Abstract
At the current stage of technological development, the rapid advancement of Artificial Intelligence (AI) has given rise to various ethical concerns. Among these, the notion of a “responsibility gap” has emerged as a prominent issue. Within the scholarly literature, ethicists focus primarily on culpability (or blameworthiness), asking: when the development or use of AI results in morally harmful outcomes, who bears moral responsibility? This article argues that moral responsibility encompasses multiple distinct forms, each fulfilling specific functions within society, especially in the context of AI development and application. Three forms of responsibility are then considered: culpability, accountability, and active responsibility, each carrying unique social and ethical implications. Taking the concept of “meaningful human control” as a foundational framework, the article contends that the culpability gap is not as significant or troubling as existing research often suggests. The more pressing ethical challenges instead concern gaps in accountability and active responsibility. To address these challenges, the article elaborates on the “tracing condition,” a key element of meaningful human control, as a means of mitigating and preventing morally harmful outcomes and the absence of human responsibility in the age of AI.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
Banks, V. A., Plant, K. L., & Stanton, N. A. (2018). Driver error or designer error: Using the
Perceptual Cycle Model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016. Safety Science, 108, 278–285. https://doi.org/10.1016/j.ssci.2017.12.023
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
https://doi.org/10.1016/0005-1098(83)90046-8
Bowie, N. E. (1998). A Kantian theory of meaningful work. Journal of Business Ethics, 17, 1083–1092. https://doi.org/10.1023/A:1006023500585
Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., & Walsh, T. (2017). Ethical considerations in artificial intelligence courses. AI Magazine, 38(2), 22–34. https://doi.org/10.1609/aimag.v38i2.2731
Cavalcante Siebert, L., Lupetti, M. L., Aizenberg, E., Beckers, N., Zgonnikov, A., Veluwenkamp, H., Abbink, D., Giaccardi, E., Houben, G.-J., Jonker, C. M., Van den Hoven, J., Forster, D., & Lagendijk, R. L. (2023). Meaningful human control: Actionable properties for AI system development. AI and Ethics, 3, 241–255. https://doi.org/10.1007/s43681-022-00167-3
Coeckelbergh, M. (2016). Responsibility and the moral phenomenology of using self-driving
cars. Applied Artificial Intelligence, 30, 748–757.
Coeckelbergh, M. (2023). Narrative responsibility and artificial intelligence: How AI
challenges human responsibility and sense-making. AI & Society, 38(6), 2437–2450. https://doi.org/10.1007/s00146-021-01375-x
Cummings, M. (2014). Automation and accountability in decision support system interface
design. The Journal of Technology Studies, 32(1). https://doi.org/10.21061/jots.v32i1.a.4
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18, 299–309. https://doi.org/10.1007/s10676-016-9403-3
Di Nucci, E., & Santoni de Sio, F. (2014). Who’s afraid of robots? Fear of automation and the
ideal of direct control. In Battaglia, F., & Weidenfeld, N. (Eds.), Roboethics in Film. Pisa University Press.
Doorn, N. (2012). Responsibility ascriptions in technology development and engineering:
Three perspectives. Science and Engineering Ethics, 18(1), 69–90. https://doi.org/10.1007/s11948-009-9189-3
Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Fischer, J., & Ravizza, M. (1998). Responsibility and control: A theory of moral
responsibility. Cambridge University Press.
Gardner, J. (2003). The mark of responsibility. Oxford Journal of Legal Studies, 23(2), 157–171. https://doi.org/10.1093/ojls/23.2.157
Goetze, T. (2022). Vicarious responsibility in AI systems. In Proceedings of the 2022 ACM
Conference on Fairness, Accountability, and Transparency (FAccT ’22)
(pp. 390–400). ACM. https://doi.org/10.1145/3531146.3533106
Griffin, T. A., Green, B. P., & Welie, J. V. M. (2024). The ethical wisdom of AI
developers. AI and Ethics. https://doi.org/10.1007/s43681-024-00458-x
Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., & Waldo, J.
(2019). Embedded EthiCS: Integrating ethics across CS education. Communications of the ACM, 62(8), 54–61. https://doi.org/10.1145/3330794
Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22(4), 307–320.
Hardebolle, C., Héder, M., & Ramachandran, V. (2025). Engineering ethics education and
artificial intelligence. In S. Chance, T. Børsen, D. A. Martin, R. Tormey, T. T. Lennerfors, & G. Bombaerts (Eds.), The Routledge International Handbook of Engineering Ethics Education (1st ed., pp. 125–141). Routledge. https://doi.org/10.4324/9781003464259
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies.
Ethics and Information Technology, 11(2), 91–99. https://doi.org/10.1007/s10676-009-9184-z
Himmelreich, J. (2019). Responsibility for killer robots. Ethical Theory and Moral Practice, 22(4), 731–747. https://doi.org/10.1007/s10677-019-10007-9
Himmelreich, J., & Köhler, S. (2022). Responsible AI through conceptual engineering. Philosophy & Technology, 35, 60. https://doi.org/10.1007/s13347-022-00542-2
Hindriks, F., & Veluwenkamp, H. (2023). The risks of autonomous machines: from
responsibility gaps to control gaps. Synthese, 201, 21. https://doi.org/10.1007/s11229-022-04001-5
Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Buckingham
Shum, S., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/s40593-021-00239-1
Köhler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability?
Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In Moral agency and the politics of responsibility (pp. 51–68). Routledge.
Königs, P. (2022). Artificial intelligence and responsibility gaps: What is the problem? Ethics
and Information Technology, 24(36). https://doi.org/10.1007/s10676-022-09643-0
Kopec, M., Magnani, M., Ricks, V., Torosyan, R., Basl, J., Miklaucic, N., Muzny, F.,
Sandler, R., Wilson, C., Wisniewski-Jensen, A., Lundgren, C., Baylon, R., Mills, K., & Wells, M. (2023). The effectiveness of embedded values analysis modules in computer science education: An empirical study. Big Data & Society, 10(1), 20539517231176230. https://doi.org/10.1177/20539517231176230
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of
learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1.
Mecacci, G., & Santoni de Sio, F. (2020). Meaningful human control as reason-responsiveness: The case of dual-mode vehicles. Ethics and Information Technology, 22, 103–115. https://doi.org/10.1007/s10676-019-09519-w
Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24, 1201–1219. https://doi.org/10.1007/s11948-017-9943-x
Pesch, U. (2015). Engineers and active responsibility. Science and Engineering Ethics, 21, 925–939. https://doi.org/10.1007/s11948-014-9571-7
Santoni de Sio, F. (2016). Ethics and self-driving cars: A white paper on responsible innovation in automated driving systems.
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous
systems: A philosophical account. Frontiers in Robotics and AI, 5, Article 15. https://doi.org/10.3389/frobt.2018.00015
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34, 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
Searle, J. R. (2007). Freedom & neurobiology: Reflections on free will, language, and
political power. Columbia University Press.
Segessenmann, J., Stadelmann, T., Davison, A., & Dürr, O. (2025). Assessing deep learning: A work program for the humanities in the age of artificial intelligence. AI and Ethics, 5, –32. https://doi.org/10.1007/s43681-023-00408-z
Soltanzadeh, S. (2025). A metaphysical account of agency for technology governance. AI & Society, 40, 1723–1734. https://doi.org/10.1007/s00146-024-01941-z
Strawson, P. F. (1974). Freedom and resentment. In P. F. Strawson (Ed.), Freedom and
resentment and other essays (pp. 1–25). Methuen.
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77.
https://doi.org/10.1111/j.1468-5930.2007.00346.x
Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for
K-12: What should every child know about AI? Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9795–9799. https://doi.org/10.1609/aaai.v33i01.33019795
Tigard, D. W. (2021). There is no techno-responsibility gap. Philosophy & Technology, 34, 589–607. https://doi.org/10.1007/s13347-020-00414-7
Unruh, G. C. (2000). Understanding carbon lock-in. Energy Policy, 28(12), 817–830.
https://doi.org/10.1016/S0301-4215(00)00070-7
Vakkuri, V., & Kemell, K.-K. (2019). Implementing AI ethics in practice: An empirical
evaluation of the RESOLVEDD strategy. Springer International Publishing. https://doi.org/10.1007/978-3-030-35151-7_21
Van de Poel, I., & Sand, M. (2021). Varieties of responsibility: two problems of responsible
innovation. Synthese, 198(Suppl 19), 4769–4787. https://doi.org/10.1007/s11229-018-01951-7
Van den Hoven, J. (2013). Value-sensitive design and responsible innovation. In R. Owen, J.
Bessant, & M. Heintz (Eds.), Responsible innovation (pp. 75–83). Wiley. https://doi.org/10.1002/9781118551424.ch4
Van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The
problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67. https://doi.org/10.1007/s11948-011-9276-0
Williams, G. (2008). Responsibility as a virtue. Ethical Theory and Moral Practice, 11(4), 455–470. https://doi.org/10.1007/s10677-008-9109-7
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Algorithmic decision-making and
the control problem. Minds and Machines, 29(4), 555–578. https://doi.org/10.1007/s11023-019-09513-7