AUTONOMOUS SYSTEMS AND CRIMINAL ACCOUNTABILITY: BEYOND THE HUMAN ACTOR

Authors

  • Shefali Mahendru, Mandeep Kaur Mann

DOI:

https://doi.org/10.25215/9358795115.12

Abstract

The emergence of autonomous systems has profoundly challenged the foundations of criminal law, which is built on the principles of actus reus (guilty act) and mens rea (guilty mind), both of which presume a conscious human actor. Artificial intelligence technologies such as self-driving vehicles, drones, and algorithmic decision-makers increasingly perform independent actions that can result in harm without direct human intent. This raises a critical question: who should be held criminally accountable, the programmer, the manufacturer, the operator, or the autonomous system itself? The paper examines this accountability gap by reassessing traditional legal doctrines of intention, causation, and responsibility in light of machine autonomy. It explores how jurisdictions such as the European Union and the United States are addressing AI accountability through frameworks including the EU Artificial Intelligence Act and existing negligence doctrines, while India remains in a regulatory vacuum. The discussion evaluates theoretical models including algorithmic mens rea, hybrid liability, and electronic personhood, weighing their moral and jurisprudential implications. Rejecting the notion of AI personhood, the paper advocates a re-conceptualisation of criminal liability grounded in distributed human oversight and systemic responsibility. It proposes legislative reform that integrates ethical design, algorithmic transparency, and anticipatory governance. Ultimately, it argues that the future of criminal accountability lies not in transferring blame to machines but in strengthening human responsibility and regulatory mechanisms to ensure justice in an era where decision-making increasingly transcends human control.

Published

2026-01-15