THE LIMITS OF INTENT, ARTIFICIAL INTELLIGENCE, AND THE COLLAPSE OF TRADITIONAL LIABILITY
DOI: https://doi.org/10.25215/9358795115.05

Abstract
The exponential rise of artificial intelligence (AI) across Indian industry and governance has ushered in transformative opportunities as well as unprecedented legal challenges, fundamentally disrupting conventional frameworks built upon the concepts of intent, fault, and human culpability. Traditional doctrines of liability—rooted in the assumption of personal, conscious action—are increasingly strained by the autonomous and opaque operations of modern AI systems, which can cause unintended harm without direct human oversight or predictable agency. In this landscape, the legal attribution of responsibility becomes profoundly complex: existing Indian statutes such as the Information Technology Act, 2000, the Indian Penal Code, 1860, and the Consumer Protection Act, 2019, along with foundational principles of contract and tort law, all presuppose a human actor capable of forming intent or acting negligently—a criterion that AI systems inherently do not satisfy. This paper critically examines the limitations of intent-based liability for AI within the Indian legal context, analyzing how rapid technological advancement exposes gaps and ambiguities in current doctrinal and policy frameworks. A comparative study shows that other jurisdictions are actively developing new liability regimes—such as the EU AI Act and sectoral models in the US—that may inform India's future path toward balancing accountability and innovation. In addition, Indian scholarship and judicial discourse underline the urgent need to move beyond fragmented remedies toward comprehensive regulation that addresses the autonomous, data-driven nature of AI decision-making. The research highlights pressing issues including algorithmic bias, lack of transparency, joint or chain liability, and insufficient victim protection, pointing not only to the doctrinal collapse of classic responsibility but also to the necessity of policy reform and sector-specific standards.
In conclusion, this paper advocates the creation of AI-specific liability statutes, the establishment of an independent regulatory authority, procedural innovations such as mandatory audits for high-risk AI systems, and a contextual, multi-tiered approach combining strict liability with ethical oversight. These recommendations seek to balance innovation and public trust, ensuring that India's evolving digital ecosystem is governed by fair, equitable, and effective legal remedies for emerging risks.

Published
2026-01-15
