CYBEROFFENDERS OR SYSTEMIC FAILURES? UNDERSTANDING ALGORITHMIC MISCONDUCT
DOI: https://doi.org/10.25215/9358795115.08

Abstract
The widespread deployment of Algorithmic Decision-Making (ADM) systems across critical sectors such as finance, criminal justice, welfare administration, and digital governance has fundamentally altered the nature of cyber harm and liability. This chapter interrogates whether large-scale digital injuries should primarily be attributed to malicious human actors—cyber offenders—or to structural weaknesses embedded within techno-governance systems, described here as systemic failures. It argues that algorithmic misconduct exists along a spectrum shaped by the interaction between intentional cybercrime and unintentional yet foreseeable algorithmic bias, opacity, and regulatory gaps. While cyber offenders increasingly employ artificial intelligence to enhance the scale and sophistication of crimes such as data poisoning, evasion attacks, and deepfake fraud, the most significant risks arise where systemic failures provide conditions of plausible deniability and mass impact. The chapter concludes that addressing algorithmic misconduct requires moving beyond intent-based criminal models toward a framework of algorithmic accountability grounded in design obligations, institutional responsibility, and ethical governance.

Published
2026-01-15
