WHO IS LIABLE FOR ARTIFICIAL INTELLIGENCE? LEGAL RESPONSIBILITY IN AUTOMATED DECISION-MAKING
Abstract
The rapid development and widespread adoption of artificial intelligence (AI) technologies have significantly transformed modern social, economic, and legal relations. Automated decision-making systems are increasingly deployed in finance, healthcare, public administration, and law enforcement, raising complex questions about legal liability for harm caused by AI systems. Traditional legal concepts of fault, causation, and responsibility often prove insufficient when applied to autonomous or semi-autonomous technologies. This article examines the key liability issues arising from the use of artificial intelligence, with particular attention to identifying the responsible subjects: developers, operators, and users. Drawing on comparative legal analysis and formal-legal methods, the study surveys existing regulatory approaches and highlights gaps in current legal frameworks. The findings show that the absence of clear liability rules undermines both the effective protection of individual rights and legal certainty. The article concludes by proposing conceptual approaches for improving the legal regulation of AI-related liability in the context of automated decision-making.