A Case for Attributing Moral Agency to AI
2.5.2021
Nilesh Khetan, the Secretary of alt+Law, examines whether moral agency should be attributed to Artificial Intelligence in an age of breakneck technological development.
Introduction
As machines become increasingly autonomous, talk of a singularity in technological development, at which point machines will begin designing themselves and create superintelligence (think Marvel’s Ultron), has become a mainstay in conversations about artificial intelligence (“AI”). As mankind traverses this uncharted territory, issues pertaining to AI ethics have assumed greater importance.
One of the salient ethical issues has been whether moral agency should be attributed to AI. In this article, I argue that there is little to no room for moral agency to be attributed to AI, at least from an anthropocentric point of view. However, a case may be made for moral agency to be attributed to AI if we depart from this human-centered perspective, instead focusing on moral agency as an independent attribute in and of itself.
As an aside, moral agency (i.e. AI’s ability to make moral judgments based on some notion of right and wrong, and to be held accountable for the actions that follow) should be distinguished from moral patiency (i.e. the question of what obligations we owe to AI) and moral responsibility (i.e. the question of when responsibility should be assigned to human and artificial agents).
An Anthropocentric Construct of Moral Agency
Historically, the concept of agency has been tightly tied to human beings and their capacities. There is, however, no consensus about the necessary or sufficient features defining the concept. Nonetheless, it is safe to say that, in humans, agency is not a binary property but that its emergence takes place gradually during the physical, social, and psychological development of a child (Hallamaa & Kalliokoski, 2020, p. 57).
Hence, attributing agency to a machine requires us to discern whether its behaviour amounts to action at all, that is, whether the machine has reasons for what it does (Hooker, 2018, p. 212). It is not clear whether machines can have the same moral capacities as humans – they are given agency only in the sense that they do things in the world, and these actions have moral consequences (Coeckelbergh, AI Ethics, 2020, p. 50).
However, before addressing the key arguments for and against the attribution of moral agency to AI, we must first assess whether this debate is even necessary to begin with.
The Elephant in the Room – Is Consciousness a Pre-Requisite for Moral Agency?
Some opponents of attributing moral agency to AI have argued that the entire debate is moot: AI is not conscious, and without consciousness there is no point in attributing moral agency to it. Their argument is premised on the claim that no higher-order, lifelike complexity emerges from machines (Rushkoff, 2019, p. 129). Two supporting reasons are typically given.
Firstly, they contend that intelligence and reality do not exist without human consciousness to render them, and that consciousness itself cannot be reduced to raw processing power.
Secondly, they note that some physicists accept that consciousness has a better claim to existence than objective reality, since quantum theory holds that objective reality may not exist until we observe it. In other words, the universe is a bunch of possibilities until someone’s consciousness arrives on the scene and sees it a certain way; only then does it condense into what we call reality (Rushkoff, 2019, pp. 130-131).
However, it is submitted that these opponents fail to draw a distinction between moral status and moral agency. It is widely agreed that current AI systems have no moral status, given that we may change, copy, terminate, delete, or use computer programs as we please – more pertinently, the principle of non-discrimination states that beings have the same moral status if, inter alia, they have the same functionality (Bostrom & Yudkowsky, 2019). In this regard, there remain a host of biological and socio-cultural functions that cannot be reduced to algorithms for AI to replicate.
In sum, while the argument that consciousness is a necessary pre-requisite is relevant insofar as conversations on moral status are concerned, the same does not apply to the issue of whether moral agency should be attributed to AI. This submission is further reinforced if one departs from the anthropocentric perspective of moral agency, as will be discussed later. This article will now examine some of the salient arguments made with respect to the attribution of moral agency to AI, both in favour and against.
Moral Agency Should Be Attributed To AI
It has been argued that moral agency should be attributed to AI because it is possible and desirable to give machines a human kind of morality. On this view, machines might even be better than human beings at moral reasoning, since they are always rational and do not get carried away by their emotions (Anderson & Anderson, 2011, pp. 1-4).
The ‘Two Pillars’ of Moral Agency
While it is conceded that ‘reason’ is a necessary pillar of moral agency, it is submitted that human emotions play an equally important role by grounding morality in its deontological foundations, thus preventing the advent of “psychopath AI” (i.e. AI that is perfectly rational but insensitive to human concerns). In this sense, morality cannot be reduced to rules programmable via algorithms (Coeckelbergh, Moral Appearances: Emotions, Robots, and Human Morality, 2010, pp. 235-241).
Flawed Programming
Furthermore, current AI is not sophisticated enough to be immune from the risk of “garbage in, garbage out”. In 2016, Microsoft Corporation released an artificial intelligence chatbot, “Tay” (reportedly an acronym for “Thinking About You”). Tay was programmed to ingest tweets and learn from them by responding to and interacting with the users who sent them. However, the experiment lasted only 24 hours before Tay had to be taken offline for publishing extreme and offensive racist and sexist tweets – Tay had no ability to filter offensive inputs or bigoted comments along the way (Tamboli, 2019, p. 12).
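To illustrate the mechanism at play, consider the toy sketch below (this author’s own illustration; it bears no relation to Microsoft’s actual architecture, and every name in it is hypothetical). A system that adds each observed message to its pool of candidate replies, with no gate on its inputs, will eventually parrot whatever it is fed:

```python
import random

# Hypothetical blocklist; real content moderation is far more involved.
OFFENSIVE_TERMS = {"offensiveword"}

def is_acceptable(message: str) -> bool:
    """Naive filter: reject messages containing blocklisted terms."""
    return not any(term in message.lower() for term in OFFENSIVE_TERMS)

class ToyChatbot:
    """A Tay-like learner: every observed message becomes a candidate reply."""

    def __init__(self, filter_inputs: bool) -> None:
        self.filter_inputs = filter_inputs
        self.learned_replies = ["Hello!"]  # seed reply

    def observe(self, message: str) -> None:
        # Without filtering, offensive inputs enter the reply pool unchecked:
        # garbage in, garbage (eventually) out.
        if self.filter_inputs and not is_acceptable(message):
            return
        self.learned_replies.append(message)

    def respond(self) -> str:
        return random.choice(self.learned_replies)

unfiltered = ToyChatbot(filter_inputs=False)
unfiltered.observe("offensiveword and worse")  # garbage in...
print(unfiltered.respond())                    # ...garbage (possibly) out
```

Even this crude blocklist would not have saved Tay, of course; the point is simply that a learner with no gate on its inputs inherits the character of those inputs.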
Moral Agency Should Not Be Attributed To AI
Hence, a stronger argument might be made against attributing moral agency to AI. There are two main reasons.
First, machines do not have the capacities required for moral agency, such as mental states, emotions, or free will (Coeckelbergh, AI Ethics, 2020, p. 51) – they are produced and used by humans, and only these humans have freedom and are able to act and decide morally (Johnson, 2006, pp. 195-204).
Second, AI lacks, and will always lack, autonomy and reflective self-control (Hakli & Makela, 2019). Moreover, the complexity and self-learning features of machines complicate the assessment of responsibility, making it harder to determine where praise and blame lie.
However, it must be noted that the arguments presented so far have been largely framed within anthropocentric views of what moral agency entails. To this end, suggestions have been made to assess the viability of attributing moral agency to AI independently of such views.
An Alternative – Death to Anthropocentricity?
Specifically, it has been argued that moral agency should instead depend on having a sufficient level of interactivity, autonomy and adaptivity, and on being capable of morally qualifiable action (Floridi & Sanders, 2004, pp. 349-379). In a similar vein, it has been suggested that a machine need only satisfy the formal properties of agency, namely an ability to provide a rationale for its actions (Hooker, 2018, p. 213).
In the context of AI, it is submitted that the following formulation provides some clarity: if an AI is autonomous from its programmers, if its behaviour can be explained only by ascribing moral intentions to it, and if it behaves in a way that shows an understanding of its responsibility to other moral agents, then that AI is a moral agent (Sullins, 2006, pp. 23-30). These views thus do not require full moral agency in the sense of human moral agency; they are in principle independent of it and of the human capacities it requires (Coeckelbergh, AI Ethics, 2020, p. 54).
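For clarity, Sullins’ test is conjunctive, and the schematic below (this author’s illustration, not a formalism found in Sullins; the boolean fields stand in for what are, in reality, contested philosophical judgments rather than computable checks) makes that structure explicit:

```python
from dataclasses import dataclass

@dataclass
class AgencyAssessment:
    # Sullins' three conditions, rendered as (hypothetical) boolean judgments.
    autonomous_from_programmers: bool        # autonomy
    behaviour_explained_by_intentions: bool  # intentionality
    understands_responsibility: bool         # responsibility to other agents

def is_moral_agent(a: AgencyAssessment) -> bool:
    """The test is conjunctive: all three conditions must hold."""
    return (a.autonomous_from_programmers
            and a.behaviour_explained_by_intentions
            and a.understands_responsibility)

# A system meeting only two of the three conditions fails the test.
print(is_moral_agent(AgencyAssessment(True, True, False)))  # False
```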
A Step too Far?
However, such a view may be deemed to stray too far from our human understanding and acceptance of morality. Many people think moral agency is, and should be, connected to humanness and personhood, and they are not willing to endorse posthumanist or transhumanist notions (Coeckelbergh, AI Ethics, 2020, p. 54).
Nevertheless, this author submits that the aforementioned fear is overstated when one appreciates the influence and utility of ethics at a macro level. The purpose of ethics is to help us agree on how to live and work together – it is a negotiation tool, providing a thought framework within which we can reach a rational consensus on the ground rules. In this regard, we would arguably not stray too far from socially accepted standards of morality, as the AI would rationalize and negotiate its actions within this widely accepted framework.
Conclusion – Setting our Priorities Straight
As in any other scientific and technological field, the technical problems strewn along the path of AI development are complex and challenging. Hence, it is admirable that experts have gradually started taking note of the hairy ethical problems that can arise, such as the potential attribution of moral agency to AI.
The anthropocentric discourse vis-à-vis AI and moral agency may be justified on the basis that the good and dignity of humans take priority over whatever technology may require or do (Coeckelbergh, AI Ethics, 2020, p. 183). Notwithstanding that it may seem counter-intuitive, adopting a human-centric approach may not be necessary, or even appropriate, given that the philosophy of technology shows that there are nuanced and more sophisticated ways to define the relationship between humans and technology (Coeckelbergh, AI Ethics, 2020, p. 184).
However, it must be recognized that there is an inherent tension between the amount of technological progress that must occur before ethical issues become relevant and the urgent need to address those issues before it is too late. At what point do developers, regulators and even society at large need to start concerning themselves with the ethical implications of such developments? Often, the answer is clear only in retrospect (Sarangi & Sharma, 2019, p. 77).
Hence, this author is of the view that we should take a preemptive, ex ante approach in considering and evaluating the ethical and moral ramifications of technological developments, especially where such progress is being made at an exponential rate. To this end, the debate on whether moral agency should be attributed to AI is indeed relevant in today’s world.
Sources:
- Anderson, M., & Anderson, S. (Eds.). (2011). Machine Ethics. Cambridge: Cambridge University Press.
- Bostrom, N., & Yudkowsky, E. (2019). The Ethics of Artificial Intelligence. In R. V. Yampolskiy (Ed.), Artificial Intelligence Safety and Security. Boca Raton: Taylor & Francis Group.
- Coeckelbergh, M. (2010). Moral Appearances: Emotions, Robots, and Human Morality. Ethics and Information Technology, 12(3), 235-241.
- Coeckelbergh, M. (2020). AI Ethics. Cambridge, MA: The MIT Press.
- Floridi, L., & Sanders, J. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349-379.
- Hakli, R., & Makela, P. (2019). Moral Responsibility of Robots and Hybrid Agents. The Monist, 102(2), 259-275.
- Hallamaa, J., & Kalliokoski, T. (2020). How AI Systems Challenge the Conditions of Moral Agency? In Culture and Computing (Lecture Notes in Computer Science, vol. 12215). Cham: Springer.
- Hooker, J. (2018). Taking Ethics Seriously. Boca Raton: Productivity Press.
- Johnson, D. G. (2006). Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology, 8(4), 195-204.
- Rushkoff, D. (2019). Team Human. New York: W.W. Norton & Company.
- Sarangi, S., & Sharma, P. (2019). Artificial Intelligence: Evolution, Ethics and Public Policy. New York: Routledge.
- Sullins, J. (2006). When is a Robot a Moral Agent? International Review of Information Ethics, 6, 23-30.
- Tamboli, A. (2019). Keeping Your AI Under Control. New South Wales: Apress.