Personhood of autonomous systems: Morality
This is the fourth article in our series on the personhood of autonomous systems. We began the discussion by exploring the concept of autonomy from multiple perspectives: legal, philosophical, and technological. The second and third articles elaborated on Kant’s concept of autonomy and on perceived autonomy in the computer science domain. In this article, we look at morality in the context of autonomous systems.
Types of moral reasoning: Universal and Particular
One of the biggest questions autonomous systems will face is how they will perform moral reasoning in order to act ethically. This raises a follow-up question: should there be a framework that defines universal moral reasoning for all possible autonomous systems? In ethical theory, moral reasoning is universal when, whenever an agent is morally bound to perform an action, there is a reason for performing it that holds for every agent in the same situation. Kant’s theory implies that such universal reasoning is possible.
Particular moral reasoning, on the other hand, does not justify moral obligations by appeal to universal principles. It looks at individual instances from which the rules of morality emerge, and those rules may differ from one instance to another.
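To make the distinction concrete, here is a minimal sketch in Python. Everything in it, the Situation type, the two reasoner classes, the rules and precedents, is a hypothetical illustration of our own, not a real ethics framework or library. The point it shows: the universal reasoner’s verdict never varies with context, while the particular reasoner may judge the same action differently from one instance to the next.

```python
# A minimal sketch contrasting universal and particular moral reasoning.
# All names, rules, and precedents here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass(frozen=True)
class Situation:
    """A toy description of a morally relevant situation."""
    action: str
    context: str  # e.g. "wartime", "peacetime"


class UniversalReasoner:
    """Universal reasoning: one rule per action, applied in every context."""

    FORBIDDEN = {"deceive_human", "violate_privacy"}

    def permitted(self, s: Situation) -> bool:
        # The verdict depends only on the action, never on the context.
        return s.action not in self.FORBIDDEN


class ParticularReasoner:
    """Particular reasoning: the verdict is read off individual cases,
    so the same action may be judged differently in different contexts."""

    def __init__(self, precedents):
        # Precedents: individually judged Situation -> verdict pairs.
        self.precedents = precedents

    def permitted(self, s: Situation) -> bool:
        # No universal rule: unseen cases default to permitted here,
        # which is itself an arbitrary design choice.
        return self.precedents.get(s, True)


universal = UniversalReasoner()
particular = ParticularReasoner({
    Situation("deceive_human", "wartime"): True,    # judged case by case
    Situation("deceive_human", "peacetime"): False,
})

for ctx in ("wartime", "peacetime"):
    s = Situation("deceive_human", ctx)
    print(ctx, universal.permitted(s), particular.permitted(s))
# wartime False True   -> the universal verdict never varies; the particular one does
# peacetime False False
```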
Human moral reasoning is a combination of judgement, experience, emotion, and comprehension, and it depends on cultural, religious, political, and societal factors. History offers many examples of human moral reasoning tackling moral dilemmas by balancing moral values against societal interests. It is reasonable to believe that autonomous systems will act as companions to human beings, and, in that capacity, that they will respect human privacy.
A system that fails this expectation directly threatens privacy, human autonomy, and dignity. In other words, we need to design autonomous systems so that they cannot deceive, manipulate, or trick humans into doing something they would not do in the regular course of their lives. Free will and rational thinking are essential to a civilised society.
Is it possible to find common ground?
The Universal Declaration of Human Rights (1948) can be considered common ground for universal moral reasoning: it sets out basic human rights that are indivisible, universal, and inviolable. Particular moral reasoning, on the other hand, would narrow the factors relevant to reasoning to those within the technological capabilities of a given autonomous system.
The working of autonomous weapons offers an instructive example. Suppose there exists an autonomous weapon that can only target and destroy buildings. This system need not consider factors such as a building’s appearance, the intention behind its use, or retaliation by human soldiers. Now give this system the capability to target human soldiers. It must then perform moral reasoning and comply with the principles of distinction, proportionality, and the avoidance of unnecessary suffering.
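The sketch below illustrates how widening a system’s capabilities widens its moral-reasoning obligations. The predicate names, fields, and thresholds are illustrative assumptions of our own, not a real targeting doctrine or any actual weapon-control software; the narrow system needs only a type check, while the extended one must encode all three principles.

```python
# A hypothetical decision gate: extending capability from buildings to
# humans forces additional moral checks. All names and thresholds here
# are illustrative assumptions, not a real targeting doctrine.

from dataclasses import dataclass


@dataclass
class Target:
    kind: str                      # "building" or "human"
    is_combatant: bool = False     # distinction: combatant vs civilian
    expected_harm: float = 0.0     # proportionality input (arbitrary units)
    military_value: float = 0.0    # proportionality input (arbitrary units)
    causes_superfluous_injury: bool = False  # unnecessary-suffering flag


def may_engage_buildings_only(target: Target) -> bool:
    # The narrow system: no moral reasoning beyond a type check.
    return target.kind == "building"


def may_engage_with_human_targets(target: Target) -> bool:
    if target.kind == "building":
        return True
    # Targeting humans triggers three additional checks, mirroring
    # distinction, proportionality, and unnecessary suffering.
    if not target.is_combatant:                       # distinction
        return False
    if target.expected_harm > target.military_value:  # proportionality
        return False
    if target.causes_superfluous_injury:              # unnecessary suffering
        return False
    return True


civilian = Target(kind="human", is_combatant=False)
print(may_engage_buildings_only(civilian))      # False: out of scope entirely
print(may_engage_with_human_targets(civilian))  # False: fails distinction
```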
Endnotes
Moral reasoning in machines may or may not be able to deduce the relative value or significance of competing human rights. This can lead to inconsistent application of those rights and to arbitrary actions by autonomous systems. For instance, an autonomous system may incorporate cultural preferences or biases into its moral conclusions. One possible solution is to design autonomous systems that remain value-neutral when taking decisions that can affect human lives.
An autonomous system should not differentiate between or favour individuals on the basis of gender, religion, race, or culture. Germany’s Ethics Commission on Automated and Connected Driving codified this principle in its 2017 guidelines, one of which states that
in the event of unavoidable accident situations, any distinction based on personal features such as age, gender, or physical or mental constitution is prohibited.
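One way to read this prohibition in engineering terms is to make protected attributes invisible to the decision function, as in the hypothetical sketch below. The attribute list and scoring function are our own illustrative assumptions; the guideline itself is a legal rule, not code.

```python
# A minimal sketch of "no distinction based on personal features":
# protected attributes are stripped from the inputs before any decision
# is computed. Attribute names and the scoring function are illustrative.

# Personal features the decision is forbidden to depend on.
PROTECTED = {"age", "gender", "physical_constitution", "mental_constitution"}


def strip_protected(person: dict) -> dict:
    """Return only the attributes a decision is allowed to see."""
    return {k: v for k, v in person.items() if k not in PROTECTED}


def collision_priority(person: dict) -> float:
    """Toy decision score; by construction it cannot vary with protected
    attributes, because they are removed before scoring."""
    visible = strip_protected(person)
    # Only situational facts (e.g. distance) remain available.
    return -visible.get("distance_m", 0.0)


young = {"age": 8, "gender": "f", "distance_m": 12.0}
old = {"age": 80, "gender": "m", "distance_m": 12.0}
assert collision_priority(young) == collision_priority(old)  # no distinction
```

Stripping attributes is only the simplest reading of the rule: in practice, correlated proxy features can reintroduce the very distinctions the guideline forbids, so value-neutral design demands more than removing fields.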