Personhood of autonomous systems: What is autonomy?

Raj Pagariya


Over the course of the last couple of years, a majority of my research work has focused on autonomous systems. While there is still plenty of existing literature that I have not explored, I will be summarising my learnings in this series of articles titled “Personhood of autonomous systems.” Before we jump on the bandwagon of talking about artificial intelligence (AI) or autonomous systems, defining them as systems capable of making decisions on their own, or translating technological terms into legal ones and vice versa, we must consider how legal jurisprudence has evolved on the subject of autonomy.

What is autonomy?

Traditionally, the law has attributed personhood and autonomy to human beings by default (Laukyte, 2012). However, with non-human entities coming into the picture, these attributes have also been assigned, in some form or other, to companies, states, municipalities, deities, trusts, NGOs, and the like. For human beings like you and me, legal personhood and individual autonomy are closely intertwined. Once you attain the age of majority, you achieve full autonomy, and at the same time, the law recognises you as a person with certain rights and duties.

Back in the eighteenth century, Kant introduced a concept that was revolutionary for his time: morality as self-governance through the autonomy of the will. In his words,

“Autonomy of the will is the sole principle of all moral laws, and of all duties which conform to them.”

He described morality as a mechanism of self-governance and strongly believed that autonomy lies in an individual’s will (Kant, 2007). This means that for an individual to be recognised as a moral agent, they must be an autonomous, self-governing creature. According to Kant’s theory, autonomy can serve as a compass to specify what is consistent with duty and what is not (i.e., common human reason). Common human reason, also referred to as pure practical reason, belongs to human beings at large. Kant believed that one cannot lose one’s moral capacities, irrespective of how corrupt one becomes (Schneewind, 1998).

How are autonomy and morality related?

Philosophically, autonomy is related to morality, will, and freedom. It is safe to presume that these three attributes exist in human beings. However, this presumption or understanding of autonomy gives rise to an interesting question:

How can artificial agents become autonomous with these three attributes?

As far as morality is concerned, we have seen time and again that artificial agents can take part in situations involving moral reasoning, whether as an entity or as an agent (Indurkhya, 2019). At the same time, a good number of arguments suggest that for artificial systems to be considered moral, they must be equipped with a certain level of cognitive knowledge so that they can assess and analyse the effects and consequences of their prospective actions on human beings and act accordingly. This can possibly be achieved via three approaches (Allen et al., 2000):

  1. Directly programming moral values into the agent;
  2. Implementing associative learning models to make agents moral; and
  3. Simulating and evolving agents using the iterated prisoner’s dilemma (PD) game.

The first approach is bound to be problematic, as moral values are highly subjective in nature and may mean different things to different individuals. Having a team of human beings sit down and decide on a set of values to guide artificial agents may, therefore, not be an efficient solution (Hemment et al., 2019). The second approach is derived from how children learn to distinguish between right and wrong, i.e., between what is morally acceptable and what is not. A child’s learning is influenced by various factors such as parents’ approval, avoiding punishment, and acceptance by other children. Nevertheless, we also need to consider whether artificial agents can have motives for their actions, just as children do (Arsiwalla et al., 2019).

In the third approach, the iterated PD is preferred over the simple, one-shot PD, and it differs in how the game is played. In an iterated PD, players do not know how many times they will play the game, but they remember their previous actions, and these previous actions drive their decision-making and future strategy (Dennis & Slavkovik, 2018). A fair criticism of this approach is that morality is substantially more complex than what a game of PD may account for.
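To make the mechanics of this approach concrete, here is a minimal sketch of an iterated PD of unknown length, in which a memory-based strategy (tit-for-tat) plays against an unconditional defector. The payoff values, the two strategies, and the random termination rule are illustrative assumptions made for this example, not taken from any of the cited papers.

```python
import random

# Assumed, standard-looking PD payoffs: (my_move, their_move) -> my_score
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Defect no matter what has happened before."""
    return "D"

def play_iterated_pd(strategy_a, strategy_b, mean_rounds=50):
    """Play an iterated PD of unknown length: after each round the game may
    end at random, so neither player can plan for a known final round."""
    history_a, history_b = [], []
    score_a = score_b = 0
    while True:
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
        if random.random() < 1 / mean_rounds:  # game ends unpredictably
            break
    return score_a, score_b

if __name__ == "__main__":
    print(play_iterated_pd(tit_for_tat, always_defect))
```

The point of the sketch is only to show how remembered past moves, rather than a fixed horizon, drive each player’s next decision.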

Whether artificial agents can conceptualise morality at all is also a matter of discussion. Humans have this ability, but whether existing computer science technologies can achieve the same level of conceptualisation is debatable. For human beings, personal relationships, upbringing, dignity, self-respect, people skills, and commitments are some of the qualities that shape morality (Coleman, 2001). Going by the existing research, it is reasonable to conclude that implementing these qualities through algorithms remains a theoretical concept. At this point in time, designing, modelling, and developing autonomy in the sense elaborated by Kant appears to be an exceedingly complex task for computer science and engineering.

Autonomy in computer science

Autonomy in the computer science domain is loosely tied to fields such as artificial intelligence and robotics. There is neither a clear distinction between autonomy and non-autonomy, nor is there a recognisable pattern across different use cases (Smithers, 1997). As with Kant’s theory of autonomy, autonomy in computer science remains a subjective matter: what is autonomous for one computer scientist may not be for another (Covrigaru & Lindsay, 1991). As a matter of general practice, the term autonomous is used for devices or systems that have some sort of independent control or intelligence, even though none of those devices or systems can be considered fully autonomous. On this debate, there is general agreement in the computer science community that an artificial agent can be considered autonomous if (Allen et al., 2000):

  • It can learn from its experiences and act accordingly in the future.
  • It continues to do so over a long period of time.
  • It does so without any intervention or direct control by human beings or any other agents.

The first feature entails an agent’s capability to modify and rewrite its programmed functions and to develop new functions as required (Scherer, 2015). Accordingly, with continued modifications and additions, the degree of autonomy keeps increasing (Bekey, 1998). The second feature deals with an agent’s ability to act autonomously in a given environment over a certain period of time. There is no temporal limit on autonomy here, and merely executing all the programmed functions or repeating the same set of actions over and over again is not sufficient to be deemed autonomous (Hopgood, 2001). The third feature identifies an agent’s ability to perform the required actions without human intervention. Hence, it must be able to define its internal states and control its actions (Sycara, 1998; Asaro, 2016).
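As an illustration of these three features working together, the following sketch shows a small agent that updates its own action preferences from experience, keeps doing so over many cycles, and runs without human intervention once started. The two-action environment, the epsilon-greedy rule, and the reward values are all assumptions made for the example; none of the cited works prescribes this particular design.

```python
import random

class LearningAgent:
    """Toy agent illustrating the three features above: it learns from its own
    experience, keeps doing so over many cycles, and runs unattended."""

    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}   # learned value estimates
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def choose(self):
        # Mostly exploit what experience suggests; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Incremental average: each new experience refines the estimate.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def environment(action):
    # Hypothetical environment: action "b" is more rewarding on average.
    return random.gauss(2.0 if action == "b" else 1.0, 0.5)

if __name__ == "__main__":
    agent = LearningAgent(actions=["a", "b"])
    for _ in range(10_000):          # long-running loop with no human in it
        act = agent.choose()
        agent.learn(act, environment(act))
    print(agent.values)              # preferences shaped purely by experience
```

Even this toy agent only exhibits autonomy in the narrow, behavioural sense discussed here; it says nothing about the Kantian sense described earlier.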

Autonomy for artificial agents: Strong and weak

Autonomy for artificial agents can be understood in two senses: strong and weak. An agent has strong autonomy when it chooses not only the means of achieving a goal but also the goal itself. An agent has weak autonomy when it is only capable of choosing among alternative ways of achieving a particular goal, the goal itself being predefined and predetermined by human beings or other agents (Calverley, 2008). An ideal autonomous entity must be able to shape its life and determine its course appropriately. For this to happen, an artificial agent must have strong autonomy, as strong autonomy would be essential for the recognition of legal personhood.
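The distinction can be sketched in code, under the simplifying assumption that goals and plans reduce to small data structures with made-up cost and value functions; real agents are, of course, far richer than this.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Plan:
    description: str
    cost: float

def weakly_autonomous_choice(plans: List[Plan]) -> Plan:
    """Weak autonomy: the goal is fixed by humans or other agents;
    the agent only picks among alternative plans for reaching it."""
    return min(plans, key=lambda p: p.cost)

def strongly_autonomous_choice(
    plans_by_goal: Dict[str, List[Plan]],
    value_of_goal: Callable[[str], float],
) -> Tuple[str, Plan]:
    """Strong autonomy: the agent also selects which goal to pursue, here by
    weighing an (assumed) value of each goal against the cheapest plan for it."""
    best_goal = max(
        plans_by_goal,
        key=lambda g: value_of_goal(g) - min(p.cost for p in plans_by_goal[g]),
    )
    return best_goal, weakly_autonomous_choice(plans_by_goal[best_goal])

if __name__ == "__main__":
    plans = {
        "deliver parcel": [Plan("take the highway", 4.0), Plan("take side roads", 6.0)],
        "recharge battery": [Plan("return to base", 2.0)],
    }
    value = {"deliver parcel": 10.0, "recharge battery": 5.0}
    print(weakly_autonomous_choice(plans["deliver parcel"]))   # means only
    print(strongly_autonomous_choice(plans, value.get))        # goal and means
```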

What’s next?

Once an autonomous system is recognised as a legal person, it becomes subject to all the existing laws and regulations, just like us. Recognition of human-like autonomy for autonomous systems will impose certain restrictions while, at the same time, granting a set of rights and corresponding duties. If legal personhood is ascribed to autonomous systems, they will become part of the classes of legal persons, and we will have a plethora of legal issues to deal with. In the upcoming parts, I will write about whether we can accommodate autonomous systems within our existing legal framework. If not, how can we regulate this new class of persons?



Featured Image Credits: Image by Gerd Altmann from Pixabay


References

Allen, C., Varner, G. & Zinser, J., 2000. Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, Volume 12, pp. 251-261.

Arsiwalla, X., Freire, I., Vouloutsi, V. & Verschure, P., 2019. Latent Morality in Algorithms and Machines. In: U. Martinez-Hernandez, et al. eds. Conference on Biomimetic and Biohybrid Systems. Cham: Springer, pp. 309-315.

Asaro, P., 2016. The Liability Problem for Autonomous Artificial Agents. AAAI Spring Symposium Series, pp. 190-194.

Bekey, G., 1998. On Autonomous Robots. The Knowledge Engineering Review, 13(2), pp. 143-146.

Calverley, D., 2008. Imagining a Non-Biological Machine as a Legal Person. Artificial Intelligence and Society, 22(4), pp. 523-537.

Coleman, K., 2001. Android arete: Toward a virtue ethic for computational agents. Ethics and Information Technology, Volume 3, pp. 247-265.

Covrigaru, A. & Lindsay, R., 1991. Deterministic Autonomous Systems. AI Magazine, 12(3), pp. 110-117.

Dennis, L. & Slavkovik, M., 2018. Machines that know right and cannot do wrong: The theory and practice of machine ethics. IEEE Intelligent Informatics Bulletin, 19(1), pp. 8-11.

Hemment, D. et al., 2019. Toward Fairness, Morality and Transparency in Artificial Intelligence through Experiential AI. Leonardo, 52(5), p. 426.

Indurkhya, B., 2019. Is morality the last frontier for machines?. New Ideas in Psychology, Volume 54, pp. 107-111.

Kant, I., 2007. Critique of Pure Reason. London, England: Penguin Classics.

Laukyte, M., 2012. Artificial and Autonomous – A Person?. Social Computing, Social Cognition, Social Networks and Multiagent Systems, pp. 66-71.

Scherer, M., 2015. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, Volume 29, pp. 353-355.

Schneewind, J., 1998. The Invention of Autonomy: A History of Modern Moral Philosophy. Philosophy, 74(289), pp. 446-448.

Smithers, T., 1997. Autonomy in Robots and Other Agents. Brain and Cognition, Volume 34, pp. 88-106.

Sycara, K., 1998. The Many Faces of Agents. AI Magazine, 19(2), pp. 11-12.