This is the third article in our series on the personhood of autonomous systems. In the first article, we explored the concept of autonomy from legal, philosophical, and technological points of view. The second article examined Kant’s concept of autonomy. Here, we attempt to understand how autonomy is perceived in the computer science domain.
Autonomy v. Automation
You will often see individuals conflating autonomy with automation. The two are, however, distinct mechanisms, even though both operate without continuous human interference. Automation functions without human intervention, but it cannot make decisions on its own: it substitutes software and automated programmes for routine actions, and those programmes still require regular checks by human beings. Autonomy in computer systems, on the other hand, aims to emulate the cognitive behaviour of human beings. The following three factors characterise autonomy in computer systems:
- How frequently the machine must communicate with human beings to perform its actions efficiently.
- The machine’s competence to perform actions despite environmental uncertainty, without relying on human intelligence.
- The degree to which the machine makes the functional decisions needed to achieve a goal.
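The three characteristics above can be sketched as fields of a simple record. The field names, the 0-to-1 scoring, and the thermostat example are illustrative assumptions made for this sketch, not part of any formal definition:

```python
# Sketch: the three characteristics of autonomy as a data record.
# Field names and scoring scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AutonomyProfile:
    human_contact_needed: float    # how often human input is required (1 = constantly)
    independent_competence: float  # ability to act without human intelligence (1 = fully)
    decision_latitude: float       # share of functional decisions made by the machine (1 = all)

    def is_automation_only(self) -> bool:
        """A purely automated system acts alone but decides nothing itself."""
        return self.decision_latitude == 0.0

# A thermostat runs unattended but makes no genuine decisions of its own.
thermostat = AutonomyProfile(human_contact_needed=0.1,
                             independent_competence=0.9,
                             decision_latitude=0.0)
print(thermostat.is_automation_only())  # True
```

The point of the sketch is the distinction drawn above: automation can score high on independence while scoring zero on decision-making, whereas autonomy requires latitude on all three axes.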
Levels of autonomy
Some researchers believe that autonomy in computer science is delegated by degree. On a ten-level scale of autonomy, the difficulty of implementation increases as the level rises.
- At level 0, an autonomous device is merely automated.
- From level two to level four, the division of decision-making competency between machines and humans becomes prominent.
- At level 9, the machine holds the fundamental decision-making powers, while humans retain special reserved powers.
- Finally, at level 10, the machine becomes completely autonomous.
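The graded scale above can be sketched as a simple lookup. Only the levels the text names explicitly are filled in, and the descriptions are paraphrases of the bullets above, not a formal standard:

```python
# Illustrative sketch of the graded autonomy scale described above.
# Only the levels named in the text appear; descriptions are paraphrases.
AUTONOMY_LEVELS = {
    0: "merely automated: no independent decision-making",
    2: "decision-making divided between machine and human",
    4: "decision-making divided between machine and human",
    9: "machine holds fundamental decision-making power; humans retain reserved powers",
    10: "completely autonomous: no human involvement",
}

def describe(level: int) -> str:
    """Return the nearest described level at or below the requested one."""
    known = [l for l in AUTONOMY_LEVELS if l <= level]
    if not known:
        raise ValueError("level must be >= 0")
    return AUTONOMY_LEVELS[max(known)]

print(describe(10))  # "completely autonomous: no human involvement"
```

Representing the scale this way makes the delegation-by-degree idea concrete: moving up the scale only ever shifts decision-making power from the human column to the machine column.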
Autonomy in practice
Autonomous systems are capable of amassing various types of information, whether techniques for achieving their goals or familiarity with a dynamic environment, without human intervention. In the computer science domain, the theory of autonomy is analogous to the human nervous system. Our nervous system manages a wide range of bodily functions with negligible input from us; it can configure, develop, repair, and protect itself on its own. The idea behind autonomous systems is to make them competent to manage themselves in pursuit of high-level goals, rather than depending on system administrators for low-level control. Autonomy is therefore an encouraging advancement for reducing human effort and capital requirements.
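The nervous-system analogy can be sketched as a minimal self-managing control loop: the system inspects its own state and chooses corrective actions without human intervention. The metric names, threshold, and action strings below are illustrative assumptions, not a real API:

```python
# Minimal sketch of a self-managing control loop, in the spirit of the
# nervous-system analogy: the system watches its own state and corrects
# it without human intervention. Metric and action names are illustrative.
def self_manage(metrics: dict, max_load: float = 0.8) -> list:
    """Inspect the system's own metrics and return corrective actions."""
    actions = []
    if metrics.get("load", 0.0) > max_load:
        actions.append("scale_out")        # self-development/optimisation
    if not metrics.get("healthy", True):
        actions.append("restart_service")  # self-restoration
    if metrics.get("intrusion", False):
        actions.append("isolate_node")     # self-protection
    return actions

print(self_manage({"load": 0.95, "healthy": False}))
# ['scale_out', 'restart_service']
```

An administrator sets only the high-level policy (here, the load threshold); the low-level decisions about which corrective action to take are left to the system itself.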
Sooner or later, autonomous systems will become a part of our surroundings. They will adapt to changes in the environment and act spontaneously to accomplish their tasks. While there have been multiple attempts to evaluate and determine the level of autonomy in AI devices, the area remains open to future researchers. One can construe autonomous systems as task-oriented and spontaneous, with no supervision by any external agent. Such systems also have the capability to interact and react on the basis of their previous experiences.
Autonomous systems and legal personality
The legal liability of individuals is not based on strong philosophical grounds such as intent or free will; certain technical restraints bar lawmakers from parting with the traditional concept of what liability consists of. It is nonetheless imperative to distinguish the legal liability of individuals from that of autonomous systems. One practical justification is that the actions of autonomous systems cannot always be traced back to human beings, so the liability of individuals becomes irrelevant in this context. Traditional jurisprudence requires humans behind the veil of legal persons; when it comes to autonomous systems, we need real actors, and no legal fiction here can aid the quest for justice.
At present, the parallels between autonomous systems and legal personality are far from ideal. We need to ask whether these autonomous devices can be regarded as genuine agents in the light of traditional psychological concepts. Here is an illustration that can help us understand the problem.
Consider a co-worker in your office. Their name is X. They come across as a knowledgeable and fun-loving person; they interact pleasantly with people and keep their composure in difficult situations. Later, you come to know that X is nothing but an autonomous system. Its creators have designed it so well that it can handle all types of communication and respond appropriately, and its sophistication is such that it never displays superiority in front of human beings.
After knowing this fact, will you treat X in the same manner as before? The most likely answer is no. For most of us, X will stop being a legal or moral agent. In other words, it will just be an autonomous system working efficiently with the help of well-defined algorithms.
This illustration of X certainly calls for discussion. Humans tend to attribute intentions to inanimate things, yet as children grow older, their instinct to regard machines as living beings fades. This further substantiates that autonomous systems may fulfil the requirements of autonomy as perceived in computer science, but they cannot yet fulfil the requirements of autonomy as promulgated by Kant.