Will we ever give rights to robots?
From “The Pigeon”, a steam-propelled mechanical bird conceived by the Greek mathematician Archytas in the 4th century BC, to “Ameca” by Cornwall-based manufacturer Engineered Arts in 2024, our dream of creating robots has become a reality. Yet we still deliberate over the possibility of a future in which AI supersedes human intelligence and emotion. Is this a realistic possibility? Will we ever grant robots the same rights that we hold ourselves?
Human rights and robots
BJ Copeland’s entry in Britannica defines AI as follows:
Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks commonly associated with the intellectual characteristics of humans, such as the ability to reason. Although there are as yet no AIs that match full human flexibility over wider domains or in tasks requiring much everyday knowledge, some AIs perform specific tasks as well as humans.
Human rights are rights inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status. Given the pace of technological development, some argue that sentient AI systems may not be far off; if that happens, robot rights will no longer be a faraway possibility. However, based on the definition above, one can counter that AI systems are not independent decision-makers and therefore should not have rights. While this is true today, one must consider that AI systems’ capabilities will only improve.
Granting rights to robots
A 2015 research paper by Lantz Miller offers a panoramic view of the vital question of whether automata may match or even supersede human capabilities. Grounding his answer in ontology, he asks whether human beings are under any obligation to grant rights to robots, and he gives two significant points against conferring such rights.
First, he dismisses the idea that moral progress, maturity, or a broader sense of morality could lead to robots gaining human rights. At their very core, robots are elementally different from human beings: they are mere tools for the execution of particular human tasks.
Second, he argues that this fundamental difference between humans and robots creates a considerable gap. On that basis, he opposes granting rights even to highly intelligent robots, even if they were to rebel against humans to demand them.
To support his argument, he defines three predicates that capture what he sees as the ontological ground distinguishing humans from automata (sketched formally after the list). The three predicates are as follows:
- A(x): x has come into existence rather than being constructed; this covers all members of the Homo sapiens species as well as animals.
- C(x, y): some entity y has constructed x.
- P(x, y, z): some entity y has constructed x for purpose z.
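Read together, the predicates are meant to separate beings that merely came into existence from artefacts built for a purpose. The following is a minimal first-order sketch of that contrast; the formalization, including the Automaton predicate, is our own reconstruction rather than Miller’s exact notation:

```latex
% A minimal reconstruction of Miller's contrast. The predicate
% Automaton(x) is our own addition, used here only for illustration.

% Beings that simply came into existence have no constructor:
\[
  \forall x \, \bigl( A(x) \rightarrow \neg \exists y \, C(x, y) \bigr)
\]

% Automata, by contrast, were constructed by some entity y
% to fulfil some purpose z:
\[
  \forall x \, \bigl( \mathit{Automaton}(x) \rightarrow \exists y \, \exists z \, P(x, y, z) \bigr)
\]
```

On this reading, the asymmetry in the second formula, namely the existence of a constructor and a purpose, is what Miller takes to disqualify automata from the existential neutrality enjoyed by beings satisfying A(x).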
The derivation Miller draws from these predicates is that human beings, or any other beings that simply “came into existence”, are purposeless, whereas automata are created precisely to fulfil a purpose. Humans, having no constructed-in purpose, enjoy what he calls existential normative neutrality. However, there is one major weakness in this theory: Miller concludes his discussion of the predicates without establishing a premise. Predicate A puts animals and humans into one category, and while the rights Miller discusses are, in the practical world, a virtue of coming into being, those rights differ for humans and animals.
His reasoning, though internally consistent, gives rise to the notion that robots should be treated as mere objects, disregarding the abilities they possess. With a gradual increase in human-like intelligence, is this the right approach?
If not all, how about some?
The primary purpose of social robots is to interact and communicate with humans on a social level. At the same time, there are home appliances that serve our purposes with the help of pre-fed data. A layperson could quickly point out that both kinds of instrument exist to serve human needs. For instance, PARO is a social robot meant to act as a companion, while a pre-programmed washing machine runs based on the total load and the type of clothing material.
There is a difference here, however: the purpose is not the end goal we seek to achieve. Scheutz points out that “social robots are specifically designed for personal interactions that involve emotions and feelings.” This statement draws the most crucial distinction between social robots and other devices. At present, social robots have yet to achieve the human-like capabilities that would lead us to consider them sentient beings.
Can we define a basis for conferring rights?
Irrespective of where you stand in the debate over granting rights to robots, defining a basis for conferring rights remains relevant. Coeckelbergh, in his paper, offers two potential views on granting rights to robots.
(1) Property Account
If a robot can successfully exhibit one or more essential properties that make it the subject of a life, we can grant it rights. These essential properties include consciousness, intentionality, rationality, personhood, autonomy, and sentience. This theory looks for human-like attributes as the ground for conferring rights on robots. However, human-like attributes also include infantile intelligence, naivety, and the inability to perform tasks owing to cognitive disabilities. Why do we look only at the good side of the picture when discussing human-like attributes for robots? If human-like attributes are to be the deciding factor, we must accept that they also include the ills of man. Are we expecting robots to live up to the degeneracy of mankind as a standard?
(2) Relational Account
Under this view, Coeckelbergh argues that interactions should serve as the basis for conferring rights. He proposes not looking within the robot but judging its eligibility for rights by its social interactions. This approach prioritises how the robot materialises specific attributes, looking at its “appearance” to judge its likelihood of attracting moral obligation. It also speaks to the long-standing quandary between realism and idealism: consciousness is inevitably directed at its object, so the question of rights is asked from what the mind perceives. The answer is thus neither context-independent nor subject-independent.
Conclusion
As we have seen, no single approach is free from faults. Gerdes was apprehensive of a future in which robots, seen as equals, become vessels of relations that go beyond the norms of human understanding. The danger of viewing robots purely through the lens of relational bonds is that we forget the fundamental differences. Several questions remain unanswerable owing to the unpredictable nature of the future. All said and done, is this just a dialogue about a dubious future? Or, as Ultron says, “How is humanity saved if it is not allowed to evolve?”