Robot Rights: A possible future?
In a not-too-distant future, humankind may decide to confer upon robots the rights of an average person. Such a decision would rest on their possession of moral personhood, grounded in rationality, intelligence, reasoning, autonomy, consciousness, awareness, and sentience. It could lead to a future where robots contribute to society as valued equals, no longer mere silicon tools, ushering in a new era of technological advancement and societal harmony.
Alternatively, if humans were to deny these rights to robots that match and may soon supersede human intellect, we would need to provide a robust rationale. We cannot argue forever that robots lack a human body of flesh and bone; this line of reasoning is popularly known as the DNA objection to granting rights to robots. But how does one envisage such a future?
Setting the groundwork
To begin the debate, let us first consider which rights would most likely be granted to automata. The wide array of rights available to humanity provides fertile ground for discourse on conferring rights upon robots. Rather than getting bogged down in that discourse, we can rephrase the question: are robots eligible for moral consideration?
Humans derive their rights from the theory of natural rights, which holds that individuals possess certain rights by virtue of human nature rather than prevailing laws or conventions. The idea of natural rights thus remains independent of any nation’s subjective laws and legislation.
Human rights for robots? A coup d’oeil!
In 2017, Saudi Arabia granted citizenship to Sophia, a social humanoid robot that identifies itself as female. Notably, the robot acquired citizenship against the backdrop of the country’s struggle for women’s rights. According to Saudi law, one can only obtain citizenship by birth, marriage, or naturalisation under several conditions. Sophia, created by the Hong Kong-based company Hanson Robotics, emerged as an exception.
Similarly, Japan introduced a special regulation to give Shibuya Mirai, a chatbot, a residence permit. Again, this permit departs from ordinary Japanese law, which grants citizenship or residence either by birth or by naturalisation. If by birth, a person must be born to Japanese parents or in Japan. One can obtain citizenship by naturalisation only when:
- They have lived in the country for five years.
- They have legal capacity.
- They are at least twenty years old.
- They have a decent standard of living without associating themselves with anti-Japanese organisations.
These instances suggest that robots are already a subject of moral consideration: we bend existing legal frameworks “at our convenience” to accommodate these silicon humans. As technology grows steadily and the robot population potentially expands, our laws could slowly evolve to adapt to such a future and accept the dawn of silicon humans.
Creating a sympathetic future for robot rights
To confer legal rights on social robots, we must perceive them as equals who deserve rights. Without this consideration, we will continue treating robots as a means to an end, ignoring our responsibility to raise their status beyond machines. To picture this, we must frame robots as vessels of empathy; as Kate Darling puts it, we should “treat social robots like we’d treat our pets and not our toasters.” One way to invoke this empathy is to employ robots in education: developing a love for our machine friends at a young age may help sensitise us to accepting robots as we grow.
The earliest stage of human life involves learning by imitation, which later transforms into formal education. A future where robots play a crucial role in children’s primary education could engender a generation that sees them as inspirational learning models rather than devices meant merely to satisfy needs. This underscores the crucial role of education in shaping our future relationship with robots, making us more responsible and empathetic towards their rights.
Take the example of Japan’s Saya, a female humanoid robot deployed as a substitute teacher and developed at the Tokyo University of Science. With the capacity to display expressions on its face, it is a pioneering example of a possible future where robots fulfil this social role. Such a role could create a sentimental bond, leading the young generation to extend moral consideration to robots, and it is this moral consideration that may prompt humans to develop empathy.
However, much research into robots as teachers remains to be done. At present, robots are usually deployed in arenas involving high risk and quick decision-making; teaching involves neither. Nevertheless, the world has already seen several AI agents in the education industry, including Elias and Keeko. Elias is a toy-shaped, smartphone-based robot that assists and acts as a classroom companion, while Keeko is a learning companion that helps Chinese kindergarten students solve logical problems.
Legal shortcomings to robot rights
The hypothesis of a future with robot rights overlooks, or fails to grasp, the potential logical fallacies the legal system will face. JM Balkin, founder and director of Yale’s Information Society Project, elaborated on this aspect in his essay “The Path of Robotics Law”, pointing out what is overlooked when hypothesising such a future. He discusses an accountability gap that eventually leads to a responsibility gap, and asks why we must confer rights upon non-human agents that can both produce works of art and physically injure humans. This rhetorical question implies that robots remain hardware programmed with code.
He then speaks of the substitution effect. Wherever the contemporary world stands on technology, we already see AI replacing aspects of daily life. Balkin warns that this substitution will occur on a larger scale and, worse, that it will be incomplete, contextual, and often opportunistic. The ethical consideration the law awards to robots will inevitably be misused by humans, who will treat them as people or animals only when it serves a purpose or objective.
Moreover, he cautions readers not to approach the creation of such laws expecting linear problems. This technological space is dynamic and ever-evolving, and new issues continually present themselves. He critiques the idea of rights for robots and makes the following suggestions:
- We need extensive empirical research to understand the needs of all stakeholders in the future of robot rights, including the creators of such robots and the companies liable for their creations.
- We must examine the current relationship between robots and humans to scrutinise robots’ overall impact on human life. A bird’s-eye view suggests that AI puts us at ease and makes life easier; a fuller study might instead find that it makes humans lazy and impairs our decision-making ability. Hence, before advocating for robot rights, we must weigh moderate use of robots against immoderate use.
- Autonomous robots can learn from their past mistakes. To avoid worst-case scenarios, we must investigate, evaluate, and assess all possible outcomes of an AI agent and its past learning experiences. One route to this is accountable development, of which the AWS Responsible AI guidelines are an example.
Conclusion
A future where robots have rights is a close yet distant dream. Humans must first bridge the empathy gap to make assigning legal rights to robots possible. We must address our lack of compassion for silicon humans so that future generations learn to cohabit with them in respect and harmony. The legal sphere of this issue, however, entails a nuanced arena of problems that must be tackled by trial and error to create a foolproof future. A future where our silicon friends have rights need not sound bizarre; it will simply require a great deal of imagination and patience.
Robots could be considered moral patients rather than moral agents: entities more vulnerable to being wronged than to being the wrongdoer or inflicting the wrong. We must look at robots through our own moral lens rather than instilling a moral lens of their own in them. But can we ever regard them as moral patients with the same expectation that we are moral agents? Or shall we imagine an alternative science-fiction scenario where the robots regard us as moral patients? Dark, isn’t it?