Recognising Autonomous Systems as Electronic Persons – Part 2
The first part of this article explored the definition of a person, the legal jurisprudence on the issue, and the possibilities for recognising autonomous systems as legal persons. This second part explores the potential challenges in doing so. The proposal to recognise electronic personhood is a much broader debate than a discussion of the subjectivity of terms: are our legal systems ready to adopt a third category of persons? In 2017, the European Parliament adopted a resolution on civil law rules on robotics, which highlights the need to consider the legal and ethical implications of sophisticated robots and other AI-based applications without stifling innovation.
Consciousness
Consciousness has always been a significant obstacle to recognising an autonomous system as a legal person. A human being acts through its own consciousness, whereas an autonomous system performs its actions based on the algorithms written by its programmers; it has no consciousness of its own. We have yet to reach an age of superintelligence in which programmers can build systems capable of developing consciousness.
Fundamentally, consciousness is not easy to describe. Human consciousness can be explained through experiential learning, but artificial consciousness can only be defined from a third-person perspective. Consciousness is also linked with morality: people's moral decisions depend on their consciousness, whereas autonomous systems can rely only on their algorithms. The subjectivity of both consciousness and morality makes artificial consciousness even harder to define. It is generally assumed that consciousness corresponds to the way our minds are aware of the moment-to-moment decisions we make in daily life.
Anthropomorphism
With the growth of the silicon world and the introduction of humanoid robots, humans have begun to anthropomorphise robots and AI. In 2017, for example, Saudi Arabia granted citizenship to Sophia, a humanoid robot developed by Hanson Robotics. Despite her algorithmic nature, Sophia's interaction with people provoked discussion about her legal status. This is not limited to humanoid robots; even non-humanoid robots such as Roomba, a robotic vacuum cleaner, evoke emotional responses from people.
At the 2019 Venice Art Biennale exhibition 'May You Live in Interesting Times', curated by Ralph Rugoff, Sun Yuan and Peng Yu's installation 'Can't Help Myself' featured an industrial robot programmed to keep a spilled liquid within a predetermined area. Enclosed in a transparent structure, the robot looked almost like a creature captured and put on display, and the internet duly felt sorry for it. Anthropomorphism thus fosters empathy in human beings; at the same time, it blurs the line between algorithmic tools and sentient beings.
The central risk of expanding legal subjectivity to electronic persons lies in balancing rights, responsibilities, and liabilities for non-human entities. Would an electronic person exercise the same rights, and take on the same duties and liabilities, as a human? Who would pay the damages if an autonomous car caused an accident, and how would the vehicle itself be held liable? Some researchers have drawn parallels with corporations.
Reliance on Corporate Personality
In his Treatise on Civil Law (2010), Galgano identified several disadvantages of conferring legal personhood on companies, disadvantages that have not been adequately measured to date. The model of corporate personality has also contributed to a flawed understanding of the limitation of shareholders' liability by concealing the unequal transfer of entrepreneurial risk to third parties. Some creditors can protect their interests by negotiating that risk with the company, as a financial institution does; others cannot, as is the case for victims of environmental damage, such as those affected by mining. The prevalence of this abstract model of subjectivity has produced a unitary reading of patrimonial autonomy, and consequently of limited liability, that is indifferent to the differences between creditors. Creating electronic personhood may simply repeat these problems.
As mentioned in the first part, a robot is understood as a constructed system that displays both physical and mental agency, whereas many autonomous systems exist only in the digital realm. That raises another question: how do we define rights and obligations for non-physical entities, and should embodiment matter for legal recognition? Autonomous systems inherit their ethical considerations from their algorithms, which are usually designed by a programmer, so those considerations depend on the programmer's fairness and biases. When an autonomous system makes a decision that negatively affects someone's life, how do we apportion liability between the programmer and the system?
Recognising autonomous systems as electronic persons involves intricate legal, ethical, and societal considerations. How should a human expect to be treated by a robot? Humans are already forming emotional bonds with robots. That makes me think of a future where our society moves from the concepts of monogamy and polygamy to electro-gamy.
Conclusion
We are the audience in the grand theatre of technological progress, and the spotlight falls on a peculiar cast: autonomous systems. These digital performers now stand at centre stage, awaiting their cue for recognition as electronic persons. We lean forward, intrigued by the unfolding drama, a tale of rights, responsibilities, and the blurred boundary between silicon and soul. The act opens with the anthropomorphic metaphor weaving its spell, as robots evoke empathy and curiosity.
The audience marvels at Sophia, the citizen-robot, and at Roomba, the vacuum cleaner with a personality. But behind the scenes, legal scholars scribble notes, pondering the implications of granting personhood to these digital souls. The act then lays bare its risks and problems: how do we assign responsibility when algorithms dance independently? Ethical considerations enter the scene, and the audience is left to wonder how legal scholars will strike this delicate balance. Electronic persons are, after all, an assembly of code, circuits, and silicon. Legal scholars must craft a finale that harmonises innovation with humanity.