Takeaways from the EU’s AI Act and the ASEAN Guide

The dystopian future depicted in sci-fi movies may be closer than expected. The advent of artificial intelligence (AI) has altered everyday life. AI tools can potentially influence democratic elections through deep fakes and micro-targeted propaganda. They can also exacerbate tech warfare through AI militarisation and automated weapons. Despite these negative impacts, AI has found positive applications across various sectors. India needs efficient AI governance to reap the technology’s maximum benefits while keeping the tech players in check.

However, regulating AI is a monumental task in itself. This article focuses on the EU’s Artificial Intelligence Act and the ASEAN AI Guide to understand how countries like India can formulate their AI regulatory framework to facilitate robust AI governance. The EU’s Artificial Intelligence Act is legally binding, while the ASEAN Guide only provides a voluntary framework for organisations.

The EU’s Artificial Intelligence Act

The European Union has been at the forefront of regulating technology for quite some time now. Adopted in March 2024, the EU’s Artificial Intelligence Act (or AI Act) is the first comprehensive law of its kind to regulate AI. The Act lays the groundwork for a harmonious and peaceful coexistence of humans with machines.

1. Regulatory Approach

The Act classifies AI systems into four risk levels:

  • Unacceptable Risk: These systems are incompatible with EU values and fundamental rights and are prohibited. Examples include social scoring, subliminal manipulation, and predictive policing.
  • High Risk: These are the most heavily regulated AI systems permitted in the EU market, as they have the potential to negatively affect people’s health and safety. Examples include AI used in hiring and in critical infrastructure.
  • Limited Risk: These systems carry a risk of manipulation or deceit and are therefore subject to transparency obligations: the tool must inform users that they are interacting with AI. Examples include chatbots.
  • Minimal Risk: Systems not covered by the above categories, such as spam filters, fall here. They carry no mandatory obligations, but providers are encouraged to follow general principles of human oversight, non-discrimination, and fairness.

Beyond this classification, the Act takes a separate regulatory approach to general-purpose AI (GPAI) models. It requires all GPAI providers to prepare technical documentation and usage instructions, comply with the Copyright Directive, and publish a summary of the content used for training.

Furthermore, Article 9 requires providers to establish a risk management system for high-risk AI systems. As Paragraph 2 of Article 9 illustrates, this system should be continuous and iterative throughout the system’s lifecycle. Under Article 10, a high-risk AI system must follow data governance and management practices covering relevant design choices, the data collection process, the origin of the data, data processing operations, assumptions, and the examination of possible biases, among other requirements. Article 50, in Chapter IV, lists transparency obligations for providers and deployers of certain AI systems. Among other things, it requires providers to ensure that people are informed when they are interacting with an AI system and that AI-generated content is marked as such.

2. Key Takeaways

The most substantial contribution of the EU Act lies in establishing clear lines of accountability. At least in the European Union, once the Act comes into effect, the days of nebulous responsibility when issues arise with AI systems will be over. The Act assigns clear liability to both developers and deployers of AI systems. Robust record-keeping requirements will enable authorities to trace the development and deployment process and facilitate investigation in cases of malfunction and misuse.

However, the EU Act is not without its complexities. Striking the right balance between fostering innovation and protecting fundamental rights is delicate. Yet, the Act’s potential benefits are undeniable. As the EU strengthens its position as a global leader in tech regulation, the ripple effects of this development are far-reaching, potentially serving as a blueprint for AI-related legal developments around the world.

ASEAN Guide on AI Governance and Ethics

The Association of Southeast Asian Nations (ASEAN) has published a voluntary framework on AI governance and ethics. Unlike the EU’s legally binding AI Act, the ASEAN Guide seeks to standardise how its member countries regulate AI systems. Implementing the recommendations in this Guide does not replace or supersede any existing or future AI laws.

1. Guiding Principles

There are seven guiding principles for this Guide:

  1. Transparency and Explainability
  2. Fairness and Equity
  3. Security and Safety
  4. Human-centricity
  5. Privacy and Data Governance
  6. Accountability and Integrity
  7. Robustness and Reliability

2. Framework

The framework is structured across four key areas:

  1. Internal governance structures and measures
  2. Determining the level of human involvement in AI-augmented decision-making
  3. Operations management
  4. Stakeholder interaction and communication

The first key area advocates setting up or adapting an internal governance structure that incorporates the values, risks, and responsibilities involved in algorithmic decision-making. The second key area revolves around a methodology that helps organisations define their risk appetite; essentially, it helps identify acceptable risks and the appropriate level of human involvement. There are three possibilities for human involvement: human-in-the-loop, human-over-the-loop, and human-out-of-the-loop. The third key area deals with issues to consider during the development, selection, and maintenance of AI models. Lastly, the fourth key area provides strategies for communicating with an organisation’s stakeholders and managing those relationships.

Conclusion

Merely replicating the EU’s AI Act or the ASEAN Guide will not solve any substantial problems. The regulatory landscape in India will come with its own hurdles. The government needs to decide how much state intervention it wants while ensuring that its role does not hinder innovation and technological growth. Apart from enacting dedicated AI legislation like the EU, India can consider adopting a standard framework for developing AI tools, platforms, and systems. Taking inspiration from the ASEAN Guide, regulatory authorities can require the implementation of AI guidelines right from the design phase. Following the EU’s AI Act, India could also set up a central regulatory body dedicated to AI governance. This new body should be functionally independent and contribute to developing standards and testing procedures.


Vishwaroop Chatterjee and Madhu Murari, undergraduate students at the Rajiv Gandhi National University of Law, Patiala, contributed this article to our blog.