UK’s Age Appropriate Design Code: A code of practice for online services
The United Kingdom’s Parliament passed the Data Protection Act 2018 (“the act” or “UK DPA”), Britain’s implementation of the GDPR, and it came into effect on the same day the EU GDPR began to apply. The Information Commissioner’s Office (“ICO”) published the Age Appropriate Design Code on August 12, 2020, and the code comes into force on September 2, 2020, with a twelve-month transition period.
Section 123(1) of the UK DPA requires the Information Commissioner to prepare a code of practice on age appropriate design for services that are likely to be accessed by children. The sub-section reads as follows:
“(1) The Commissioner must prepare a code of practice which contains such guidance as the Commissioner considers appropriate on standards of age-appropriate design of relevant information society services which are likely to be accessed by children.”
The same provision defines the phrase “age appropriate design” as “the design of services so that they are appropriate for use by, and meet the development needs of, children.” Prior to the publication of this code by the ICO, the Secretary of State laid the code before Parliament under Section 125(1)(b) of the act on June 11, 2020.
Need for this code
According to statistics available with the ICO, children account for one-fifth of internet users in the United Kingdom. In a survey of the data protection concerns of the UK public, the ICO found that children’s data protection was the second-most highlighted concern, a result that resonates with the findings of research by the London School of Economics. Further, the code is in consonance with the UN Convention on the Rights of the Child, which advocates special safeguards for children in all spheres of their lives. While similar reforms are being considered in the USA and other European countries, this code is expected to pave the way for other countries to take a step forward for children’s welfare and their rights on the internet.
About the code
The code aims to protect the interests of children within the digital world, rather than protecting them from it. It prescribes fifteen flexible standards of age appropriate design. One of its intentions is to ensure that the principle of privacy by design is implemented with a particular focus on the country’s younger generation. For example, if a child changes their settings, the service provider must inform them of the consequences, explain how their data will be used afterwards, and tell them how they can prevent or allow that use. While organizations are given a transition period of twelve months, the code is in line with the requirements of the General Data Protection Regulation (GDPR) and the Privacy and Electronic Communications Regulations (PECR).
The code divides children’s ages into the following ranges (a brief illustrative sketch follows the list):
- 0-5: Pre-literacy and early literacy
- 6-9: Core primary school years
- 10-12: Transition years
- 13-15: Early teens
- 16-17: Approaching adulthood
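To make this concrete, here is a minimal sketch of how a service might map an age onto these bands and branch its defaults accordingly. The band labels mirror the code; the function itself and the idea of using it to pick settings are my own illustrative assumptions, not something the code prescribes.

```typescript
// Illustrative sketch: map an age to the code's developmental bands.
// Band labels follow the code; the function and its use are hypothetical.
type AgeBand =
  | "pre-literacy and early literacy" // 0-5
  | "core primary school years"       // 6-9
  | "transition years"                // 10-12
  | "early teens"                     // 13-15
  | "approaching adulthood"           // 16-17
  | "adult";                          // 18+

function ageBand(age: number): AgeBand {
  if (age <= 5) return "pre-literacy and early literacy";
  if (age <= 9) return "core primary school years";
  if (age <= 12) return "transition years";
  if (age <= 15) return "early teens";
  if (age <= 17) return "approaching adulthood";
  return "adult";
}

// Example: a ten-year-old falls in the "transition years" band, which a
// service might use to switch on high-privacy defaults for that profile.
console.log(ageBand(10)); // "transition years"
```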
Organizations that do not cater to a particular age range need not consider that range while designing their services; however, such organizations need to specify in their privacy policies that their services are not designed for those age ranges. In line with the UNCRC’s objective, the code also includes considerations for children with disabilities. It adopts a risk-based approach to the processing of children’s personal data and lists the following factors that must be considered (a brief illustrative sketch follows the list):
- Types of data collected;
- Volume of data;
- Intrusiveness of profiling;
- Whether decision-making or similar actions follow profiling; and
- Whether data is shared with third parties.
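To show how these factors might feed into an organization’s own assessment, here is a minimal sketch. The factor names mirror the list above; the interface, scoring, and thresholds are purely illustrative assumptions and carry no weight under the code.

```typescript
// Hypothetical record of the risk factors listed above. The weighting and
// thresholds are illustrative assumptions, not taken from the code.
interface ChildDataRiskFactors {
  dataTypes: string[];            // types of data collected
  recordsPerUserPerYear: number;  // rough proxy for volume of data
  profilingIsIntrusive: boolean;
  automatedDecisionsFollow: boolean;
  sharedWithThirdParties: boolean;
}

function riskLevel(f: ChildDataRiskFactors): "low" | "medium" | "high" {
  let score = 0;
  if (f.dataTypes.length > 3) score += 1;
  if (f.recordsPerUserPerYear > 1000) score += 1;
  if (f.profilingIsIntrusive) score += 2;
  if (f.automatedDecisionsFollow) score += 2;
  if (f.sharedWithThirdParties) score += 2;
  return score >= 4 ? "high" : score >= 2 ? "medium" : "low";
}
```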
Determining a user’s age
Though the code does not endorse any particular method of determining a user’s age, it does provide a list of possible approaches; the list is neither exhaustive nor binding. I have discussed the listed methods below.
1. Self-declaration
As the name suggests, the service asks a user to enter their age during the registration process. This method is of limited use on its own, as a user can easily lie about their age; hence, it is best suited to services that carry out only low-risk processing.
2. Artificial Intelligence
A service may use artificial intelligence (AI) to estimate a user’s age based on their interactions with the service. If an organization implements an AI-based system for this purpose, it must ensure the following (a brief illustrative sketch appears after the list):
- Only the minimum amount of personal data is collected.
- Users are aware that an AI-based system will check their age.
- The personal data collected for age determination is not used for any other purpose.
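The sketch below is purely illustrative and is not a real model or an approach named in the code. It assumes a hypothetical heuristic that works on a few coarse interaction signals, retains only an estimated age band rather than the raw behavioural data, and records that the user was told an automated estimate is in use.

```typescript
// Illustrative only: a hypothetical, heuristic age-band estimator.
// A real system would need a properly validated model; the point here is
// data minimisation (coarse signals in, only an age band out) and
// transparency (the user is told an automated estimate is being used).
interface InteractionSignals {
  averageSessionMinutes: number;
  typingSpeedWpm: number;
  vocabularyLevel: "basic" | "intermediate" | "advanced";
}

interface AgeEstimate {
  band: "under-13" | "13-17" | "18+";
  userInformedOfAutomatedCheck: boolean;
}

function estimateAgeBand(s: InteractionSignals): AgeEstimate {
  let band: AgeEstimate["band"] = "18+";
  if (s.vocabularyLevel === "basic" && s.typingSpeedWpm < 25) {
    band = "under-13";
  } else if (s.vocabularyLevel !== "advanced" && s.averageSessionMinutes > 90) {
    band = "13-17";
  }
  // The raw signals are discarded after estimation; only the band is kept.
  return { band, userInformedOfAutomatedCheck: true };
}
```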
3. Third-party age verification services
This method involves sharing a user attribute with a third party to determine the user’s age. The third-party age verifier responds only yes or no to the service provider’s question, which reduces the amount of personal data collected by the organization. As with the previous method, an online service must inform its users that a third-party age verifier is involved.
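A minimal sketch of this flow, assuming a hypothetical verifier endpoint (https://verifier.example/check) that receives a single opaque attribute and answers only with a boolean; the endpoint, payload shape, and function name are my assumptions, not anything named in the code.

```typescript
// Hypothetical flow: the service sends one user attribute to a third-party
// verifier and receives only a yes/no answer, never the underlying evidence.
// The endpoint and payload shape are illustrative assumptions.
async function isOverAge(
  attributeToken: string, // opaque token representing the shared attribute
  minimumAge: number
): Promise<boolean> {
  const response = await fetch("https://verifier.example/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ attributeToken, minimumAge }),
  });
  const result: { overAge: boolean } = await response.json();
  return result.overAge; // the service learns only "yes" or "no"
}
```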
4. Account holder confirmation
This method proposes joint accounts for parents and children. By utilizing this method, the service provider stores a parent’s information and does not need to store the child’s personal data.
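A minimal data-model sketch under that assumption: the account holder’s details are stored, while the child is represented only by an age band the parent has confirmed. The interfaces and function are illustrative, not a design the code mandates.

```typescript
// Illustrative data model: a verified account holder (e.g. a parent)
// confirms the child's age band, so the service holds no personal data
// that directly identifies the child.
interface AccountHolder {
  id: string;
  name: string;
  email: string;
}

interface ChildProfile {
  parentAccountId: string; // link to the confirming account holder
  confirmedAgeBand: "0-5" | "6-9" | "10-12" | "13-15" | "16-17";
  // deliberately no name, contact details, or other identifiers
}

function confirmChildAge(
  parent: AccountHolder,
  band: ChildProfile["confirmedAgeBand"]
): ChildProfile {
  return { parentAccountId: parent.id, confirmedAgeBand: band };
}
```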
5. Technical measures
This method proposes implementing technical measures that prevent a user from entering a false age (a brief illustrative sketch follows the list). Such technical measures can include:
- Closing or deactivating under-age accounts
- Neutral presentation of age declaration screens
- Preventing a user from immediately resubmitting a new age if they are denied access because of their age
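As a purely illustrative sketch of the last point, a service might keep a short-lived lock per session so that a user who is refused access cannot immediately resubmit a different age. The cooldown length, the in-memory storage, and the notion of a session ID are all assumptions made for the sketch.

```typescript
// Illustrative only: after a denied age declaration, block further attempts
// from the same session for a cooldown period, so a new age cannot be
// resubmitted immediately. Storage and duration are assumptions.
const COOLDOWN_MS = 24 * 60 * 60 * 1000; // e.g. 24 hours
const deniedAt = new Map<string, number>(); // sessionId -> time of denial

function submitAge(
  sessionId: string,
  declaredAge: number,
  minimumAge: number
): "granted" | "denied" | "locked" {
  const lastDenied = deniedAt.get(sessionId);
  if (lastDenied !== undefined && Date.now() - lastDenied < COOLDOWN_MS) {
    return "locked"; // no immediate resubmission with a different age
  }
  if (declaredAge < minimumAge) {
    deniedAt.set(sessionId, Date.now());
    return "denied";
  }
  return "granted";
}
```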
6. Hard identifiers
Hard identifiers include passports, government-issued identity cards, and similar formal identification. However, it should be entirely up to the user whether to provide such identifiers, mainly because children do not have easy access to these formal documents and because involving hard identifiers in age determination can also infringe on the privacy of adults associated with the child. If a service asks for a hard identifier, it should not be the only option available.
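To illustrate that last point only: a service might offer a hard identifier as one of several routes rather than the sole option. The method names below simply mirror the approaches discussed in this article; the function and its choices are hypothetical.

```typescript
// Illustrative only: a hard identifier is offered as one optional route
// among several, never the sole option. The selection logic is assumed.
type AgeCheckMethod =
  | "self-declaration"
  | "ai-estimation"
  | "third-party-verifier"
  | "account-holder-confirmation"
  | "hard-identifier";

function availableMethods(highRiskProcessing: boolean): AgeCheckMethod[] {
  return highRiskProcessing
    ? ["third-party-verifier", "account-holder-confirmation", "hard-identifier"]
    : ["self-declaration", "ai-estimation", "account-holder-confirmation"];
}
```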
Conclusion
The Age Appropriate Design Code establishes a flexible set of requirements for improving children’s online safety. If an organization cannot establish the age of its users with certainty, it should design its service on the basis that children are among its users. The code also expects service designs to be transparent about the rights each age group has. Hence, I believe this code will prove impactful for the online safety of children and, at the same time, will instil in them a sense of responsibility concerning their data. The explanatory memorandum to the Age Appropriate Design Code 2020 provides detailed insight into the events leading to the implementation of this code.
(In the next article, I will be discussing the standards prescribed in this code.)
Featured Image Credits: Computer photo created by standret – www.freepik.com