Automated Recommendation Systems and Decisional Privacy
Are you planning to binge-watch a newly released TV series or rewatch K3G for the 50th time? If you have not yet chosen what to watch, Netflix’s Play Something feature might come to your rescue. This newly added feature lets Netflix play a title based on a user’s interests and prior watching behaviour. For better or worse, Play Something is not the only feature of its kind. Remember Spotify’s Discover Weekly? This feature provides users with a curated playlist based on their listening history from the past week. With bright graphics and a well-personalised selection, it delivers 30 new songs every Monday. Using algorithmic recommendation systems, internet companies have found a way to answer your questions about what is trending, where to eat, and what to watch and listen to.
How does an algorithm recommendation system (ARS) work?
Most internet platforms today use a recommendation system to provide users with personalised suggestions on content, purchases, groups, and connections, amongst others. Platforms rely on algorithm-based recommendation systems to help users navigate the sheer number of options available and to keep them engaged. These systems require personal information from users to predict their preferences and curate a personalised feed for each user. For example, Netflix’s recommendation system works on viewing history, ratings, genres, categories, and the preferences of other users with similar interests. Additionally, a recommendation system can also consider the time of day a user watches Netflix, the device being used, and the duration of usage.
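The "preferences of other users with similar interests" idea is the core of collaborative filtering. The sketch below is a deliberately minimal illustration of that principle, with made-up users, titles, and ratings; Netflix’s actual system is far more complex and proprietary.

```python
# Minimal user-based collaborative filtering sketch. All names and
# ratings here are hypothetical, for illustration only.
from math import sqrt

ratings = {
    "alice": {"drama_a": 5, "drama_b": 4, "comedy_a": 1},
    "bob":   {"drama_a": 4, "drama_b": 5, "thriller_a": 4},
    "carol": {"comedy_a": 5, "comedy_b": 4},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=2):
    """Score unseen titles by similar users' ratings, weighted by similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their_ratings)
        for title, r in their_ratings.items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))
```

Because Alice’s ratings overlap heavily with Bob’s dramas, Bob’s thriller outranks Carol’s comedy in her feed. This also makes the privacy point concrete: the quality of the prediction depends directly on how much rating and viewing data the platform holds about each user.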
How do they violate your privacy?
As more and more platforms deploy recommendation systems, there is a growing concern about the enormous power such systems exert over users. Karen Yeung notes that by configuring users’ informational choices through algorithmic analysis, these systems nudge users in the directions preferred by the choice architect through subtle, unobtrusive, and yet extraordinarily powerful processes.
On the other hand, internet platforms contend that these recommendation systems enhance user experience and provide convenience. However, there are numerous ethical and privacy risks associated with such systems. The companies collect a great deal of personal information about their users, from their political leanings to their most recent purchase, and use that information to keep users engaged. Eventually, this translates into monetary benefits for the companies, as they can sell that engaged attention to advertisers. In addition, the deployment of such automated recommendation systems often creates a filter bubble.
Internet activist Eli Pariser coined the term filter bubble in 2011 in his book of the same title. He describes a filter bubble as the unique universe of information that algorithms construct for each of us based on our past online habits. He further argues that these algorithmic universes fundamentally alter the way we encounter ideas and information.
Creating such filter bubbles hypernudges users in a direction the system selects, limiting their choices and their capacity to make informed decisions. As a result, filter bubbles end up violating users’ decisional privacy. Decisional privacy allows individuals to make free and informed choices about themselves without unwarranted interference. By interfering in this way, platforms gain access to user behaviour and actions, and acquire the power to influence, change, or nudge users’ decisions without their prior permission. At the same time, filter bubbles prevent individuals from accessing information that contradicts their beliefs. Recommendation systems serve users a homogeneous experience, limiting their chances of encountering different viewpoints. Consequently, these systems end up reinforcing already existing beliefs.
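The self-reinforcing dynamic described above can be shown with a toy simulation. This is an illustrative assumption about how a naive engagement-maximising loop behaves, not any real platform’s algorithm: a greedy recommender always serves the genre the user currently prefers, and each view boosts that genre’s weight further.

```python
# Toy filter-bubble feedback loop (hypothetical weights and genres).
# A tiny initial preference for one genre snowballs into a feed
# made up entirely of that genre.
def simulate(rounds=20):
    # User starts with a slight lean towards one news genre.
    weights = {"news_left": 1.05, "news_right": 1.0, "science": 1.0}
    history = []
    for _ in range(rounds):
        # Greedy recommender: always pick the highest-weighted genre.
        pick = max(weights, key=weights.get)
        history.append(pick)
        weights[pick] *= 1.1  # engagement further boosts that genre
    return history

feed = simulate()
print(feed.count("news_left"), "of", len(feed), "items from one genre")
```

With these starting weights the loop converges immediately: every one of the 20 recommendations comes from the single genre the user already slightly preferred, and the other viewpoints never surface. Real systems add exploration and randomness precisely to soften this effect, but the underlying pull is the same.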
In the Puttaswamy case, the Indian Supreme Court noted that the right to privacy includes decisional autonomy. The court further observed that decisional privacy allows individuals to make decisions about their own bodies without unjustifiable interference. As platforms’ reliance on ARS increases, it becomes important to check the power these systems exert to influence and shape users’ opinions, a power that undermines users’ ability to make informed choices. Internet platforms must offer transparency and accountability about how their recommendation systems are built, how they function, and how they offer personalised recommendations. They should outline the situations in which they use systems like ARS in a way that a layperson can comprehend.
As we move towards the age of Artificial Intelligence, protecting decisional privacy will be a shared responsibility of various stakeholders. If a platform offers a recommendation or suggestion to a user, it must disclose the reason behind it. Most importantly, platforms should provide users with an easy way to learn how their data is collected and processed by the recommendation systems. Moreover, policymakers should incorporate the necessary legal controls to ensure transparency and fairness in automated processes.
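One way to make the "disclose the reason" requirement concrete is to ship every recommendation with a machine-readable explanation. The schema below is entirely hypothetical, a sketch of what such a payload could look like rather than any platform’s actual format:

```python
# Sketch of an "explainable recommendation" payload (hypothetical schema).
# Every suggestion carries a human-readable reason plus the categories
# of personal data that were processed to produce it.
import json

def explain(title, signals):
    """Bundle a recommended title with the signals that produced it."""
    return {
        "recommendation": title,
        "reason": "Suggested because you watched "
                  + ", ".join(signals["watched"]),
        # Listing data categories lets users see what was processed.
        "data_used": sorted(signals),
    }

payload = explain("Thriller A",
                  {"watched": ["Drama A", "Drama B"], "device": ["mobile"]})
print(json.dumps(payload, indent=2))
```

A payload like this would support both goals the paragraph names: the `reason` field tells the user why a title appeared, and the `data_used` field makes visible which categories of their data the system consumed.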