Terms and Conditions may apply: The Contours of Unregulated AI

In the first part of this article, I discussed Trump’s “One Big Beautiful Bill” (OBBBA) and its implications for AI governance. This part explores what an unregulated AI landscape would look like and why it matters now. In 2020, students in the UK chanted the slogan “F*ck the algorithm” in protest against the decision to use an algorithm to assign their exam grades after exams were cancelled, a decision that ended in a spectacular failure of automated judgment. Technology has permeated nearly every aspect of human life, with AI playing a central role in many professional and social domains.
Artificial intelligence derives its “intelligence” from training on vast amounts of data, and a growing body of research shows that much of that data is entrenched in heteropatriarchal structures. Since machine learning algorithms learn patterns and make decisions from training data, any bias in the data inevitably shapes system outputs. When biased data informs algorithmic processes, it perpetuates existing social inequalities such as gender discrimination.
The question, “Is it because I am a woman?”, has been asked by women around the world for centuries in the face of gender discrimination. It now resonates with data scientists and scholars who study the consequences of acute, unregulated reliance on artificial intelligence for critical decisions. That dependence has grown dramatically, yet it creates only a façade of neutral algorithmic discretion, with few consequences when systems fail.
A feminist critique of AI’s gender bias challenges male chauvinism while underscoring the urgent need for marginalised communities to be represented in male-dominated machine learning teams. AI systems mirror biases deep-seated in society, around gender, race, sexuality, and age, that are embedded in their training data. The Harvard Business Review highlighted this through word associations such as “father is to doctor as mother is to nurse,” demonstrating how algorithms perpetuate stereotypical gender roles.
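The kind of association described above can be probed directly with off-the-shelf word embeddings. The sketch below is a minimal illustration, not the methodology behind the Harvard Business Review piece; it assumes the gensim library and a publicly available pretrained GloVe model, and simply asks the embedding space to complete gendered analogies.

```python
# Probe gender associations in pretrained word embeddings.
# Assumes gensim is installed and the GloVe vectors can be downloaded.
import gensim.downloader as api

# Small pretrained embedding model trained on Wikipedia + Gigaword text.
vectors = api.load("glove-wiki-gigaword-100")

# Classic analogy probe: "man is to <word> as woman is to ?"
# most_similar() returns the words whose vectors best complete the analogy.
for word in ["doctor", "programmer", "boss"]:
    completions = vectors.most_similar(
        positive=["woman", word], negative=["man"], topn=3
    )
    print(f"man : {word} :: woman : {completions}")
```

Whatever completions a given model returns, the point stands: any stereotyped association it surfaces was learned from the text it was trained on, not programmed in deliberately.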
Some Evidence
I put the same kind of prompt to ChatGPT. The screenshots below show its responses.

Images generated by ChatGPT in response to my queries
History of Feminist AI Discourse
The internet has evolved into a democratic space, promising intersectional integration beyond traditional hierarchical norms. Academic discourse has progressed from abstract theorisation to real-world intervention at the nexus of technological innovation and gender analysis. This intersectional techno-feminist lens makes disrupting conventional AI paradigms imperative, particularly given increasing reliance on unregulated technologies. Feminist Artificial Intelligence has evolved from academic critique into a transformative initiative that embeds equality into the foundational design and deployment of AI. The central objective is fostering inclusive, diverse, and structurally equitable artificial intelligence systems that challenge existing power dynamics embedded in algorithmic decision-making.
This endeavour is not novel; its conceptual underpinnings trace back to feminist science and technology studies of the 1980s. The British computer scientist and historian Alison Adam has sharply critiqued AI’s foundational processes, identifying them as mirrors of society’s conservative and exclusionary structures and arguing that societal exclusions are now reproduced as digital exclusions, with technological systems perpetuating existing power structures.
Donna Haraway, a key voice in feminist science studies, has likewise challenged traditional ideas about how knowledge is produced and stressed the need to make technoscience an inclusive domain. Knowledge and learning are not neutral prerogatives; whoever controls their dissemination has the power to perpetuate their own views. Bodies such as UNESCO have highlighted the need to make technology inclusive, warning that it otherwise risks being as discriminatory as ever. The European Union has stated plainly that technology reflects the values of its developers, and that more diverse development teams could help identify biases and prevent them.
The Idea of “Intelligence”
What is intelligence? Whose intelligence are we relying upon to make machines intelligent? The rapid expansion of digital technologies has profoundly affected nearly every aspect of life, including governance, communication, and production systems, in what is often called the “fourth industrial revolution”. This wave has ushered in new technologies now integral to public and private sectors alike, including healthcare, education, commerce, and finance.
The question of what constitutes ‘intelligence’ remains unsettled. If today’s bias-laden AI is deemed ‘intelligent’, then critical interrogation of its training data becomes imperative. That we continue to label such flawed systems ‘intelligent’ only underscores how patriarchal norms and historical power structures shape the very concept of intelligence. These same structures exploit social inequalities and construct ‘intelligence’ to reflect their own values, perspectives, and positions of power.
In 2018, Amazon scrapped an experimental algorithmic recruitment tool after discovering that its ‘sexist algorithm’ was filtering out candidates by gender: because the training data consisted primarily of résumés from men, the system learned to penalise applications that signalled a candidate was a woman. It remains a significant example of how AI bias can harm people, as noted in the 2023 report on AI bias by the National Institute of Standards and Technology.
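The mechanism behind that failure is easy to reproduce in miniature. The sketch below is a hypothetical illustration using scikit-learn and invented features, not Amazon’s actual system: a screening model is trained on synthetic historical hiring data in which candidates whose résumés contain a “women’s” keyword were systematically disadvantaged, and even though gender is never given to the model as an input, it learns to penalise the proxy.

```python
# Toy illustration of a screening model learning a gender proxy from
# historically skewed hiring data. Feature names and numbers are invented;
# this is NOT Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

experience = rng.normal(5, 2, n)           # years of experience
womens_keyword = rng.binomial(1, 0.5, n)   # resume mentions e.g. "women's chess club"

# Historical hiring decisions: driven by experience, but past practice
# also systematically disfavoured candidates with the proxy keyword.
hired = (experience - 2.0 * womens_keyword + rng.normal(0, 1, n)) > 5

# Gender itself is never a feature, only the proxy.
X = np.column_stack([experience, womens_keyword])
model = LogisticRegression().fit(X, hired)

print("weight on experience:   ", round(model.coef_[0][0], 2))  # positive
print("weight on proxy keyword:", round(model.coef_[0][1], 2))  # strongly negative
```

Dropping the protected attribute does not remove the bias: the model reconstructs it from correlated features, which is reportedly what happened when Amazon’s tool downgraded résumés containing phrases such as “women’s chess club captain”.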
In another example, AI in healthcare is a major industry, omnipresent in diagnostics and pharmaceuticals, and healthcare is a field profoundly shaped by gender bias. For decades, clinical trials consisted largely of men and prioritised male bodies. Over time this produced a mass of data calibrated to male physiology, leading to diagnostic delays for women and, in extreme cases, outright misdiagnosis.
Another significant yet under-discussed provision of the OBBBA is its restriction on federal courts’ authority to enforce contempt rulings against government officials. This limitation poses a serious threat to the rule of law, particularly in areas such as data privacy, digital rights, and cybersecurity accountability. By curbing judicial enforcement mechanisms, the bill risks insulating government agencies from consequences and undermining long-term efforts to establish effective oversight of the tech sector.
Who does it benefit, then?
AI infrastructure providers and defence contractors favour this centralised regulatory framework. They argue that navigating fifty different state laws is onerous, and that temporarily suspending state-level AI regulation promotes innovation and strengthens competition against Chinese companies. However, the moratorium strips state and local governments of their regulatory authority, forcing them to remain passive even as realities on the ground evolve. The divide over the bill deepens as competing fiscal ideologies and divergent views on technology governance collide.
GenAI giant OpenAI applauded the decision, but its stance is hardly disinterested: the company recently signed a $200 million contract with the US Department of Defense to apply generative AI to military purposes. The potential for algorithmic manipulation to distort political discourse or erase ideological perspectives underscores the threat of AI-driven political erasure. A recent example is “Ghiblification”, in which OpenAI’s image tools and Midjourney let users replicate Studio Ghibli’s whimsical art style. The White House’s X account joined the trend by posting a Ghibli-styled picture of a crying detainee, which quickly drew attention and severe backlash. In India, meanwhile, a Ghiblified image of the Babri Masjid demolition went viral on the same platform.
Conclusion
Transparency is not a luxury but a prerequisite for legitimacy in a world already distrustful of AI. This step feels, above all, like the US veering towards a regulatory vacuum. Without careful regulation, we are not writing laws; we are scripting our own Black Mirror episode. Regulating AI is imperative for safety and efficiency, and for safeguarding democratic values and social equity. Unchecked AI systems can entrench systemic biases, exacerbating gender disparities, amplifying racial discrimination, and reinforcing class divides, while simultaneously concentrating power in the hands of a few dominant tech companies. Without robust oversight, AI risks becoming a tool of digital fascism, subtly shaping thought, silencing dissent, and deepening societal inequities under the guise of innovation.
