Terms and Conditions may apply: Inside Trump’s One Big Beautiful Black Box

We have seen enough episodes of Black Mirror to be familiar with the eerie, plausible future where AI governs love, memory, and justice, and ruins everything. We are not too far off; cue Trump's "One Big Beautiful Bill". It begins like a trailer: 'One bill to rule them all'. AI? Handled. Cyber threats? Neutralised. Free speech? Protected (apparently). It promises control, safety, and sovereignty. But behind the grandstanding and patriotic soundbites lies something far more sinister and less cinematic: a sweeping, messy legislative attempt to control the future with the blunt tools of the past.
The "One Big Beautiful Bill" promises safety, order, and American greatness in the digital age. But when you ask who controls the code, who defines 'truth', and who gets erased at an algorithm's whim, the beauty begins to fade into a black box you can neither open nor escape. This article is divided into two parts. In this first part, I discuss the technical aspects of the bill. The second part will examine why these issues demand urgent attention.
Introduction
On May 30, 2025, the US House of Representatives passed the 1,100-page One Big Beautiful Bill Act (OBBBA) by a narrow margin of 215–214. The bill sparked numerous debates as it reshaped the policy conversation in the US. It garnered substantial support and an equally strong backlash, especially from high-profile figures in tech such as Elon Musk. The discussions around deregulation and digital innovation, which had been a focal point of the partnership between President Trump and Musk, have now taken a bitter tone, with Musk calling the bill "a disgusting abomination".
While the bill has attracted criticism for its tax and immigration provisions, its implications for technology and cyber security are equally significant. This bill marks a pivotal shift in the federal government’s approach to digital infrastructure, national cyber defence, and AI governance. This transformation is more critical now than ever.
There have been various initiatives on global AI governance, such as the UN AI for Good Global Summit (2024), the G7 Hiroshima AI Process (2023), and, most recently, the AI Action Summit in Paris (2025), which brought together prominent global leaders to discuss AI governance. These initiatives have remained fragmented, often lacking binding commitments or enforceable frameworks. In contrast, domestic legislative efforts attempt to consolidate AI governance within national borders, raising questions about their global coherence and accountability.
At the time of publication of this article, President Trump has signed the OBBBA into law. This could mark a landmark shift in US tech policy, potentially influencing AI governance frameworks in countries like India.
What does it mean for AI governance?
President Biden's administration focused on establishing ethical guidelines and guardrails for AI. The new bill, by contrast, emphasises a deregulated, laissez-faire environment for the AI landscape. From an AI perspective, the most contested provision is undoubtedly Section 43201. It directs the Department of Commerce to allocate funds toward modernising federal information technology systems by integrating commercial artificial intelligence. It also imposes a 10-year moratorium on state and local regulation of AI systems, which would affect more than 60 AI-related laws. The bill defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments".
Notably, the Act outlines exceptions under which state-enforced AI laws may remain valid. The OBBBA's exceptions adopt dormant Commerce Clause standards from Pike v. Bruce Church, permitting state AI regulations only if the compliance burdens are not clearly excessive relative to the local benefits. For example, a state law requiring bias audits for hiring algorithms might survive if compliance costs are low and the anti-discrimination benefits are high, whereas a law mandating disclosure of proprietary code could fail as unduly burdensome. State laws avoiding such disproportionate requirements remain valid unless pre-empted by federal rules. Additionally, the bill includes a "reasonable and cost-based fees" exception.
The final two exceptions suggest that the moratorium would primarily impact state laws that apply distinct regulatory standards to AI systems. Nevertheless, even with these carve-outs, the moratorium is poised to significantly reshape the regulatory framework, particularly in the absence of comprehensive federal regulation to fill the void left by curtailed state-level oversight.
Is the tech industry happy?
The OBBBA would effectively bar states from introducing independent regulations on automated decision-making, algorithmic bias, facial recognition, or data privacy in AI applications. The lack of regulatory oversight equates to a lack of accountability, despite numerous AI summits advocating for governance. Technological advancements continue to outpace legal regulation, and significant risks may remain unaddressed if the current trajectory persists. The bill prioritises innovation over regulation, amplifying the potential impact of such deregulation on the technological landscape.
Under the law, total funding for border technology is $70 billion, which includes investments in AI-enabled surveillance towers, drone systems, and integrated communication backbones. While IT vendors and cloud providers have welcomed this step, privacy advocates view it as blatant overreach, marking the beginning of an era of always-on federal monitoring. Interestingly, tech giants like OpenAI and Anthropic have quietly supported the measure, arguing that a unified federal framework will provide regulatory clarity and prevent innovation from being stifled by a patchwork of state laws.
One premise supporting the bill's deregulatory bent is the pro-innovation stance taken by tech giants and scholars like Adam Thierer, who emphasise the need for a more flexible, adaptive, bottom-up approach to governance that addresses algorithmic concerns while fostering AI growth. While Thierer acknowledges AI discrimination and bias, his stance relies on a "permissionless innovation" framework.
His argument prioritises minimal AI regulation to foster technological development over ethical considerations, which critically underestimates the systemic biases embedded in AI. Without robust oversight, these systems would perpetuate and amplify societal inequalities. In 2019, in HUD v. Facebook, the US Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act by allegedly serving targeted advertisements that discriminated on the basis of race and colour.
This 'innovation-first' approach overlooks the fact that biased systems can become entrenched at scale, making corrections exponentially more costly than proactive regulation. The argument conflates short-term market efficiency with long-term societal welfare, ignoring how unethical practices erode public trust and create liabilities that ultimately damage the innovation ecosystem. Several cases demonstrate that AI tools are prone to bias. In Mobley v. Workday, one of the most closely watched legal challenges in this area, the use of AI in employment decisions was squarely contested.
Conclusion
The claim of fostering innovation is illusory; rather than levelling the playing field, it entrenches the dominance of established tech giants. If innovation rests on pillars that perpetuate inequalities, it becomes exclusionary. Startups would be left without meaningful support, while market advantages remain concentrated among those already at the top, doing little to enhance competition. AI is advancing at an unprecedented pace, and giving it free rein without regulation for 10 long years may result in untoward consequences.
The next part of this article will discuss in detail the adverse consequences of leaving AI unregulated.
