Inside AI’s power brokers

23 Nov 2025 | By Nilantha Ilangamuwa


Saudi Crown Prince Mohammed bin Salman received a warm welcome at the White House from President Donald Trump in a visit that reshaped bilateral relations, though some argue Saudi Arabia gained disproportionate leverage. Beyond formalities and defence deals, the real negotiation is over cognition and computation. 

While states bargain over arms and investment, the architects of humanity’s near future are the technocrats and entrepreneurs controlling intelligence itself. Last week’s remarks from Elon Musk and Jensen Huang at the US-Saudi Investment Forum, and from Sundar Pichai in his BBC interview, offer the clearest window into that negotiation.

Musk, ever audacious, confronts the existential implications of his own creations with unapologetic bluntness. “My prediction is that work will be optional… It’ll be like playing sports or a video game or something like that,” he proclaimed. 

The bluntness is a harbinger: the very substrate of human occupation may become discretionary, superfluous in an economy dominated by autonomous intelligence and robotics. Tesla’s humanoid robots, Musk asserts, will be “actually useful,” integrated into production, logistics, and potentially domestic life, creating a society in which labour is decoupled from survival. 

He frames Artificial Intelligence (AI) as the sole vector for societal abundance: “There is only basically one way to make everyone wealthy and that is AI and robotics.” In this vision, wealth and relevance are no longer corollaries of human effort but of technological orchestration.

Huang elucidates the technical substratum underpinning Musk’s speculation. The shift from static software to generative, contextually adaptive intelligence is the fulcrum upon which AI’s societal transformation pivots. 

“Today, software is going to be generated in real time… based on the prompt that you give it and based on the circumstance… It is contextually sensible and therefore intelligent,” Huang explained. This is a profound inflection point: intelligence is no longer merely encoded; it emerges in interaction. 

AI becomes a collaborator, anticipating needs, extrapolating solutions, and synthesising knowledge across previously unbridgeable silos. The implications are disquieting as much as exhilarating: the machinery of thought, once the exclusive province of humans, is becoming replicable, scalable, and autonomous.

Pichai provides a complementary prism, emphasising both the scale of investment and the attendant responsibility. Speaking to the BBC, he described Silicon Valley’s current moment as extraordinary even by its own standards: “Maybe four years ago, Google was spending less than $30 billion per year. This year, that number is going to be over $90 billion… In the next couple of years, we will end up building what we probably built in the past 10-20 years.” 

The velocity of accumulation is unprecedented. The scale of human ingenuity and capital coalescing around AI is akin to the industrial revolutions of prior centuries, but with immediacy that threatens to outpace societal adaptation.

Yet Pichai tempers this enthusiasm with prudence. AI is not infallible; it is probabilistic, and current architectures are prone to errors. “There are moments these AI models fundamentally have a technology by which they’re predicting what’s next and they are prone to errors,” he admitted. 

Gemini, Google’s flagship AI model, integrates the power of search to mitigate inaccuracies, yet fallibility persists. Truth, Pichai stresses, remains a human imperative: journalism, verification, and informed judgement cannot be ceded entirely to probabilistic algorithms. The societal calculus becomes a delicate balancing act: accelerate intelligence without compromising epistemic integrity.


Techno-economic implications 


The techno-economic implications are staggering. AI demands computational infrastructure at scales that strain terrestrial capacity. Data centres, already prodigious in energy consumption, could soon surpass the electricity demand of entire nations. 

Here, Musk’s vision of orbital AI becomes salient. By situating computation in space, powered by solar arrays, humanity could circumvent terrestrial energy constraints and atmospheric limitations. The concept is no longer hypothetical: Musk suggests that orbital intelligence could become a critical vector for unfettered AI expansion, a planetary nervous system for computation, independent of geopolitical and environmental frictions.

Pichai echoes the urgency of infrastructure but situates it within a socially responsible frame. AI’s energy appetite is immense, he acknowledges, yet it is also a catalyst for sustainable innovation. 

“We are investing to develop new sources of energy… We just finished signing the largest corporate purchase for nuclear fusion energy with Commonwealth Fusion Systems… many purchase agreements for energy from small modular nuclear reactors… using geothermal energy in our data centres.” 

AI, in this frame, is both a consumer and accelerant of energy innovation, compelling investment in renewable and advanced energy systems that might otherwise lag decades behind.


Societal consequences


The societal consequences are immediate and existential. Musk’s humanoid robots and Huang’s contextually intelligent software foreshadow a profound displacement of labour. 

Pichai admits that AI is already affecting professions from medicine to journalism, law, and the creative industries. Yet his prescription is adaptation rather than resistance: “It doesn’t matter whether you want to be a teacher or a doctor… the people who will do well in each of those professions are people who learn how to use these tools.” 

Here lies the latent tension: proficiency with AI may become the primary determinant of economic and social relevance, reshaping the social contract between skill, labour, and opportunity.

Beyond labour, AI poses philosophical and ethical quandaries. Musk, Pichai, and Huang implicitly acknowledge the concentration of power inherent in frontier AI. Huang observes that the technical sophistication required privileges the largest corporations and states; Pichai stresses the need for industry-wide frameworks and governmental engagement. 

Musk warns against monopolisation: “No one company should own a technology as powerful as AI.” The spectre of concentrated cognitive capital evokes Orwellian concerns, yet it also hints at the unprecedented possibilities of cooperative governance if deployed equitably.

Pichai, in particular, situates the ethical framework within a broader societal responsibility. AI is neither a solitary tool nor a deterministic agent; it is embedded within human institutions. 

“If you only construct systems standalone… you would get less reliable information, which is why I think the information ecosystem has to be much richer than just having AI technology being the sole product in it… Truth matters. Journalism matters.” 

The insistence on pluralism and accountability suggests that AI’s promise is inseparable from the ecosystems it inhabits: human discernment, legal frameworks, and institutional norms remain indispensable bulwarks against misuse.


A world remade


The frontier of intelligence is already exceeding human capacity in domains once thought inviolable. DeepMind’s AlphaFold, as Pichai proudly notes, has predicted the structures of 300 million proteins in months — a task that would consume lifetimes of individual research. “It would take one PhD their entire PhD to do one protein,” he said. 

Here, the magnitude of computation is simultaneously awe-inspiring and disquieting: knowledge itself is becoming an artefact of synthetic intelligence, an ontological shift in what it means to discover, understand, and create.

Ethical questions also arise in balancing creators’ rights with the public good. Pichai acknowledges that AI training relies upon existing content, from books and music to journalism, yet he emphasises frameworks to respect intellectual property. “Today, when we train, we give people an opportunity to opt out of the training, and we honour copyright in terms of how our outputs are generated.” 

These converging narratives reveal a world remade by artificial cognition. The three men pulling the levers of the most consequential game facing our civilisation expose a stark truth: intelligence is no longer exclusively human. It can be engineered, deployed, and scaled beyond any person, any institution, even beyond orbit.


(The writer is an author based in Colombo)


(The views and opinions expressed in this article are those of the writer and do not necessarily reflect the official position of this publication)
