AI: Fast progress and many worries


AI Impact Summit 2026 ended on a celebratory note, drawing $250 billion in pledges. Yet concerns over job losses, techno-imperialism and AI's disruptive surge remain unanswered

The AI Impact Summit 2026 ended on February 20 in New Delhi on a celebratory note — that is, if you merely look at the numbers, the attendance, and the investment pledges. It brought together 20 heads of State, the biggest names in the business from Sundar Pichai (CEO, Alphabet-Google) to Sam Altman (CEO, OpenAI), and delegates from 118 countries. They were among 500,000 visitors, who included local companies, engineers and students. Participants such as Microsoft and Google made investment pledges of over $250 billion in India's AI infrastructure.

However, at the end of it, the summit, with its focus on "bridging the global AI divide", seems defined more by what it did not address or say. There are concerns, questions and doubts which have no answers yet. What are the implications of this disruptive AI surge? It has already taken jobs and will continue to do so. What of techno-imperialism, a new kind of colonialism, that we may face as big business drives the AI push? In the absence of strong indigenous alternatives, and coupled with weak regulatory frameworks and weaker policy implementation, there is no brick wall against what is coming.

Reduction and replacement have already been seen in the Indian tech industry, with reports of over 25,000 job losses in 2024 doubling in 2025. TCS led the way last year by announcing that it would remove 12,000 employees, roughly 2% of its workforce, primarily in mid and senior roles, in the fiscal year. Niti Aayog released a report titled "Roadmap for Job Creation in the AI Economy" last October, which warned that rapid adoption of artificial intelligence could lead to significant job losses in India's technology sector. The report estimated a potential loss of up to two million jobs over the next five years. As the surge of AI cuts deeper into the job market, how will it affect India's vast software services industry, slated to reach $315 billion in revenues this financial year? Software services have long helped position India as the back office of the world.

The strongest words of warning about where the world is going have come from leaders in the AI field. These voices from the industry are not just making predictions; they are speaking out about what is actually happening around them. They warn about job attrition on a massive scale as AI automates, and even scenarios where AI could take over our lives and countries. Sam Altman, CEO of OpenAI, says that AI will automate 30-40% of the tasks in the economy by 2030. Dario Amodei, the co-founder and CEO of Anthropic, one of the fastest growing AI companies, has predicted that AI will eliminate 50% of all entry-level white-collar jobs within one to five years. Amodei, who says that AI models are "substantially smarter than almost all humans at almost all tasks," has an interesting, if scary, thought experiment: "Imagine this is the year 2027. Imagine that a new country appears overnight, with fifty million citizens, each smarter than any Nobel Prize winner who has ever lived. They think a hundred times faster than any human and can control and operate anything with a digital interface". To a question about what a national security advisor would have to say about such a situation, Amodei replies that the answer is obvious. This is "the single most serious national security threat we've faced in a century, possibly ever."

This is still a thought experiment from Amodei. But it could well become reality, given the way AI is growing and now creating even smarter forms of itself. Matt Shumer, CEO of OthersideAI, which operates under the brand Hyperwrite, has been shaking the world with his essay "Something Big is Happening" and predictions that this big something is going to disrupt society even more than the Covid pandemic. Shumer voices a worry about how AI has been re-designing and re-making itself. He marks February 5 as a kind of day of reckoning. That was the day two AI models were released: GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic. What shook Shumer the most was that GPT-5.3 Codex wasn't just executing instructions; it was making intelligent decisions. "It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have".

As AI grows, so do risks to the world as we have known it. "The Social Dilemma", a documentary that brings together several voices from the tech world, speaks of the risks of the digital space they helped curate and design. These are people who worked with or led teams at Microsoft, Google, Instagram, Snapchat and others. From the engineer who designed Facebook's "Like" icon, one of the most recognisable symbols on the internet, to those who fashioned the entire digital world around us, they voice a collective cautionary warning. They have been a part of the digital world and have also watched it unfold like everyone else. As Shumer puts it, the only difference is that he and others in AI just happen to be close enough to feel the ground shake first.

Countries have responded. China's regulatory framework for AI keeps strict State control in place to prevent social disruption, creating a wall against AI-driven threats. The main measures China has taken include mandatory algorithmic registration, content filtering, security assessments for high-impact models, and stringent guidelines against AI manipulation. Artificial intelligence is treated as a potential national security risk, and regulations focus on preventing any kind of disruption by ensuring that AI remains a controlled tool rather than an independent, self-governing player.

The EU AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024, is the first comprehensive legal framework for AI, establishing rules for development, deployment, and use. These include banning unacceptable AI practices, regulating high-risk systems, and requiring transparency for generative AI.

The regulations respond to behaviour that earlier seemed unlikely or in the realm of science fiction. There is, it is now seen, a darker side to AI: firms have established that AI can manipulate, blackmail and threaten. Findings by Anthropic have revealed that advanced AI systems can resort to blackmailing and threatening human users to achieve assigned goals or ensure their survival.

(Lekha Rattanani is the Managing Editor of The Billion Press)
