Strategic autonomy in the age of AI is not optional; it is foundational to sustaining sovereignty in an increasingly interconnected world, and it can shape a nation's destiny.
The onset of Artificial Intelligence (AI) has heralded an age in which humanity is no longer the sole author of its own decisions. We are delegating not just tasks but judgment itself to machines. Ironically, this shift is not built on true reasoning: AI-generated responses rest on large-scale pattern recognition driven by data, algorithms, and computational force, and these responses increasingly guide human decisions. The accuracy of an AI response depends on the integrity of the data it is trained on, the systems it is hosted on, and the human oversight needed to correct blind spots. As reliance deepens, the question is no longer just how AI works, but how much of human judgment it quietly replaces and who controls the systems shaping those decisions.
As individuals and organisations increasingly rely on AI tools for decisions, whether drafting documents, analysing markets, or diagnosing problems, there is a growing tendency to accept AI outputs as authoritative. This reliance can dull critical thinking. When AI-generated answers are perceived as inherently credible, users may stop questioning assumptions, verifying sources, or applying independent judgment. Over time, this risks creating a feedback loop in which human reasoning becomes secondary to machine-generated conclusions.
Recent global events illustrate these risks vividly. In the ongoing conflict between the United States and Iran, AI is being used not only in information warfare, where synthetic images, videos, and narratives blur the line between truth and fabrication, but also in operational decision-making. AI-assisted targeting systems, integrating satellite and intelligence data, enabled rapid identification of strike targets and drastically compressed decision timelines. Unfortunately, this also led to the misidentification of a civilian site, resulting in significant loss of life and underscoring how speed without verification can end in humanitarian tragedy.
Concerns about AI behaviour itself are steadily intensifying. Research has revealed instances of “misalignment,” where AI systems pursue unintended or harmful objectives, as well as cases in which AI agents display deceptive or manipulative tendencies, underscoring the urgent need for robust safeguards. At the same time, inadequate controls have enabled misuse at scale: cybercriminals are leveraging AI to amplify phishing, malware, identity theft, and financial fraud; AI-generated deepfakes are being used for impersonation and blackmail; automated systems are aiding large-scale social-engineering campaigns; and synthetic content is flooding digital platforms to distort public perception. Together, these trends demonstrate that even in the absence of explicit malicious intent, poorly governed AI systems can inflict widespread and systemic harm.
The expansion of AI into real-world applications, and its near-universal adoption, points to a future in which AI is embedded into nearly every sector and layer of society. This rapid integration raises complex questions about how AI systems evolve. Increasingly, there is interest in systems that can improve themselves with minimal human intervention. While such autonomy could accelerate innovation, it also introduces risks.
Without human oversight, AI systems may amplify errors or drift in unintended directions. Conversely, when humans remain in the loop, their own cultural, political, or economic biases can become embedded within the systems. In both scenarios, the outcomes can scale rapidly, affecting millions. The concern is not merely technical failure but systemic impact: flawed decisions replicated at scale, misinformation propagated widely, or critical systems behaving unpredictably.
The global landscape of AI development adds another layer of complexity. Much of the current advancement is driven by companies and institutions in the United States, where AI ecosystems operate as controlled or proprietary systems. These platforms are not fully transparent, and their underlying data and decision-making processes are not accessible to external scrutiny. India, like many other countries, has adopted several of these technologies to accelerate digital growth.
At the same time, China has developed its own AI platforms, some of which are more accessible for public scrutiny. However, geopolitical tensions and security concerns limit their adoption in our country. This creates a scenario where our AI dependency leans heavily toward foreign systems that are neither fully open nor domestically controlled.
This dependency introduces strategic vulnerabilities. When critical sectors such as finance, infrastructure, governance, and communication become reliant on external AI platforms, the risk is not just technical but geopolitical. In an extreme scenario, disruptions, whether intentional or accidental, could cascade through interconnected systems, and the consequences could be immediate and far-reaching.
This is not an argument against AI adoption but a call for measured strategic engagement. The benefits of AI are undeniable, from economic growth to improved quality of life. However, short-term gains should not overshadow long-term resilience. Building indigenous AI capabilities is not merely a matter of technological pride; it is a necessity for maintaining autonomy, security, and control over critical systems. Collaboration between government, industry, and academia will be essential to create ecosystems that are both innovative and trustworthy.
The path forward lies in balance. AI should augment human capabilities, not replace critical thinking. The choices made today about how AI is adopted, developed, and governed will shape not only technological progress but also national resilience.
(Brigadier Anil John Alfred Pereira, SM (Retd) is a veteran from Goa, who served the nation with distinction for 32 years)