Coded conflict at quantum speed: AI’s relentless takeover of modern warfare
Samira Vishwas July 10, 2025 05:24 PM


Today’s headlines on Ukraine’s deployment of AI-guided drones capable of autonomous navigation and terminal targeting underscore how swiftly artificial intelligence is rewriting the rules of modern warfare.

With only about 5% of its arsenal AI-enabled, Ukraine is nonetheless showcasing real-world tactical breakthroughs. This immediate convergence of innovation and conflict underscores the urgency of examining how AI is transforming the battlefield at quantum speed.

The twenty-first century battlefield is no longer defined merely by boots on the ground or bombs from the sky. Instead, it is increasingly shaped by invisible lines of code, autonomous decisions, and split-second machine logic. Artificial Intelligence (AI) has emerged as the neural core of modern military power, triggering the most consequential transformation in warfare since the advent of nuclear weapons. AI systems now autonomously track enemy movements, select targets, assess collateral damage, and even launch precision strikes, all at speeds beyond human cognition. In this high-velocity environment, war is fought not just with hardware but with algorithms. AI’s integration into command structures and combat scenarios has raised a fundamental question: Is strategy still human-led, or are we ceding control to intelligent systems that we only partially understand?

“This is the Oppenheimer moment of our generation.” – Robert Work, Former U.S. Deputy Secretary of Defense

AI and the Israeli Targeting Model: Precision at Ethical Risk

Israel’s recent operations in Gaza revealed how deeply AI can be embedded in modern military targeting. Codenamed systems like Lavender, Gospel, and Where’s Daddy were deployed to process millions of metadata points, real-time surveillance inputs, and behavioral patterns. These AI tools generated thousands of strike targets at rates that would be inconceivable using traditional intelligence methods. Reports by +972 Magazine and The Guardian highlighted that Lavender alone was identifying over 250 new targets daily, based on algorithmic probability of threat.

While this significantly improved operational tempo and strike efficiency, it also raised severe ethical alarms. Human Rights Watch warned of the systemic risks of removing human deliberation from the kill chain. Nadim Nashif of 7amleh observed, “These tools maximized the killing and enlarged the targeted circles. It industrialized war.” The Israeli case underscores a profound paradox: AI may increase accuracy, but it can also amplify moral distance, converting war into an abstract computational exercise divorced from human empathy.

“We are entering an era where civilians become legitimate targets, not by intent, but by AI design flaws.” – Human Rights Watch briefing, 2024

Ukraine: Innovating Under Fire

Ukraine has become the most compelling case study of wartime AI innovation under extreme constraints. In 2024 alone, Ukrainian forces deployed over two million drones, with 96.2% manufactured indigenously. The Ukrainian military, in collaboration with domestic tech startups and international volunteers, built modular AI frameworks capable of executing real-time image recognition, facial tagging, and autonomous navigation.

Swarming technologies, AI-enhanced reconnaissance, and battlefield decision platforms allowed Ukraine to offset Russian air and armor superiority. Systems like Estonia’s THeMIS and Poland’s Perun were adapted with Ukrainian-coded AI modules for mine detection, urban warfare, and precision logistics. As Eric Schmidt, former Google CEO and Pentagon AI advisor, put it: “Drones now adapt faster than tanks can roll.” This live adaptation cycle created a feedback loop where battlefield data continuously improved AI efficiency. In effect, Ukraine became the world’s first combat lab for agile AI warfare.

“The battlefield has become a living lab for AI evolution.” – Kyiv Independent, April 2024

Russia’s Algorithmic Command Doctrine

Russia’s AI military doctrine pivots on speed, deception, and automation. According to RAND Corporation and the Atlantic Council, Russia is pursuing what it calls “algorithmic command” – the use of AI to make battlefield decisions faster than adversaries can react. The doctrine includes autonomous missile targeting, unmanned ground systems, and neural network-assisted electronic warfare platforms.

By 2030, the Kremlin expects 30% of its combat systems to be AI-enabled. This includes robotic artillery, autonomous tanks, and AI-directed battlefield deception strategies. Russian doctrine emphasizes grey-zone operations, cyber-physical disruptions, and the psychological manipulation of both enemy forces and populations via AI-driven disinformation. This approach not only accelerates conflict but destabilizes traditional deterrence models. The fog of war is no longer merely metaphorical; it is now a programmable reality.

“Russian AI doctrine is not about automation, it’s about confusion, speed, and dominance in the grey zone.” – Atlantic Council Report, 2024

U.S. and the Rise of Autonomous Air Dominance

The United States is developing a fleet of AI-powered “Loyal Wingman” autonomous jets that fly in coordination with human-piloted aircraft. With a budget exceeding $8.9 billion, these aircraft are expected to perform decoy maneuvers, precision strikes, and real-time threat assessments. Field tests at Edwards Air Force Base revealed that AI-operated jets outperformed human pilots in complex dogfights.

The U.S. Air Force’s ABMS (Advanced Battle Management System) integrates AI to enable seamless data-sharing across platforms, reducing response time from minutes to milliseconds. As General Charles Q. Brown remarked, “AI is becoming the brain of every wing, every fleet, every unit.” The emphasis is on full-spectrum dominance: space, cyber, air, and logistics – all orchestrated through distributed AI command nodes. These developments signal a future where the speed of decision-making becomes the new measure of military power.

“AI doesn’t just help you win faster. It helps you reimagine what winning means.” – Clara Lin Hawking, Wired Defense Review

Project Maven and the Intelligence Revolution

The U.S. Department of Defense’s Project Maven has already transformed battlefield intelligence. Initially built to process drone footage, the program uses deep learning to detect vehicles, structures, and human figures with staggering accuracy. What once took 2,000 analysts can now be done by 20, with higher precision and lower latency.

After Google exited Maven in 2018 amid ethical protests, the project was restructured to improve transparency and expand its reach. It now supports predictive maintenance, logistics, and threat modeling. Yet the moral dilemma remains. As Ethan Zuckerman of MIT cautioned, “Speed without clarity is not progress – it’s peril.” Maven’s trajectory illustrates both the promise and the peril of turning decision-making into a function of machine logic.

“Speed without clarity is not progress – it’s peril.” – Ethan Zuckerman, MIT Media Lab

The AI Arms Economy: Globalizing the Battlefield

AI militarization is no longer the monopoly of great powers. The global autonomous aviation market is set to grow from $8 billion today to over $32.3 billion by 2033. This democratization of war-tech is fueled by dual-use platforms, modular AI kits, and drone ecosystems accessible to middle-income nations.

Startups and private defense contractors are driving innovation in AI chips, edge computing, swarm logic, and autonomous logistics. These systems are often cheaper and faster to scale than traditional manned platforms. As a result, military advantage is shifting not just to those with nuclear arsenals, but to those with agile codebases. Countries like India, Turkey, and Brazil are emerging as regional leaders in AI-enabled defense exports. The danger is clear: with more actors capable of wielding such power, the thresholds for conflict may lower, not rise.

“Autonomous weapons are no longer futuristic, they’re budgeted, built, and battlefield-ready.” – Jane’s Defence Weekly, 2025

Need for Human Intelligence to Govern Artificial Intelligence

Artificial Intelligence is reshaping not just how wars are fought, but how military power is defined. Speed, precision, and data supremacy are becoming more decisive than manpower or firepower. Yet the central paradox remains: the smarter our weapons become, the more critical it is to retain human judgment. The rise of autonomous systems demands a new doctrine, one not just of deterrence but of digital restraint.

Can democratic oversight, international law, and ethical reasoning keep pace with self-learning machines? Or are we drifting toward a strategic environment where escalation is measured in milliseconds and accountability disappears into the fog of code? The future of warfare will be determined not only by those who deploy AI most effectively but by those who govern it most wisely.

(Major General Dr Dilawar Singh, a Ph.D. with multiple postgraduate degrees, is a seasoned expert with over four decades of experience in military policy formulation and counter-terrorism. He has been the National Director General in the Government of India. With extensive multinational exposure at the policy level, he is the Senior Vice President of the Global Economist Forum, AO, ECOSOC, United Nations. He is serving on numerous corporate boards. He has been regularly contributing deep insights into geostrategy, global economics, military affairs, sports, emerging technologies, and corporate governance.)

© Copyright @2025 LIDEA. All Rights Reserved.