The Frankenstein War

October 2, 2025

DIANE FRANCIS

Ukraine was the Silicon Valley of the former Soviet Union for decades, and it was mostly Ukrainians, not Russians, who put the first man into space and pioneered the development of artificial intelligence (AI), rocketry, aviation, IT, medicine, cybersecurity, and weaponry. Ukrainians invented the X-ray, the modern helicopter, the compact disc (CD), facial recognition, Grammarly, WhatsApp, and PayPal, among many other innovations. For decades, however, Ukraine's world-class brainpower went unrecognized, its achievements expropriated by Moscow. Now it is on display globally as Ukrainians have technologically reinvented war to prevent Russia from destroying their nation. A world-class army, air force, and navy have been forged from scratch with software, robots, and drones. At the helm of this transformation is Ukrainian President Volodymyr Zelensky's government, which harnessed the country's computer science and software elite to defend the nation. But last week, Zelensky issued a warning to the United Nations that must be heeded: "We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence (AI). Weapons are evolving faster than our ability to defend ourselves against them."

Zelensky stated that the world must regulate these weapons before it's too late. For Ukraine, this is not a theory but a lived experience. The country has fought successfully for more than three years in what has become the world's first AI-driven conflict. Ukrainian drones that think, artillery guided by algorithms, and autonomous targeting systems are no longer the future: they are deciding life and death every day on land, sea, and air. Others attempt to catch up. Thus, Zelensky's message was blunt: "We need global rules — now — for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons," he said. "War machines are being built by man that are beyond man's control. It's only a matter of time, not much, before drones are fighting drones, attacking critical infrastructure, and targeting people all by themselves, fully autonomous and no human involved."

Theoretical warnings have been issued in the past about the dangers of uncontrolled AI. Elon Musk stated that AI poses a greater risk than North Korea and could potentially endanger humanity unintentionally. The late British physicist Stephen Hawking also believed that the development of full AI could lead to the end of the human race. But, most ominously, Russia's Vladimir Putin commented publicly on AI's potential for both opportunities and threats, suggesting that leadership in AI would translate into global dominance. Currently, Moscow races to catch up with Ukraine. So does China.

International law and ethics have not caught up with this new reality. Treaties ban landmines and chemical weapons, yet autonomous weapons have slipped through the cracks. For years, scientists and technologists have campaigned internationally to ban Lethal Autonomous Weapons Systems (LAWS), also known as "killer robots": machines capable of selecting and attacking targets without human intervention. Instead of bans or controls, however, the world operates under "voluntary principles," such as the Pentagon's AI ethics or the European Union's "trustworthy AI." All are inadequate because they carry no enforcement capability, specifications, or punishment, which means that no entity can or will stop Russia, China, Iran, North Korea, or anyone else from unleashing killer drones and robots.

Zelensky's concern is not theoretical; it is based on weapons currently on battlefields or in military inventories. For instance, Russia's Poseidon is a "doomsday device": a nuclear-powered, nuclear-armed undersea drone designed to cross oceans undetected and detonate off enemy coasts, creating radioactive tsunamis. Putin personally unveiled the monster weapon a few years ago and, in February 2024, said it was "about to complete its testing stage." The Pentagon has called the Poseidon a "novel and destabilizing" system that falls outside all existing arms treaties. Unlike conventional nukes, this undersea drone is designed to navigate and strike autonomously once launched, the very embodiment of Zelensky's nightmare. Russians claim that one of their nuclear-armed torpedoes could decimate a coastal city.

Another frightening AI weapon is the "kamikaze drone," or "loitering munition," which hunts, hovers, and dives onto its target. Already used in Ukraine, these weapons operate without humans. "The high-precision munition is capable of independently detecting targets and engaging them according to pre-defined parameters, like the most advanced models of its class," pledges Russia's manufacturer Kalashnikov. Of such weapons, Zelensky commented: "Now, there are tens of thousands of people who know how to kill professionally using drones. Stopping that kind of attack is harder than stopping any gun, knife, or bomb."

The Pentagon is also pursuing stealth projects to match these capabilities, and China is openly industrializing the AI battlefield. In 2024, Beijing unveiled rifle-mounted "robot dogs" trotting in formation with troops. They are remote-controlled but, with improved "vision" systems, able to patrol and shoot on their own. The People's Liberation Army (PLA) is also testing drone swarms designed to overwhelm defenses and communication systems and deliver coordinated strikes. Beijing has deputized dozens of civilian robotics companies to collaborate with its military and aims to "intelligentize" warfare. A Pentagon report chillingly concluded that "the PLA seeks to integrate AI across the 'kill chain,' from intelligence fusion to autonomous engagement."

Back in 2015, over 1,000 AI researchers and experts signed an open letter warning against allowing this type of AI arms race: "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control." Since then, the use of AI has spread, and the United Nations has documented many incidents that raise ethical and legal concerns. Total casualties thus far are unknown, but a 2020 drone strike in Libya was considered by the UN Security Council to be the first time such an autonomous weapon killed human beings. After 2021, drones caused many civilian deaths in Africa in various military skirmishes and in refugee camps. A few years later, drones caused accidental deaths in Afghanistan when farmers carrying tools were mistaken for combatants. Now, in Ukraine, drones heavily populate the battlefield, skies, and waters, and in 2025, the UN Human Rights Monitoring Mission in Ukraine noted a steep rise in civilian deaths caused accidentally by drones.

The AI arms race is accelerating globally, which is why Zelensky warned that these weapons must be regulated. Autonomous lethal systems must be banned globally and verifiably; human oversight must be required in every "kill chain"; international verification regimes backed by sanctions must be created; and civilian AI research must be protected from military misuse. It's clear that Frankenstein's military monster has left the lab and, unless humanity heeds Zelensky and the AI experts, the next world war may not be fought by humans at all. But humans will certainly be its victims.