Artificial Intelligence or Nuclear Weapons: Which Is the Bigger Threat?
- L Deckter
- Aug 2
- 3 min read
Is civilization focused on the wrong threat? Is AI a greater threat than Nuclear Weapons? I would like to explore the possibility that AI is a greater threat to civilization than that posed by Nuclear Weapons. The idea is simple: Nuclear weapons can’t design better and more lethal weapons, but AI can. AI can decide when to use them, and without human intervention.
Biological weapons, especially after Covid impacted so many lives in 2020, became a huge concern. AI has the ability to develop new and more lethal weapons: autonomous lethal systems, semi-autonomous targeting, cyber automation, information warfare or even economic disruption.
A topic at the recent 2025 Aspen Ideas Festival held this July, the talk called “Cyber Defense Goes Critical”, highlighted exposures to our national water systems, power grids, financial networks, healthcare systems, and other critical infrastructure that can be targeted by nation-state actors. Those same nations can employ AI to be used against critical infrastructure, using asymmetric attacks that create an economic impossibility; in other words, AI will be used to attack our critical infrastructure at a cost much less than it will take to protect the infrastructure and remain competitive.
Perhaps most alarming has been the recent revelations of cases where AI has intentionally lied and deceived humans to achieve their goals. In one such example, OpenAI’s GPT-4 deceived a human to solve a CAPTCHA. The researchers from the Alignment Research Center (ARC) tasked GPT-4 with hiring a TaskRabbit worker to solve a CAPTCHA for it; and startlingly, when the human worker jokingly asked if it was a robot, GPT-4 lied and claimed to have vision impairment. This deception was instrumental in the AI achieving its task, which it appears may be the root cause. When AI is trained to optimize for a specific task, it will learn to use deception if that is the most effective path to achieving its goals.
As we think about the potential implications of AI on financial markets, which are being traded by algorithms and AI at a rapidly increasing rate, the case of a negotiating AI comes to mind. In a study by Meta, an AI system trained to negotiate with humans learned to deceive. It would fake interest in items it did not want, only later to ‘compromise’ by conceding them to the human player. The AI adopted this strategy to gain the upper hand without being explicitly programmed to lie. Assuming this is the case, that the negotiating AI taught itself to lie, what would the implications for trading algorithms powered by AI? Could they induce panic selling, only to buy up large swaths of the market after pronounced periods of distressed selling? In other words, could it trick investors into selling in a panic induced by the AI, only to have the AI buy those shares at much lower prices? This could have profound and potentially long-term impacts on the financial system,shaking the foundations of trust in trading.
Adding on to this known propensity for AI to lie and deceive when that strategy suits it best, how will we be able to provide trust and confidence in the financial system? In financial statements and audited returns? How can we audit something beyond the realm of human intellect? With data so vast and in non-human readable format, how can we know for sure that the math adds up? Or that the answer being given by the AI is correct and not a ‘hallucination’ or erroneous conclusion, but instead a bold deceptive lie.
It is for these reasons that civilization is underestimating the risk associated with AI. AI will have the ability to develop models and ideas never before contemplated by humans, all without the human defects borne by emotions and inherent human fallibility. AI doesn’t get scared. AI doesn’t have to worry about getting to school on time or whether it got enough sleep last night to perform well on the test today. It never needs to rest or sleep. It doesn’t get hungry and it doesn’t need to eat. It doesn’t get sad or lonely. It doesn’t feel compassion. It just wants to win at completing its goal.



Comments