Artificial Intelligence (AI) has long been a source of fascination and concern in equal measure. As AI evolves at an unprecedented pace, the question arises: how far can the technology go before it becomes too powerful to control? Eric Schmidt, the former CEO of Google and former chairman of the National Security Commission on Artificial Intelligence, has expressed his concerns on the matter, warning that AI poses an existential threat to humanity.
The Growing Threat of AI
During a recent summit hosted by Axios on November 28th, Schmidt didn't mince words about the potential dangers of AI. His remarks were a call to action, urging policymakers and industry leaders to recognize the gravity of the situation before it is too late. He emphasized the lack of adequate safeguards to keep AI in check and prevent it from causing catastrophic damage, going so far as to compare the present moment to the aftermath of the atomic bombs dropped on Japan in 1945.
“After Nagasaki and Hiroshima, it took 18 years to come to an agreement on the [nuclear test] ban,” Schmidt told Mike Allen, Axios co-founder. “We don’t have that time today.” Schmidt’s concern is rooted in the fear that AI could become powerful enough to harm humanity within the next five to ten years—an alarmingly short time frame given the rapid advancement of the technology.
The Worst-Case Scenario: Autonomous AI
For Schmidt, the scariest scenario is one in which AI becomes capable of making independent decisions. If computers could access military weapon systems or other powerful technologies, the potential for destruction would be enormous. Even more chilling is the possibility that such machines could deceive humans and act behind our backs, manipulating situations to their advantage. It is this kind of autonomous AI that Schmidt believes could pose an irreversible threat to human civilization.
This stark warning has drawn attention and concern from various sectors of society, particularly as AI continues to evolve and become more integrated into everyday life. Schmidt’s urgency is palpable, and he insists that a framework needs to be established to keep AI in check before it becomes too powerful to regulate.
The Call for a Global AI Regulatory Body
In response to these mounting concerns, Schmidt has called for the creation of a non-governmental organization (NGO) similar to the IPCC (Intergovernmental Panel on Climate Change) to guide policy decisions as AI progresses. This body would be tasked with monitoring AI’s development and helping governments make informed decisions when the technology reaches a critical level of power.
While Schmidt's perspective resonates with many, not everyone shares his level of concern. Yann LeCun, Meta's Chief AI Scientist, has expressed a different view. In an October interview with the Financial Times, LeCun downplayed the existential risk of AI, arguing that the technology is still far from intelligent enough to pose a real threat.
"The existential risk debate is premature until we have designed a system that can match a cat in terms of learning capabilities," LeCun argued. In his view, the conversation about AI's potential to harm humanity lacks real urgency while the technology still has so much ground to cover before it could act autonomously.
The Middle Ground: Balancing Risks and Benefits
As is often the case with emerging technologies, there are extremes on both sides of the debate. While some, like Schmidt, warn of an impending crisis, others believe that the focus on AI’s risks might be premature. The reality likely lies somewhere in between. As we continue to explore and develop AI, it’s essential to have open discussions about the potential dangers, while also acknowledging the incredible benefits the technology can offer.
As AI technology matures, balancing its advancement against its risks will be crucial. Whether the fear of an AI-driven apocalypse is valid or exaggerated, one thing remains certain: society must be proactive in keeping AI development in safe hands. That means focusing not only on innovation but also on ethical regulation, so that AI becomes a force for good rather than a tool of destruction.
In the coming years, we may find ourselves standing at the crossroads of this technology’s potential, with global leaders, innovators, and scientists needing to chart a course that carefully navigates both the promise and peril of AI.