Can AI Destroy Human Existence? Google DeepMind Warns AI Could Reach Human-Level Intelligence By 2030
News Update April 07, 2025 02:34 PM

In a new paper that’s already raising eyebrows across the tech world, Google DeepMind researchers have warned that artificial general intelligence—or AGI, the kind of AI that can think and learn like a human—could arrive much sooner than many expect. And with it, they say, comes the terrifying possibility of humanity’s extinction.

The study, co-authored by DeepMind co-founder Shane Legg, doesn’t spell out a specific doomsday scenario. Instead, it urges governments, companies, and societies to start taking the risks seriously before AGI becomes a reality—possibly as early as 2030.

“Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm,” the paper says, warning that existential threats that “permanently destroy humanity” are very much on the table.

The authors also emphasize that the decision over what counts as “severe harm” isn’t something companies like Google should make on their own.

“In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm.”

Four Types of Risk—and a Call for Guardrails

The researchers outline four broad categories where AGI could go dangerously wrong: misuse, misalignment, mistakes, and structural risks. These include everything from malicious actors using AI for harmful purposes, to seemingly well-designed systems that behave in unintended—and potentially dangerous—ways.

The report highlights DeepMind’s internal strategy for handling these concerns, especially around misuse prevention, where they believe the threat is most immediate. That includes preventing the use of AGI in terrorism, warfare, or social manipulation.

CEO Calls for Global Oversight

This isn’t the first time a leader at DeepMind has sounded the alarm. Earlier this year, CEO Demis Hassabis said he believes AGI could become real within the next five to ten years, and called for a major international effort to manage the risks.

“I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible,” Hassabis said in February.

He added that alongside such a research effort, there should be an international monitoring body—similar to the International Atomic Energy Agency (IAEA)—to keep tabs on unsafe or rogue projects. And beyond that, Hassabis envisions a broader global authority.

“You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world that input how you want to use and deploy these systems. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN,” he said.

What Exactly Is AGI?

Artificial General Intelligence is still more concept than reality—but the idea is powerful. Unlike today’s AI systems, which are good at specific tasks like writing text, recognizing faces, or answering questions, AGI would be able to reason, learn, and adapt across a broad range of tasks, much like the human brain.

Think of it as a computer system that can write code, compose music, diagnose diseases, and learn new languages, all without being retrained or reprogrammed. It’s the holy grail of artificial intelligence, and, depending on how it’s used, potentially the most dangerous.

While the paper doesn’t predict exactly how AGI might harm humanity, the tone is unmistakably urgent. The researchers argue that the global community should not wait until AGI arrives to start thinking about regulations, safeguards, and moral boundaries.
