12/10/2025 / By Kevin Hughes

The rapid advancement of artificial intelligence (AI) has sparked both excitement and deep concern among industry leaders, with warnings that artificial general intelligence (AGI) – AI that matches or surpasses human cognitive abilities – could arrive within the next decade.
Google DeepMind CEO Demis Hassabis has cautioned that AGI could bring “catastrophic outcomes,” including cyberattacks on critical infrastructure, autonomous weapons and even existential threats to humanity. Speaking at the Axios AI+ Summit in San Francisco, Hassabis described AGI as a system exhibiting “all the cognitive capabilities” of humans, including creativity and reasoning.
However, he warned that current AI models remain “jagged intelligences” with gaps in long-term planning and continual learning. Still, he suggested AGI could become a reality with “one or two more big breakthroughs.”
Hassabis emphasized that some AI dangers are already materializing, particularly in cybersecurity. “That’s probably almost already happening now… maybe not with very sophisticated AI yet,” he said, pointing to cyberattacks on energy and water systems as the “most obvious vulnerable vector.”
His concerns echo broader industry warnings. Over 350 AI experts, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and AI pioneers Yoshua Bengio and Geoffrey Hinton, signed a statement from the Center for AI Safety declaring: “Mitigating the risk of extinction from AI should be prioritized globally alongside other societal-scale risks, such as pandemics and nuclear war.”
Beyond infrastructure attacks, AI is already being weaponized for disinformation, fraud and deepfake manipulation. The Federal Bureau of Investigation has warned of AI-generated voice scams impersonating government officials, while deepfake pornography and political misinformation are proliferating.
BrightU.AI's Enoch notes that AI has emerged as a transformative technology, revolutionizing sectors from healthcare to finance. However, as with any powerful tool, AI's potential for misuse and weaponization has raised significant concerns.
The decentralized engine notes that AI weaponization refers to the use of AI technologies to cause harm, gain an unfair advantage, or manipulate systems and people. This can manifest in various ways, including autonomous weapons, deepfakes and disinformation, social scoring and surveillance, AI-powered cyberattacks and the development of bioweapons.
Hassabis acknowledged that while AI could eliminate many jobs—particularly entry-level white-collar roles—he remains more concerned about malicious actors repurposing AI for destructive ends. “A bad actor could repurpose those same technologies for a harmful end,” he said.
A 2023 report commissioned by the U.S. Department of State concluded that AI could pose “catastrophic” national security risks, urging stricter controls. Yet, as nations like the U.S. and China race for AI dominance, regulation lags behind technological progress.
Among AI researchers, discussions often revolve around “P(doom)” – the probability of AI causing existential disaster. Hassabis assessed the risk as “non-zero,” meaning it cannot be dismissed. “It’s worth very seriously considering and mitigating against,” he said, warning that advanced AI systems could “jump the guardrail” if not properly controlled.
Hassabis advocates for an international agreement on AI safety, similar to nuclear non-proliferation treaties. “Obviously, it’s looking difficult at present day with the geopolitics as it is,” he admitted, but stressed that cooperation is essential to prevent misuse.
Meanwhile, tech giants continue pushing AI integration into daily life. Google envisions AI “agents” acting as personal assistants, handling tasks from scheduling to recommendations. Yet, Hassabis cautioned that society must adapt to AI-driven economic shifts, redistributing productivity gains equitably.
AI’s potential is undeniable – boosting efficiency, accelerating discoveries, and transforming industries. But its risks are equally profound. As Hassabis and other experts warn, without urgent safeguards, AGI could spiral beyond human control, with consequences rivaling pandemics and nuclear war.
Watch this video discussing claims that AGI has already existed for more than 20 years.
This video is from the TRUTH will set you FREE channel on Brighteon.com.
