The Biggest Threats AI Poses to Humanity: A Warning from the "Godfather of AI"

August 25, 2025

Geoffrey Hinton, known as the "Godfather of AI," left Google in 2023 so that he could speak openly about the risks of the AI he helped create. In a recent interview with Steven Bartlett on The Diary of a CEO podcast, Hinton explained that his "main mission now is to warn people how dangerous AI could be." His concerns range from the immediate to the existential, from misuse of AI by bad actors to the potential extinction of humanity itself.

Hinton divides AI threats into two fundamental categories: "There are the risks that come from people misusing AI — and that's most of the risks and all of the short term risks. And then there are risks that come from AI getting super smart and deciding it doesn't need us."

The Five Most Dangerous Near-Term Threats

Threat #1. Cyber Attacks on Steroids

Between 2023 and 2024, cyber attacks "increased by about a factor of 12, 1,200%. And that's probably because these large language models make it much easier to do phishing attacks." AI systems can now clone voices and create sophisticated deepfakes, making scams virtually indistinguishable from legitimate communications. Unlike human hackers, "AI is very patient, so they can go through 100 million lines of code looking for known ways of attacking them."

Threat #2. AI-Designed Biological Weapons

Perhaps the most chilling immediate threat is AI's ability to create deadly viruses. "It just requires one crazy guy with a grudge...you can now create new viruses relatively cheaply using AI. And you don't need to be a very skilled molecular biologist to do it," Hinton warns. "For a few million dollars, they might be able to design a whole bunch of viruses." A super-intelligent AI seeking to eliminate humans could "make a virus that is very contagious, very lethal, and very slow, everybody would have it before they realized what was happening."

Threat #3. Election Manipulation and Propaganda

AI could enable unprecedented election interference through targeted political advertisements. "If you wanted to use AI to corrupt elections, a very effective thing is to be able to do targeted political advertisements where you know a lot about the person." With comprehensive data on individuals, AI systems can craft personalized messages to manipulate voting behavior or discourage participation entirely.

Threat #4. Lethal Autonomous Weapons

The development of weapons that can kill without human intervention poses a dual threat. Beyond the obvious danger of malfunction, "it's going to make big countries invade small countries more often" because "if instead of bodies coming back in bags, it was dead robots, there'd be much less protest." This dramatically reduces the friction and political cost of warfare.

Threat #5. Echo Chambers and Social Division

AI algorithms are already fragmenting society by "showing people things that will make them indignant" to maximize engagement and ad revenue. "We don't have a shared reality anymore," Hinton observes, noting the stark divide between people who consume different news sources. Algorithmic manipulation, he argues, is "causing that. If they had a policy of showing you balanced things, they wouldn't get so many clicks and they wouldn't be able to sell so many advertisements."

Mass Unemployment: The Inevitable Economic Shock

Beyond these immediate threats, Hinton warns of massive job displacement. "I think for mundane intellectual labor, AI is just going to replace everybody." Unlike previous technological revolutions, "this is a very different kind of technology. If it can do all mundane human intellectual labor, then what new jobs is it going to create?" His advice? "Train to be a plumber" because "it's going to be a long time before [AI] is as good at physical manipulation as us."

The Race for AI Supremacy: Safety Takes a Backseat

Hinton's concerns are amplified by the pace at which AI is being developed for profit, without proper attention to the risks it poses to humanity. "These big [AI] companies, they're legally required to try and maximize profits, and that's not what you want from the people developing this stuff." He suspects that when OpenAI CEO Sam Altman shifted from warning about AI risks to downplaying them, "that's not driven by seeking after the truth, that's driven by seeking after money."

Recent safety assessments confirm these concerns. Multiple studies have found that major AI companies show "unacceptable" risk-management practices and a "striking lack of commitment to many areas of safety." Only three of seven firms (Anthropic, OpenAI, and Google DeepMind) report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism, and every company scored particularly badly on its existential safety strategy.

Corporate Leaders Double Down on Speed

The tech industry's response to safety concerns has been to accelerate, not slow down. At Meta, the Fundamental Artificial Intelligence Research unit has been deprioritized in favor of Meta GenAI, according to former employees. Google co-founder Sergey Brin, in a February memo, urged AI employees to "turbocharge" their efforts and stop "building nanny products."

OpenAI's Safety Rollbacks

Perhaps most concerning is OpenAI's apparent retreat from safety commitments. Ilya Sutskever, the chief scientist who was instrumental in creating ChatGPT, left the company in 2024 over safety disagreements. "I think he left because he had safety concerns," Hinton confirmed. The company had previously committed significant computing resources to safety research but "then it reduced that fraction." The risks of even today's AI—by the admission of many top companies themselves—could include AI helping bad actors carry out cyberattacks or create bioweapons. Yet these same companies are racing ahead with minimal oversight.

Google's Transparency Failures

Google has also shown troubling signs of prioritizing speed over safety. Its latest Gemini 2.5 Pro model shipped without a key safety report, in apparent violation of promises the company made to the U.S. government and at international summits. This reflects a concerning pattern in which companies make public commitments to transparency but fail to follow through.

Microsoft Disbands Entire AI Ethics Team

In March 2023, Microsoft disbanded its entire "ethics and society" AI team as part of broader layoffs affecting 10,000 employees, a move that coincided with the company's multibillion-dollar investment in OpenAI. The seven-person team had been "responsible for ensuring Microsoft's responsible AI principles are actually reflected in the design of products that ship" and had been working to "identify risks posed by Microsoft's integration of OpenAI's technology." The timing raised serious questions about Microsoft's commitment to ethical AI development, as the company prioritized speed to market over internal safety oversight in its race to challenge Google's search dominance.

The Existential Question

The ultimate threat Hinton warns about is super-intelligence itself. "If you want to know what life's like when you're not the apex intelligence, ask a chicken." He estimates superintelligence could arrive "in like 20 years or even less." The fundamental challenge is ensuring these systems never want to replace us: "There's no way we're going to prevent it getting rid of us if it wants to. We're not used to thinking about things smarter than us."

His analogy is stark: "Suppose you have a nice little tiger cub... you better be sure that when it grows up, it never wants to kill you, because if it ever wanted to kill you, you'd be dead in a few seconds." When asked if the AI we have now is the tiger cub, Hinton's response was unequivocal: "Yep. And it's growing up."

A Call for Urgent Action

Despite the grim outlook, Hinton maintains that action is still possible. "There's still a chance that we can figure out how to develop AI that won't want to take over from us. And because there's a chance, we should put enormous resources into trying to figure that out, because if we don't, it's going to take over."

The window for implementing proper safeguards is rapidly closing. As Hinton warns, "we have to face the possibility that unless we do something soon, we're near the end." The question isn't whether AI will transform our world—it's whether we'll survive that transformation.

Sources:

  1. Hinton, Geoffrey. Interview with Steven Bartlett. "Godfather of AI: I Tried to Warn Them, But We've Already Lost Control!" The Diary Of A CEO Podcast, June 16, 2025.
  2. Wikipedia. "Geoffrey Hinton." Accessed August 2025.
  3. Future of Life Institute. "2025 AI Safety Index." July 17, 2025.
  4. TIME. "OpenAI, Meta, xAI Have 'Unacceptable' Risk Practices: Studies." July 17, 2025.
  5. IEEE Spectrum. "OpenAI, Google DeepMind, and Meta Get Bad Grades on AI Safety." January 22, 2025.
  6. CNBC. "AI research takes a backseat to profits as Silicon Valley prioritizes products over safety." May 15, 2025.
  7. Altman, Sam. Interview with Chris Anderson. "OpenAI's Sam Altman on the Future of AI, Safety and Power." TED2025, April 11, 2025.
  8. CCN. "Sam Altman: AI Safety Decisions Will Be Unpopular—But Mass Surveillance by 'Authoritarian Governments' the Real Threat." February 10, 2025.
  9. CNN Business. "Meta is shelling out big bucks to get ahead in AI." July 25, 2025.
  10. CNBC. "Google CEO Sundar Pichai: AI is more important than fire, electricity." February 1, 2018.
  11. TechCrunch. "Microsoft lays off an ethical AI team as it doubles down on OpenAI." March 13, 2023.