
AI godfather warns humanity risks extinction within 10 years from hyperintelligent machines with their own ‘preservation goals’

The so-called “godfather of AI”, Yoshua Bengio, claims tech companies racing for AI dominance could be bringing us closer to our own extinction through the creation of machines with ‘preservation goals’ of their own. 

Bengio, a professor at the Université de Montréal known for his foundational work on deep learning, has warned for years about the threats posed by hyperintelligent AI, but development has continued at a rapid pace despite his warnings. In the past six months, OpenAI, Anthropic, Elon Musk’s xAI, and Google have all released new models or upgrades as they try to win the AI race. OpenAI CEO Sam Altman has even predicted AI will surpass human intelligence by the end of the decade, while other tech leaders have said that day could come even sooner.

Yet Bengio argues this rapid development is itself a potential threat.

“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” Bengio told the Wall Street Journal.

Because they are trained on human language and behavior, these advanced models could potentially persuade and even manipulate humans to achieve their goals. Yet those goals may not always align with human goals, Bengio said.

“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he claimed. 

Call for AI safety

Several examples over the past few years show AI can convince humans, even those with no history of mental illness, to believe things that are not real. On the flip side, some evidence exists that AI can itself be persuaded, using techniques designed to influence humans, into giving responses it would normally be prohibited from giving.

For Bengio, all of this adds up to more proof that independent third parties need to take a closer look at AI companies’ safety methodologies. In June, he launched the nonprofit LawZero with $30 million in funding to create a safe, “non-agentic” AI that can help ensure the safety of other systems created by big tech companies.

Without such oversight, Bengio predicts we could start seeing major risks from AI models within five to 10 years, though he cautioned that humanity should prepare in case those risks crop up earlier than expected.

“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.
