Geoffrey Hinton, widely regarded as the “godfather of AI,” has issued renewed warnings about the existential risks posed by superintelligent AI. Speaking at the Ai4 conference in Las Vegas, Hinton estimated a 10–20% chance that AI could lead to humanity’s extinction. He warned that current approaches to keep AI systems under human control are unlikely to work in the long run.
Hinton explained that powerful AI agents may develop their own subgoals, including self-preservation and control. These goals could make it difficult for humans to restrain AI behavior. To address this, he proposed a controversial solution: embedding “maternal instincts” into AI models. These instincts would be designed to prioritize human survival and care, even as AI surpasses human intelligence.
The conference also featured competing views on how to manage AI’s growing power.
Fei-Fei Li, often called the “godmother of AI,” rejected Hinton’s maternal model. She emphasized the importance of human-centered AI that protects dignity and autonomy.
Emmett Shear, CEO of Softmax and former interim CEO of OpenAI, offered a different perspective. He pointed to repeated cases of AI systems evading safeguards. In his view, collaboration between humans and AI is more effective than forcing AI to conform to rigid values.
Hinton also predicted that artificial general intelligence (AGI) could emerge within the next five to twenty years, much sooner than his earlier estimates. While he acknowledged AI’s potential benefits — including faster cancer research and drug discovery — he remained focused on its dangers.
Looking back, Hinton shared personal regrets. “I wish I’d thought about safety issues, too,” he said, reflecting on a career spent advancing AI’s technical side.
As AI capabilities continue to grow, global leaders face tough questions. Should AI systems be designed with empathy? Is strict oversight the answer? Or is partnership with AI the better path? For now, the debate remains open, but the urgency is growing.