Stephen Hawking’s AI Warning: Humanity’s Biggest Event, or Its Last
Physicist Stephen Hawking cautioned that AI could be humanity’s biggest event or its last act if risks are not managed, a warning that continues to shape today’s AI safety and ethics debates.

TL;DR
Stephen Hawking warned that creating artificial intelligence could be humanity’s biggest event, or its last, if risks are not managed. He issued the warning as a physicist already celebrated for his work on black holes and the theoretical emission now known as Hawking radiation.
Context
Stephen Hawking was born on January 8, 1942, in Oxford, England. He studied physics at Oxford and earned a Ph.D. from Cambridge in 1966. After being diagnosed with amyotrophic lateral sclerosis (ALS) in the early 1960s, he continued to publish influential research and became a global science communicator through books such as *A Brief History of Time*. Over his career he was elected a Fellow of the Royal Society, awarded the Copley Medal and the US Presidential Medal of Freedom, and appointed to the Lucasian Professorship of Mathematics at Cambridge, a chair once held by Isaac Newton. His popular lectures and media appearances made complex cosmology accessible to millions, reinforcing his role as a bridge between specialist research and public understanding.
Key Facts
In 1974 Hawking proposed that black holes emit particles, a concept now called Hawking radiation, which links general relativity, thermodynamics and quantum mechanics (a single formula, shown below, ties all three together). Regarding artificial intelligence, he said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” He viewed AI as a tool that could accelerate discovery, medicine and automation, but warned that uncontrolled growth or poor oversight could produce harmful autonomous systems and erode privacy.
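The cross-disciplinary reach of that result can be seen in one expression. The standard Hawking temperature formula below (included here as an illustration; it is not part of Hawking’s AI remarks) gives the temperature of the radiation emitted by a black hole of mass M, and each constant in it comes from a different branch of physics:

```latex
% Hawking temperature T_H of a black hole of mass M.
% \hbar is Planck's reduced constant (quantum mechanics),
% c and G are the speed of light and Newton's gravitational
% constant (general relativity), and k_B is Boltzmann's
% constant (thermodynamics).
T_H = \frac{\hbar c^{3}}{8 \pi G M k_{B}}
```

Because the mass M sits in the denominator, smaller black holes are hotter and evaporate faster, which is why the effect is negligible for stellar-mass black holes but dramatic for hypothetical microscopic ones.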
What It Means
Hawking’s message frames AI as a double‑edged sword requiring deliberate governance. Experts note that realizing its benefits depends on technical safeguards, ethical standards and international cooperation. His warning continues to shape discussions in AI safety labs, policy forums and industry standards bodies aimed at preventing loss of control or misuse. Recent advances in large language models (AI systems that generate text) and in autonomous robotics have renewed focus on the balance he advocated. Policy makers cite concerns ranging from autonomous weapons to surveillance systems when discussing the need for AI oversight.
What to Watch Next
Upcoming AI safety summits and proposed regulations in the United States and Europe will test whether societies can balance innovation with the caution Hawking urged.