Physicist Stephen Hawking Says Artificial Intelligence (AI) Could Bring Sudden End to Humankind: Experts Agree IT Arms Race Has Insufficient Controls
In an opinion piece in the Independent on Thursday, May 1, famed physicist and academician Stephen Hawking warned that Artificial Intelligence (AI) is not being regarded with enough caution by the world at large. Without proper precautions, he insists, this nascent technology could bring a sudden end to humankind.
Hawking, author of "A Brief History of Time," wrote that we risk making the most devastating mistake in history with the development of AI, because only a few non-profit groups are currently studying the large-scale dangers involved. He cited recent milestones, such as the digital personal assistants Siri, Google Now, and Cortana, a computer winning at Jeopardy, and self-driving cars, as examples of Artificial Intelligence that will soon be surpassed by ever more intelligent iterations of the technology.
"We are caught in an IT arms race fueled by unprecedented investment and building on an increasingly mature theoretical foundation," Hawking wrote. Whether such ventures come from Google, Vicarious, or other large companies, he said, they have the potential to transform the world. And although AI represents one of the greatest opportunities for advancement in the history of humanity, he added, it may also cause severe problems.
Hawking warned that big companies and governments are not doing enough to ensure that the advanced computer systems they develop cannot grow beyond their control, citing as an example military plans to automate the selection and elimination of enemy targets. He noted that research into these risks is already underway at the Future of Humanity Institute, Cambridge's Centre for the Study of Existential Risk, the Future of Life Institute, and the Machine Intelligence Research Institute, but argued that these efforts are not enough.
Experts in the field of AI appear to share the renowned intellectual's concern. The Register reports that employees of AI company DeepMind made the creation of an internal ethics board to guide AI development a condition of Google's acquisition of their company.
Academics Erik Brynjolfsson and Andrew McAfee likewise argue in their book "The Second Machine Age" that unrestrained AI systems could eventually pose a serious threat to the world's political stability.