An open letter from hundreds of technology leaders, including Tesla chief executive Elon Musk and Apple co-founder Steve Wozniak, calling for a pause on artificial intelligence development has prompted widespread debate among researchers.
The request to pause AI research comes after ChatGPT, an AI language tool built by OpenAI, a firm backed by Microsoft, earned worldwide recognition in recent months as knowledge workers began using the system to complete tasks such as writing emails and computer code in a matter of seconds. The breakthrough sparked a race among technology firms to integrate mass-market AI systems into their search engines and software products.
Musk and Wozniak endorsed a letter from the Future of Life Institute warning that recent AI developments could reshape the future of human civilization, particularly through widespread unemployment and the flooding of information channels with unreliable content. The letter called on AI labs to pause the training of powerful AI systems for at least six months, with governments stepping in to enforce a moratorium if necessary.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” the document said. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Other technology experts said that the concerns in the letter were overblown. Daniel Castro, the director of the Center for Data Innovation at the Information Technology and Innovation Foundation, released a statement characterizing the letter’s claims as “outrageous and unfounded,” asserting that American firms must remain competitive with ambitious Chinese rivals.
“The sky is not falling, and Skynet is not on the horizon,” Castro contended. “However, AI advances have the potential to create enormous social and economic benefits across the economy and society. Rather than hitting pause on the technology, and allowing China to gain an advantage, the United States and its allies should continue to pursue advances in all branches of AI research.”
Gary Marcus, a professor emeritus at New York University, signed the letter but clarified that while the document is not “perfect,” its general call to slow AI development is correct. “The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize,” he said.
Eliezer Yudkowsky, a decision theorist and lead researcher at the Machine Intelligence Research Institute, meanwhile wrote that the open letter from the Future of Life Institute fell far short of acknowledging there is a salient risk that “literally everyone on Earth will die” as a result of a major company developing a superhuman AI system.
“To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails,” Yudkowsky wrote in an opinion piece for Time Magazine. “Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
Western governments, as the open letter suggested, have begun developing standards for ethical AI research. The United States introduced a framework for the military use of AI last month, while the United Kingdom released a paper arguing that AI development policy should prioritize both innovation and public trust.