Technologists, Experts Call for Halt on Advanced AI Development Over ‘Risks to Society’


High-profile signatories include Tesla and SpaceX chief Elon Musk and Apple co-founder Steve Wozniak.

More than 1,000 artificial intelligence experts, industry chiefs and technologists have signed an open letter calling for a six-month pause on the development of more advanced AI systems, citing “profound risks to society and humanity.”

The open letter, authored by the non-profit Future of Life Institute, comes as OpenAI launches GPT-4, the successor to its GPT-3.5 model, which powers the ChatGPT chatbot that generated headlines for its ability to perform human-like tasks, such as writing reports and engaging in realistic conversation.

The technology has also caused controversy, from its potential to eliminate certain jobs to its ability to produce powerful disinformation in the hands of bad actors. The letter calls on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter states.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter continues. “Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter has accumulated the signatures of numerous high-profile technologists, scientists and AI and policy experts. Among them: Elon Musk, chief executive officer of SpaceX, Tesla and Twitter; Steve Wozniak, co-founder of Apple; Max Tegmark, professor of physics at the MIT Center for Artificial Intelligence & Fundamental Interactions; and Lawrence M. Krauss, president of The Origins Project Foundation.

The government is not yet a major consumer of GPT-4-like AI systems, though officials at the Defense Department have promoted the potential benefits of using similar technologies in recent months. At the state and local levels of government, officials have suggested these technologies could work in areas that don’t require human subjectivity, like transcribing and summarizing constituent calls. At the federal level, the National Institute of Standards and Technology recently released its Artificial Intelligence Risk Management Framework, which seeks to guide agencies and organizations toward developing “low-risk AI systems.”

However, NIST officials stressed earlier this month that creating responsible AI presents “major technical challenges” for technologists.

“When it comes [to] measuring technology, from the viewpoint of ‘Is it working for everybody?’ ‘[Are] AI systems benefiting all people in [an] equitable, responsible, fair way?’, there [are] major technical challenges,” Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory, said during a March 6 panel discussion.

“It's important, when this type of testing is being done, that the impact[ed] community [is] identified so that the magnitude of the impact can also be measured,” she said. “It cannot be overemphasized, the importance of doing the right verification and validation before putting these types of products out. When they are out, they are out with all of their risks there.”