Major corporate players within the artificial intelligence development space will meet at the White House ahead of more federal guidance and investment in the technology.
The Biden administration announced new artificial intelligence funding and policy initiatives Thursday, furthering its efforts to regulate the field in a way that protects civil rights while continuing to foster innovation.
Thursday, chief executives from tech companies with major AI investments—namely Alphabet, Anthropic, Microsoft and OpenAI—will join Vice President Kamala Harris and other senior Biden administration officials to discuss the threats posed by rapidly advancing AI technologies, and how the public and private sectors can work in tandem to mitigate risks.
Along with this discussion, officials will announce a series of new actions that will contribute to the existing guidance outlined in the AI Risk Management Framework and AI Bill of Rights—launched earlier this year and at the end of last year, respectively—and include a $140 million investment toward launching seven new national AI research institutes.
Private sector companies are at the forefront of creating these technologies. Their inclusion in Thursday’s meeting reflects the Biden administration’s longstanding goal of better incorporating private sector engagement into federal government guidance.
“This new generation of AI is going to call on…responsible behavior from all parties and, you know, clearly there will be things that we are doing and will continue to do in government, but we do think that these companies have an important responsibility,” a senior administration official said.
The meeting with top tech leadership will focus on reaching agreement on responsible AI use that improves American lives rather than jeopardizing job security and civil liberties. The senior administration official said that, later this year, several of these companies will participate in an independent public evaluation of their AI systems at DEFCON 31, a large hacking convention.
Within the AI Village at the convention, companies will submit their AI system models to be evaluated by “thousands” of community partners and AI experts to grade their alignment with the AI RMF and Bill of Rights.
“These are important new steps to promote responsible innovation and to make sure AI improves people's lives without putting rights and safety at risk,” the official said.
Along with the private sector demonstration slated to occur at DEFCON 31 in August, the government will continue to disseminate guidance later in the year. The official confirmed that in the coming months, the White House’s Office of Management and Budget will issue further policy guidance on the use of AI by the federal government.
OMB will release draft policy guidance focused on ensuring the U.S. leads the world by example in ethical AI system usage. It will establish policies for federal agencies to responsibly procure and leverage AI to better serve the American people.
As generative AI systems can quickly produce content mimicking human responses and create new material such as images (including deepfakes), songs and long-form writing, lawmakers and regulators have been analyzing how to police the use of these algorithms and systems, with a slew of bills introduced in Congress in recent weeks.
“As President Biden has underscored, in order to seize the benefits of AI, we need to start by mitigating its risks,” a senior administration official told reporters ahead of the Thursday announcement.