Lawmaker pushes AI companies for more safety and security commitments

Sen. Mark Warner prepares to chair a hearing of the Senate Intelligence Committee in July 2023. Bill Clark/CQ-Roll Call, Inc via Getty Images

Companies need to adopt practices that prevent abuses such as social-scoring and real-time facial recognition, according to Sen. Mark Warner.

Sen. Mark Warner, D-Va., chair of the Senate Select Committee on Intelligence, wants artificial intelligence companies to extend their existing voluntary pledges to all of their systems and to make additional commitments addressing high-risk areas like real-time facial recognition.

The push comes after the White House last month announced a series of voluntary commitments on AI safety, security and transparency from Inflection AI, Anthropic, Google, Meta, Microsoft, OpenAI and Amazon, and it echoes requests Warner made to the White House at that time.

Those commitments included the development of a watermarking system for AI-generated content; public reporting on systems’ limitations and capabilities; and pre-release, independent, internal and external security testing.

Warner outlined his concerns about AI security and safety earlier this year in a series of letters to company executives. Even with the recently announced commitments, Warner asserted that companies need to extend such commitments to their full range of AI products, including “less capable models” and open source models already released to the public.

The senator also wants more “lightweight consumer-facing commitments to prevent the most serious forms of abuse,” he wrote.

Warner specifically pointed to “development practices, licensing terms and post-deployment monitoring practices that prevent non-consensual intimate image generation (including child sexual abuse material), social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.”

He also penned a series of letters to companies working in AI that have not made any voluntary commitments — Scale AI, Apple, Stability AI, Midjourney, Databricks and Mistral AI.

“I believe that there are, and will continue to be, opportunities for companies like yours to make additional commitments to users around safety and security of your technologies as well as commitments around trustworthiness and bias,” Warner wrote, before asking the companies whether they are following the processes outlined in the voluntary commitments and whether they are willing to publicly commit to doing so.

Warner’s ongoing AI work comes as the administration and Congress continue to grapple with how best to address the quickly evolving technology and risks associated with it.

“Responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks and promote secure development practices in this burgeoning and consequential industry,” Warner wrote in the letters sent to the CEOs of the seven companies that have already made voluntary commitments.

The White House said in July that an executive order on AI is forthcoming, and the Office of Management and Budget is working on guidance for the government’s use of AI. Still, President Joe Biden himself has acknowledged the need for legislation.

Suresh Venkatasubramanian — a computer science and data science professor at Brown University who co-authored the White House Blueprint for an AI Bill of Rights while at the Office of Science and Technology Policy — told Nextgov/FCW via email that “voluntary commitments are helpful if they help shift the culture in the tech industry around best practices for the governance, development, evaluation, deployment and use of systems.”

Still, “they must extend beyond a narrow safety and security framing… to also encompass broad societal concerns such as civil rights and civil liberties, opportunities for advancement and access to vital services,” he continued. “And they must extend beyond a few high profile tech companies to any entities building systems that impact people's lives.”

Ultimately, Venkatasubramanian wrote, the “broader governance infrastructure” needed will include “legislation, executive action, regulatory actions, as well as new technical innovations.”