Trade and industry groups warn of risks in GSA’s draft AI procurement guidance
The guidance would establish government rights to use artificial intelligence tools in “any lawful” context, a stipulation that has drawn concern from industry advocates.
Industry advocacy organizations are warning against the language the General Services Administration has proposed to govern the acquisition and use of artificial intelligence in federal operations, saying it could open the door to misuse.
The proposed clause changes, released by the GSA in early March, update federal AI acquisition stipulations. Some of the most notable changes include the government’s right to total input data ownership and government ownership of any custom developments made to a given AI model.
The updated procurement rules also notably grant the government permission to use an AI system for “any lawful government purpose.” Such phrasing follows the explosive dispute between Anthropic and the Department of Defense earlier this year, after the AI company refused to allow its products to be used for surveillance of Americans or for lethal autonomous weaponry.
The ongoing dispute, which resulted in a supply chain risk designation and lawsuit filed by Anthropic, has caused turmoil in the larger vendor and AI contracting community in Washington.
Other proposed changes in the draft touch on contractor responsibilities and roles, incident reporting and the prohibition of using any foreign-made AI products in government workflows.
Although the draft includes provisions requiring developers to provide a means for the government to implement human oversight or intervention, it also sets an “eyes off” rule for data handling. That directive would restrict human review of government data except as “strictly necessary” to give the government AI system access or to handle incident reporting.
The language in GSA’s proposed contract terms has raised warning flags for industry organizations.
The nonprofit Americans for Responsible Innovation said in comments shared exclusively with Nextgov/FCW that it has “grave concerns” regarding the proposed changes, providing instances where procuring AI tools for “lawful use” in seemingly innocuous functions could both clash with AI vendor terms of service and violate civil liberties.
An AI program could, for example, inadvertently psychologically profile benefits applicants, conduct surveillance pattern analysis or enable employee loyalty screening — hypothetical good-faith uses of AI tools that would proceed without the guardrails and oversight GSA’s language proposes to remove.
“As evidenced by the recent dispute between Anthropic and the U.S. Department of War, there are myriad government uses that are clearly legal but conflict with AI industry terms of service and the public’s expectations as to how emerging technologies will be responsibly integrated into state functions,” ARI said, concluding that the pace of AI adoption heightens the need to safeguard against potential harms within federal operations.
“A policy of enabling ‘all lawful use’ strips away one of the last public safeguards we have against tyranny,” the letter concludes. “We urge you to reconsider adoption of these proposed changes for our federal acquisitions system.”
The Business Software Alliance echoed these worries, saying that the proposed acquisition language runs the risk of accelerating harms in the government workflows it aims to improve.
“We are deeply concerned… that the proposed clauses significantly inhibit and, in some cases, potentially eliminate, the US government’s ability to leverage the benefits of AI services and commercial advances in AI, impeding mission-driven efforts to prevent fraud and abuse, secure federal data, assets, and systems, and deliver critical citizen services. This, in turn, will produce several cascading negative effects,” BSA wrote.
The alliance warned that the draft language could imperil broader AI innovation and modernization goals central to the Trump administration’s tech policy; diminish contractors’ rights to their intellectual property; create burdensome implementation challenges; and increase the risk of liability for contractors under the False Claims Act.
“Each of these consequences, in isolation, would decrease competition and increase the cost to the US government to procure AI products; in combination, they risk materially compromising the US government’s efforts to not only adopt AI but also accelerate AI innovation in the commercial marketplace,” BSA wrote.
The trade group — whose members include OpenAI, Microsoft, Palo Alto Networks and IBM — offered an array of mitigation measures. It asks that GSA clarify the prohibitions on foreign AI systems and components; expand contractors’ intellectual property rights; improve implementation requirements; streamline change management procedures; and align with existing software acquisition frameworks, among others.
“BSA urges GSA to make significant changes to the proposed contractual terms for AI procurement, which impede the federal government's IT modernization efforts and ability to adopt low-cost, AI commercial solutions from a competitive marketplace.”