What rights do AI companies have in government contracts?

It depends on the acquisition pathway, the contract type and the contract terms.

The Anthropic-Pentagon dispute has drawn significant public attention and generated an equally large amount of misinformation. After Defense Secretary Pete Hegseth issued Anthropic an ultimatum demanding that it allow unrestricted use of its AI models “for all lawful purposes” and the company refused, President Trump directed federal agencies to stop using Anthropic’s products and Hegseth designated the firm a “supply chain risk.”

Hours later, OpenAI announced its own Pentagon deal, claiming it included the same two restrictions Anthropic had been fighting for (no mass domestic surveillance, no fully autonomous weapons) while simultaneously agreeing to the “any lawful use” standard Anthropic rejected.

The public reaction has been chaotic, but most of the commentary, from both sides, reflects a fundamental misunderstanding of how the government buys AI. Commentators are debating whether AI companies should be able to restrict the government’s use of their technology, as if this were a novel question. It is not. Contractors restrict the government’s use of their products all the time. Whether and to what extent they can do so in any particular case depends on three things: the acquisition pathway, the contract type and the negotiated contract terms.

Understanding these variables is essential to evaluating what happened with Anthropic, what OpenAI’s deal accomplishes and what any of this means for the future of AI in the defense space.

How the government buys AI and why it matters

The federal government does not acquire AI through a single, uniform process. It uses multiple acquisition pathways, each of which creates a different allocation of rights and leverage between the government and the contractor. As I have detailed in my article, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement, understanding these pathways is essential to understanding the governance risks that follow from each. Below, I list the most common pathways (there are others, but I won’t attempt to catalog them here).

Commercial acquisition (FAR Part 12)

The most common pathway for federal AI procurement treats these systems as ordinary commercial software. Federal Acquisition Regulation (FAR) Part 12 is designed for the government to acquire goods and services already sold in the commercial marketplace, on commercial terms. The regulation explicitly limits the government’s ability to impose requirements beyond what is customary in the marketplace. Contractors selling commercial AI products are not required to grant the government broader usage rights than those granted to other customers. The government can request expanded rights, but contractors must agree, and this often requires additional consideration.

This means that when the government buys AI commercially, the vendor’s standard terms and conditions, including its acceptable use policies, are the default starting point. Restrictions on use are not some exotic demand by activist AI companies. This is the consequence of the government buying commercial products on commercial terms.

License upgrades and enterprise agreements

Many agencies acquire AI capabilities not through standalone procurements but as add-on features to existing enterprise software agreements, such as Microsoft Copilot or Google Gemini. Because these AI capabilities are offered as upgrades to existing licenses, they fall under the terms of the base agreement. Renegotiating AI-specific terms means renegotiating the entire enterprise deal, which means commercial defaults typically prevail.

GSA Multiple Award Schedule

When agencies order through the GSA Schedule, they inherit whatever terms GSA negotiated at the master agreement level. Downstream ordering agencies have limited authority to modify those baseline terms. If the master agreement includes the vendor’s acceptable use policy, individual agencies generally cannot override those restrictions.

Negotiated procurements (FAR Part 15)

FAR Part 15 gives agencies the broadest latitude to negotiate tailored terms, including provisions regarding usage rights, data rights, transparency and governance. But this pathway comes with high procedural costs and longer timelines. In practice, agencies often avoid Part 15 for fast-moving AI purchases because it is slower and more process-heavy, and because DOD leadership has emphasized commercial-first, rapid pathways.

Other transactions (OTs)

OTs are non-FAR-based agreements used for research, prototyping and certain production activities. They offer substantially more flexibility than FAR-based contracts. In 2025, the DOD used this pathway to award agreements valued at up to $200 million each to Anthropic, OpenAI, Google and xAI. OTs are exempt from FAR requirements, so the terms are whatever the parties negotiate. An agency can use this flexibility to secure broad usage rights. A contractor can use it to embed restrictions. Either way, the terms are a product of negotiation, not a regulatory default.

What this means in practice

Each of these pathways produces a different set of contractual rights and obligations. The idea that a contractor categorically cannot restrict government use of its products, or that doing so is somehow illegitimate, reflects a fundamental misunderstanding of how government procurement law works. The scope of restrictions is determined by the specific acquisition pathway and what the parties negotiate. None of this is novel or controversial. It is basic procurement law.

What does “any lawful use” mean?

This brings us to the central confusion in the public debate: what does the “any lawful use” standard actually do?

OpenAI has published relevant language from its Pentagon contract. It reads, in part:

The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.

The contract addresses autonomous weapons, surveillance and domestic law enforcement. In the provisions OpenAI has published, the operative language is largely framed by reference to existing legal authorities, including DOD Directive 3000.09, the Fourth Amendment, the Foreign Intelligence Surveillance Act, Executive Order 12333 and the Posse Comitatus Act. The system “shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.” Domestic law enforcement use is permitted only “as permitted by the Posse Comitatus Act and other applicable law.”

Read on its face, the published excerpt does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use. The operative standard is “all lawful purposes,” conditioned on applicable law and related government requirements and protocols.

That does not mean the language is meaningless. There is an important distinction between the scope of a restriction and its enforceability. Restating legal requirements in a contract may not change what the law requires, but it can change remedies if the government’s use violates a contractual commitment. OpenAI generally would not be a proper plaintiff to assert Fourth Amendment rights on behalf of third parties, but it could frame noncompliance as a breach of its own agreement, and OpenAI states it could terminate the contract if the government violates its terms.

Contrast this with what Anthropic sought: explicit exceptions to “lawful use” that would have barred certain uses even if the government viewed them as lawful. Anthropic describes the impasse as turning on two requested exceptions: mass domestic surveillance of Americans and fully autonomous weapons. From the government’s perspective, that approach would effectively place a private contractor in the position of deciding which otherwise lawful uses were off-limits.

The safety stack: where real leverage may exist

The contract language is not the entire story. In a detailed blog post accompanying its announcement, OpenAI described a multi-layered enforcement approach that goes beyond the four corners of the agreement. This is where the analysis gets considerably more interesting from a procurement law perspective.

OpenAI claims three additional sources of leverage:

Cloud-only deployment with architectural control. OpenAI states that this is a cloud-only deployment—models are not provided on edge devices (where they could be used for autonomous lethal weapons). OpenAI retains what it describes as “full discretion” over its safety stack, including the ability to run and update classifiers that monitor use. The company says this deployment architecture enables it to “independently verify” that its red lines are not crossed.

Cleared OpenAI personnel in the loop. The company states that its own security-cleared employees will be involved in the deployment, and that its safety and alignment researchers will “be in the loop and help improve systems over time.”

Termination rights. OpenAI states that, as with any contract, it could terminate the agreement if the government violates the terms (though the scope of any termination right depends on the specific agreement, including notice, cure and dispute resolution provisions that have not been disclosed).

Additionally, OpenAI makes a notable claim about the temporal scope of the contract’s legal references. According to the company, the contract “explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.” If the full contract language supports this claim, it would constitute a genuine contractual restriction that goes beyond restating current law. Even if Congress amended FISA or DOD revised Directive 3000.09 to permit broader use, the contract could still bind the government to earlier standards.

The published excerpt offers only partial support for that reading. The reference to DOD Directive 3000.09 is versioned (“dtd 25 January 2023”), which suggests the contract is keyed to a specific iteration of that directive, but the point is not conclusive without seeing the incorporation language. Whether the agreement “locks in” today’s standards therefore depends on contract language OpenAI has not published: for example, whether it incorporates these authorities “as in effect on” a particular date or instead tracks them as amended over time.

These claims deserve careful scrutiny because they reveal something important about where the real contractual leverage lies in AI procurement, and it may not be where most people expect.

If OpenAI retains full discretion over its safety stack and deploys only on its own cloud infrastructure, the practical constraints on government use are architectural, not merely contractual. The government can use the system for “any lawful purpose,” but only to the extent OpenAI’s classifiers and safety systems permit. If a classifier blocks a particular use, the question is whether the government has a contractual right to demand its removal. OpenAI asserts that it retains “full discretion” over those systems.

This creates tension at the heart of the agreement. The contract permits use “for all lawful purposes,” subject to “operational requirements” and “well-established safety and oversight protocols.” OpenAI says it retains full discretion over the safety stack it runs in a cloud-only deployment. If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework—language that has not been made public.

It is also worth noting the irony. The Pentagon’s objection to Anthropic was, at its core, that a private company should not be able to constrain the military’s use of AI technology. Yet the OpenAI arrangement appears to give the company significant operational control over how the technology functions in practice through infrastructure, personnel and classifiers that OpenAI can update unilaterally. Whether this amounts to the kind of constraint the Pentagon sought to avoid with Anthropic depends entirely on the terms governing OpenAI’s discretion and whether the government retains any contractual right to override the safety stack for lawful uses.

Why the acquisition pathway matters for what comes next

The public debate has focused almost entirely on whether AI companies should have the right to impose ethical restrictions on the military. That is a legitimate policy question. But it is the wrong frame for understanding what happened here, and it obscures the procurement realities that will shape AI governance going forward.

As I argue in Buying Blind, the government is acquiring AI technologies through pathways that systematically limit its ability to negotiate protective terms: not just protections for the company, but protections for the government itself. The same commercial acquisition methods that make it difficult for companies like Anthropic to enforce use restrictions also make it difficult for the government to secure adequate transparency requirements, audit rights, data protections and safeguards against contractor lock-in. The emphasis on speed and commercial terms is a double-edged sword: it limits both parties’ ability to impose terms that deviate from commercial defaults.

The Anthropic dispute has focused attention on one direction of this dynamic: companies restricting the government. But the more consequential governance failure runs in the opposite direction: the government’s inability to secure the protections it needs when buying AI through commercial pathways that were not designed for technologies this complex and this consequential.

The government’s punitive response to Anthropic compounds this problem. If the consequence of negotiating aggressively with the government is being designated a supply chain risk—a mechanism more commonly associated with foreign adversary threats—companies have strong incentives to simply accept whatever terms the government demands. OpenAI itself said it does not believe the supply chain risk designation should have been applied to Anthropic. That may lead to faster procurement, but it will yield worse governance outcomes. Companies that are afraid to negotiate are companies that will not push back when the government’s proposed terms are inadequate for either party.

The question the public debate should be asking is not whether AI companies have the right to tell the Pentagon what to do. They do, within limits that depend on the acquisition pathway, the contract type and the terms the parties negotiate. The question is whether the government’s current approach to AI procurement produces contracts that adequately protect the public interest. Based on the evidence, the answer is no. The Anthropic-Pentagon dispute, for all the attention it has received, is a symptom of that deeper problem, not its cause.

This post first appeared on Jessica Tillipman's personal website.

Jessica Tillipman is the Associate Dean for Government Procurement Law Studies and Government Contracts Advisory Council Distinguished Professorial Lecturer in Government Contracts Law, Practice & Policy at the George Washington University Law School.