White House releases regulatory vision for AI


The framework includes seven AI policy recommendations for Congress that attempt to balance consumer protections with advancing AI development.

The White House unveiled a new artificial intelligence policy framework on Friday, built around seven guiding pillars that support the administration’s policy recommendations for Congress.

The National Policy Framework for AI’s seven pillars are Protecting Children and Empowering Parents; Safeguarding and Strengthening American Communities; Respecting Intellectual Property Rights and Creators; Preventing Censorship and Protecting Free Speech; Enabling Innovation and Ensuring American AI Dominance; Educating Americans and Developing an AI-ready Workforce; and Establishing a Federal Policy Framework Preempting Cumbersome State Laws.

Policy details within each pillar aim to balance citizen protections — such as eliminating child user data collection, augmenting parental safety controls, ensuring ratepayers aren’t burdened with high utility costs and providing tax breaks for AI adoption in small businesses — with ensuring the U.S. isn’t hindered in advancing AI technologies. For example, while ratepayers — those who pay fees to utility providers — are protected, the framework also calls for permitting reform to scale up data center construction.

The intense energy demands of AI-supporting data centers can increase electricity prices for nearby residents unless offsets are made. President Donald Trump announced in his Feb. 25 State of the Union address that he had established an agreement with major tech companies to absorb the surges in energy costs themselves.

Copyright law and creator protections work to thread a similar needle. While the framework recommends that Congress enact legislation protecting creators’ voices and likenesses, the administration also acknowledged its core belief that AI scraping the internet for copyrighted material is not a violation of U.S. copyright law. The framework further says the U.S. court system has the final say on whether an AI developer has violated fair use laws.

Regarding copyright law, the administration also suggests that Congress develop licensing laws or a collective-rights mechanism that lets rights holders negotiate compensation when their likeness or content is used in AI.

The framework also features a return of White House calls for state AI law preemption. The Trump administration asks Congress to protect the U.S. AI advantage by rejecting state laws that are deemed “undue burdens.”

Simultaneously, the framework notes that state law preemption will not apply to how states want to use AI technology or in areas where states are uniquely suited to govern certain subject matters. 

“Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance,” the framework reads. 

The final three pillars — workforce development, ensuring innovation and free speech protection — are mired in controversy. The framework says the broad U.S. workforce needs to attain a level of AI fluency, and asks Congress to support non-regulatory methods to expand existing education programs that foster AI education.

Experimental and isolated sandboxes are also highlighted as a priority area in the framework, which recommends that Congress establish regulatory sandboxes for AI applications to help spur software testing and development.

It also supports Congress providing resources that allow industry and academic partners access to federal datasets for further AI model training. Notably, the framework says Congress should not create any new federal rulemaking body to regulate AI and instead maintain a “sector-specific” approach with existing regulatory bodies.

In keeping with protecting free speech against AI products deemed “biased,” the framework also recommends Congress prevent federal agencies from “coercing technology providers” into operating their technology according to “ideological agendas.”

Such a recommendation follows Trump’s sweeping Feb. 27 demand that all federal agencies remove Anthropic products from their operations, after the AI company refused to allow the Pentagon to use Claude for missions involving mass surveillance of Americans or to guide autonomous weapons.

The framework also asks Congress to provide recourse for Americans to report censorship activity on or within AI platforms. 

The same morning the framework was released, Republican members of the House swiftly vocalized their support, with House Speaker Mike Johnson, R-La., and Reps. Steve Scalise, R-La., Brian Babin, R-Texas, Brett Guthrie, R-Ky., and Jim Jordan, R-Ohio, issuing a statement pledging to follow the framework’s suggestions. 

“House Republicans look forward to working across the aisle to enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader, and provides important protections for American families,” the press release reads.

Reactions to the framework outside the federal government were more mixed. The Business Software Alliance said it “welcomes” the framework, underscoring its emphasis on developing an AI-ready workforce, liberating select data for AI training and advancing AI adoption. 

“The Framework helps catalyze a needed conversation in Washington, grounded in the reality that building trust in AI and enabling its broad adoption requires clear, workable national rules for the United States,” the group said.

Online business industry group NetChoice also said it supports the adoption of the framework. 

“The Trump administration understands that it was a light-touch regulatory environment, not 50 different confusing and conflicting regulatory regimes, that enabled the internet revolution and that innovation and investment in winning the AI future for America will require a similar approach,” Patrick Hedger, NetChoice director of policy, said.

By contrast, AI watchdog organizations like Americans for Responsible Innovation argued the framework shields AI developers from liability.

“After witnessing the harmful impact of the tech industry’s move-fast-break-things mantra during the rise of social media platforms, the public wants safeguards now,” ARI President Brad Carson said. “What’s most disturbing is that the framework recommends both banning state laws on AI and urges Congress not to create new ‘open-ended’ liability for the AI industry when it comes to child harms. For the AI industry, that means open season on the American public.”