Nuclear Regulatory Commission to examine more nuanced requirements for AI


A regulatory gap analysis found the NRC’s current federal regulations “are fairly flexible enough to adapt to artificial intelligence,” according to a data scientist with the agency.

The nation’s nuclear energy regulator is looking at developing more concise requirements around the deployment of artificial intelligence, even as a review found that the agency’s current policies can likely be extended to cover many new uses of AI, according to a senior data scientist at the U.S. Nuclear Regulatory Commission.

During a Monday event co-hosted by the Stimson Center and the Vienna Center for Disarmament and Non-Proliferation, NRC Senior Data Scientist Matt Dennis — who leads efforts to implement NRC’s AI strategic plan — said the agency is “sort of in a change mode right now” when it comes to embracing emerging capabilities. 

As Dennis noted, the NRC released an updated mission statement on Jan. 24 that stressed the importance of safe and secure uses of nuclear energy technologies, although it did not explicitly mention AI.

That update came after the agency took preemptive steps in 2023 to roll out its strategic plan around AI, which included conducting a regulatory gap analysis to determine how NRC’s existing policies would be affected by novel uses of AI capabilities. 

Dennis said the review mandated by the plan was presented to the NRC’s Advisory Committee on Reactor Safeguards in November and that “the overall conclusion was that our regulations from the actual code of federal regulations are fairly flexible enough to adapt to artificial intelligence.”

He warned, however, that “it gets a little more nuanced” when it comes to verifying “the explainability, the interpretability [and] all those -ilities of AI” to meet the actual regulatory standards.

Dennis said the agency is currently undertaking an effort to determine if it needs to address “targeted areas” of AI use in the nuclear sector and “the requirements or things that need to be shown to prove AI is safe and secure enough.”

He said part of this also includes determining whether the NRC needs to develop one centralized regulatory guide that can inform AI applications for a variety of stakeholders, from agency officials conducting oversight to NRC license applicants. 

Monday’s event — which also included representatives from the Canadian Nuclear Safety Commission and the U.K. Office for Nuclear Regulation, in addition to NRC’s Dennis — came after the three international agencies published a report in September 2024 that outlined “high-level principles” around the deployment of AI capabilities in the nuclear sector. 

Although the document did not lay out specific regulatory steps, it said AI could potentially transform the nuclear industry if regulators and providers focus on “thoughtfully addressing security challenges in AI systems, maintaining a perennial focus on data and being mindful of consensus standards.”

While the trilateral report detailed areas of concern that the three nations should focus on when deploying AI capabilities in their nuclear systems, Dennis noted that “there's not a huge amount of consensus standards” at this time. But he said working to identify specific priorities and benefits can help drive the eventual development of these guidelines. 

“Considering human organizational factors and using the safety and engineering principles that we currently have today — that we use for nuclear power plants and to license those — are a perfect starting point for anyone who is considering using an AI application in the near term for a regulated function,” Dennis said.