Energy officials say their agency can lead the federal approach to national AI R&D efforts

David Turk, deputy secretary of the Department of Energy, testifies during a Senate Energy Committee hearing Thursday. Drew Angerer / GETTY IMAGES

Agency witnesses testifying at a Senate hearing spoke to Energy’s posture in addressing the “grand challenge” of safely developing and deploying artificial intelligence.

As artificial intelligence systems continue to outpace federal regulation, Department of Energy officials testified before the Senate that their agency is best positioned to thread the needle between the opportunities AI software presents and the drawbacks of a still-emerging technology.

Experts testifying before the Senate Committee on Energy and Natural Resources on Thursday touched on familiar points in the AI and emerging tech policy discourse: competing with China while protecting intellectual property, and fostering beneficial use cases such as vaccine development, materials science research, electrical grid regulation and natural disaster risk evaluation.

David Turk, deputy secretary at the Department of Energy, spoke to his agency’s posture in funding and overseeing research in emerging sciences like AI, noting in particular its existing computing infrastructure, which can support the high-volume data processing needed to train and run advanced AI models.

“DOE can play an incredibly important role here, including developing methods for assessing and red-teaming AI models to identify and mitigate the risks presented by these cutting-edge AI systems that are only developing and improving incredibly quickly, over weeks and months ahead,” he said in opening remarks. 

He and fellow witness Rick Stevens, the associate laboratory director for Computing, Environment and Life Sciences at Argonne National Laboratory, described the national labs in the U.S. as a key asset in furthering AI research.

“We've got to remember our national labs don't just work for the Department of Energy,” Turk said. “They work for all the other agencies, and a lot of other agencies already have a lot of programs including in the biodefense and biotech area.”

Beyond what the national laboratories can contribute to the nation’s bid to harness new AI systems for public benefit, Stevens, who called it “imperative” for the U.S. to lead in scientific and national security applications, highlighted two critical steps needed to better understand, and therefore mitigate, the risks posed by AI system capabilities. The first is assessing risks in existing models at scale, a painstaking process that could require other AI systems to assist human evaluators.

“There are over 100 large language models in circulation in China,” Stevens said. “There are more than 1,000 in circulation in the U.S. A manual process for evaluating that is not going to scale. So we're going to have to build capabilities using the kind of supercomputers we have — and even additional AI systems — to assess other AI systems, so we can say ‘this model is safe, it doesn't know how to build a pandemic or won't help students do something risky.’”

The other focus area he cited is aligning large language models with human-centric values. This imperative was already outlined in both the AI Risk Management Framework produced by the National Institute of Standards and Technology and the White House’s Blueprint for an AI Bill of Rights, but Stevens said that there is still much to be learned in developing an appropriate rubric to measure AI safety.

“That's a fundamental research task,” he said. “It's not something that we can just snap our fingers and say, ‘we know how to do it.’ We don't know how to do it…that's one of the goals that we'd have to have in a research program like this. So we need scale, the ability to assess and evaluate risk in current models and future models and we need fundamental R&D on alignment and AI safety.”