IARPA Leader Rethinks How to Get the Biggest Bang for Research Funds


An R&D leader is thinking of new ways to evaluate which projects need resources.

President Donald Trump’s budget blueprint, dubbed a “hard-power budget” by Office of Management and Budget Director Mick Mulvaney, slashes federal research and investment dollars at various agencies, especially at the Energy Department.

The administration plans to eliminate outright the Advanced Research Projects Agency-Energy, an energy research and development unit. And though it’s not mentioned directly in the White House’s preliminary requests, one intelligence community group is bracing for tighter federal research budgets across government.

“We’re very interested in the science of science,” Jason Matheny, Intelligence Advanced Research Projects Activity director, said at a Government Analytics Breakfast event in Washington on Wednesday. He asked audience members for help understanding the impact that investments in areas such as high-performance computing, machine learning and other fields have on technological and economic growth rates.


“How can we disentangle the impact that those research investments have ... compared to baseline growth?” Matheny asked, emphasizing it’s a “hugely important question” as the government contemplates how much funding to provide to the different R&D agencies.
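To make the question concrete, here is a minimal sketch, in Python with entirely synthetic data, of one standard way to approach it: regress growth on a baseline time trend plus an investment variable, so the estimated investment effect is separated from the trend. The variable names and numbers are illustrative assumptions, not anything IARPA has published.

```python
import numpy as np

# Hypothetical illustration: separate the effect of R&D investment on
# growth from a baseline time trend using ordinary least squares.
# All data here are synthetic.

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)
rnd_investment = rng.uniform(1.0, 3.0, size=years.size)  # % of GDP, synthetic

# Simulate growth as baseline trend + investment effect + noise
baseline_trend = 0.02 + 0.001 * (years - years[0])
true_effect = 0.005
growth = baseline_trend + true_effect * rnd_investment \
    + rng.normal(0, 0.002, years.size)

# Design matrix: intercept, time trend, and investment
X = np.column_stack([np.ones_like(years, dtype=float),
                     years - years[0],
                     rnd_investment])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)

print(f"estimated baseline trend per year: {coef[1]:.4f}")
print(f"estimated investment effect:       {coef[2]:.4f}  (true: {true_effect})")
```

Because the time trend is included as its own regressor, the investment coefficient reflects variation in growth beyond the baseline; in real data, of course, disentangling the two is far harder than in this toy setup.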

In the realm of intelligence, understanding the relationship between research funding and the emergence of technology could help predict “where the United States may lag in the future if we don’t invest where other countries may lead because of patterns of historical investment,” he said.

Internally, IARPA reviews each of its own research projects every six months to decide whether it should continue to receive funding. The group spends about 25 percent of its budget on testing and evaluation, and last year hired a chief of testing and evaluation to oversee that process.

It’s challenging to assess research quality, Matheny said. The metrics IARPA submits to Congress, including the number of research contracts awarded and the number of publications resulting from the research, aren’t always the most important, he added.

Instead, IARPA assesses the success of its predictive systems by comparing their predictions against events that haven’t yet happened, rather than retrofitting models to what has already occurred. The forecasting projects vary in focus, predicting the outcomes of foreign elections, the onset of interstate conflicts or whether a treaty will be signed.

“That kind of evaluation allows us to see how well systems perform against real events ... seeing how well they predict events that aren’t in the data set that are truly out of sample,” Matheny said.
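A minimal sketch of what that kind of truly out-of-sample scoring can look like, assuming a toy base-rate “model” and a synthetic event record: the forecast is built only from past events, then scored only against events that resolve later. The Brier score used here is a common metric for probabilistic forecasts, not necessarily the one IARPA uses.

```python
import numpy as np

# Sketch of out-of-sample forecast evaluation: score a model only on
# events that occur after its predictions are made, never on the
# history it was fit to. Everything here is synthetic and illustrative.

rng = np.random.default_rng(1)

# Synthetic record of 100 yes/no geopolitical events, in time order
events = (rng.uniform(size=100) < 0.3).astype(float)

# Toy "model": forecast each future event with the base rate
# observed so far; the fit uses only past data
train, test = events[:60], events[60:]
forecast = train.mean()

# Brier score on truly out-of-sample events: mean squared error
# between forecast probability and outcome. Lower is better;
# always guessing 50/50 scores 0.25.
brier = np.mean((forecast - test) ** 2)
print(f"base-rate forecast: {forecast:.2f}, out-of-sample Brier: {brier:.3f}")
```

The key design choice is the temporal split: because the test events postdate everything the model saw, a good score cannot come from retrofitting to history, which is exactly the failure mode Matheny describes avoiding.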