One Year In, the Evidence Act is Producing Results



In the midst of partisan rancor over impeachment, the possibility of war, and low public trust in government, a bright spot may surprise you: A growing number of federal agencies are quietly improving the way they deliver results for the American people. 

A year ago today, President Trump signed the Foundations for Evidence-Based Policymaking Act, bipartisan legislation championed by an unlikely duo, then-House Speaker Paul Ryan, R-Wis., and Sen. Patty Murray, D-Wash. Better known as the Evidence Act, it tasked 24 large federal agencies with reviewing the vast amounts of program data they collect, safeguarding citizens’ private information, and better using the data to improve program effectiveness. The law also encourages agencies to adopt a “yes, unless” mentality to data sharing: allowing, whenever possible, qualified researchers and others to access data to help agencies determine what works.

One year later, those agencies are moving the ball forward. Most have identified evaluation officers and chief data officers and have started developing learning agendas that identify priority areas for evaluation and evidence building. The goal of these efforts is to increase the return on investment of spending and strengthen a culture of learning and improvement. 

It’s challenging work, since evidence building is relatively new territory for many agencies. To help, the Office of Management and Budget released guidance to inform agency efforts. Importantly, civil servants are treating the law as an opportunity to meaningfully improve results rather than as a check-the-box compliance exercise. 

An important resource is the set of leading agencies that are most advanced in building and using evidence. Nine are highlighted in Results for America’s 2019 Invest in What Works Federal Standard of Excellence, which ranks agency data and evaluation efforts. The Education and Labor departments and the Administration for Children and Families within Health and Human Services, for example, have senior data and evaluation leaders who have developed learning agendas to focus their evidence-building resources.

Smaller agencies are making progress, too. The Corporation for National and Community Service issued an evaluation plan and increased its investment in evaluations to over one percent of its budget. The Millennium Challenge Corporation, an agency focused on global development, spent over five percent of its budget on evaluation work and created user-friendly briefs to share findings with its stakeholders. 

As agencies continue implementing the Evidence Act in 2020, we have four suggestions to help them get the most out of their efforts:    

  • Involve agency leadership. Ensure that the act’s requirements are being implemented in ways that are useful to senior decision-makers, including advancing their top priorities and informing their budget or policy decisions. That means involving leaders in the process, such as deciding how the agency’s learning agenda will inform agency decisions. Without this step, the law’s elements could easily become a compliance exercise.
  • Inventory current evaluation efforts. Developing an inventory of ongoing program evaluations and those that will take place over the next few years provides a useful radar of what’s already going on in an agency. This can help inform an agency’s capacity assessment, learning agenda and evaluation plan, all required by the act. More broadly, the steps agencies take to implement the act should build on their current evidence work, rather than being seen as something totally new.  
  • Ensure diverse stakeholders inform learning agendas. Developing an agency learning agenda should not be an insular process because useful learning agenda questions will come from inside and outside an agency. To do that, engage a range of stakeholders, from staff (including program offices, budget, policy, and grants management offices) to external experts and associations. To see a great example of broad stakeholder input in a learning agenda, check out HUD’s Research Roadmap.
  • Tap external researchers to help answer learning agenda questions. A common misperception is that a learning agenda identifies research or operational questions that the agency, alone, should work to answer. In fact, partnerships with external researchers can be an important source of help to answer those questions. Reaching out to researchers in the field is a useful first step. 

As agencies consider these steps, they should keep coming back to the “why”: Why is my agency pushing forward on evidence? If the answer is mainly to satisfy OMB’s guidance, they should loop back to our first suggestion and make evidence useful to leadership.

The Evidence Act may not stem the tide of partisan bickering. Behind the scenes, however, it is already helping agencies make data more useful in achieving real results. And these results will speak for themselves. When agencies successfully improve outcomes for the American people, they help build trust in government.

Jed Herrmann is vice president of state and federal policy implementation at Results for America. 

Robert Shea is a principal at Grant Thornton and served on the Commission on Evidence-Based Policymaking. 

Andrew Feldman is a director at Grant Thornton and served as a special adviser on the evidence team at the White House Office of Management and Budget in the Obama administration.