Federal Agencies Enthusiastic But Lagging In Yearly Evaluation Implementation
A new survey highlights the main issues surrounding government offices’ implementation of OMB evaluation recommendations.
A report measuring the growth and effectiveness of federal agencies’ internal evaluation practices found that, although evaluation officials are in place and most government offices are enthusiastic about evaluation procedures, implementation of the actual results of those processes is broadly lagging.
Authored by the think tank the Data Foundation, the report analyzes government agencies’ adherence to the Foundations for Evidence-Based Policymaking Act. Commonly called the Evidence Act, the 2018 law requires public agencies to submit annual reports outlining their plans to use statistical evidence to support policymaking and other operations within the federal government.
To do so, agencies are required to provide internal data on their individual business operations to an evaluation officer. From there, the results are sent to the Office of Management and Budget for further review and subsequent recommendations on improving agency processes.
Researchers with the Data Foundation surveyed 24 of the largest government bureaus to determine the state of compliance with the Evidence Act. Several trends pointed to broadly enthusiastic support for agency evaluation procedures, but some respondents said staff need more knowledge of how to act on each yearly evaluation’s results in order to improve post-submission implementation.
“Most respondents believe employees in their organization ‘somewhat’ understand what evaluation entails (59%) and how to use evaluation results for improving government programs and services (54%),” the report reads. “The responses suggest room for improvement in bolstering organizational awareness of evaluative thinking and evaluation uses.”
The report also found that half of respondents believe evaluation results are perceived as only “slightly or not at all” influential in operational decisions within their agency. The trend continued, with 61% and 46% saying the same of evaluations’ influence on budget execution and regulatory actions, respectively.
One agency respondent noted that there is an “openness” to evaluations and what they mean for a government department, but that most officials are mainly interested in “fast” evidence.
Poor evaluation execution also stems from agencies lacking sufficient resources. Fewer than a third of agency respondents said their offices had enough personnel dedicated to completing and analyzing evaluations, and most agencies reported having few full-time employees devoted to evaluation efforts.
The researchers issued several recommendations to address the problems plaguing government evaluation efforts, chief among them amending the Evidence Act itself to make the evaluation officer a full-time position.
“With the scope and scale of activities, large agencies are especially well-suited to ensure the role of evaluation officer is a full-time position for an employee of the agency,” the report proposes.
Other suggestions include establishing clearer procedures for implementing evaluation-based recommendations, more dialogue between evaluation officers and Congress to track evaluation implementation, and greater involvement by senior agency leadership in educating staff about evaluation results.
“Federal evaluation officials indicate they are optimistic about the road ahead for the next year,” the report says. “With that optimism, the evaluation community, policymakers, and agency senior leaders must lend their support, encouragement, and enthusiasm for the evaluation endeavor to ensure that evaluative thinking is pervasive, evaluation practice is accepted, and evaluation use is expected throughout the federal government.”