What 'Moneyball for Government' really means

Steve Kelman argues that there are two distinct flavors of this data-driven assessment -- and that their advocates don't talk to each other nearly as much as they should.


I first got to know John Kamensky when he was a detailee from the Government Accountability Office, working as a senior staffer for Vice President Al Gore's "reinventing government" program in the 1990s. He is now at the IBM Center for the Business of Government -- still working on improving government management and performance, just from a different berth.

I recently noticed an article John wrote for The Business of Government, the IBM center's quarterly compendium of opinion, interviews, and discussions of research it has sponsored on government management. The article is called "Is Moneyball Government the Next Big Thing?"

In case any readers have been living under a rock for the last several years: Moneyball was a 2003 best-seller by Michael Lewis, later made into a movie starring Brad Pitt, about how the general manager of the Oakland A's baseball team used data to select players whose actual performance the marketplace undervalued. The book was a harbinger of the dramatic growth of "data analytics" in the business world -- crunching data to help companies spot market opportunities and improve performance.

Following the normal pattern in which hot private-sector management trends often make it into government after a delay, the term "Moneyball for Government" has been appropriated by many advocates of what had otherwise been called "evidence-based government" -- undoubtedly a sexier moniker for what they are trying to make happen. (The community of people interested in this movement now even has its own Twitter handle -- @Moneyball4Gov. A long handle, but I am guessing they wanted to avoid the shorter "@Money4Gov.")

It is apparent from John's piece -- though he doesn't make the distinction quite as clearly as I'd like -- that "Moneyball for Government" can have two meanings. Both involve using data, but using it in different ways.

The first is to use data to make decisions about which programs "work" and which do not, with the purpose of the exercise being to help make budgetary decisions. Kamensky cites the example of an early childhood program called Even Start, which was designed to prepare poor kids for school but did not show evidence of success and was eliminated by the Obama administration. The administration is also experimenting with "scale-up grants" to provide more funding for local initiatives where there already is good evidence of success, and "validation grants" for programs with some, but limited, evidence of success. With validation grants, the limited additional funding includes money for more evaluation research.

Using evidence to propose budget plus-ups or cutbacks has, of course, been around for a long time -- most notably in the George W. Bush administration's efforts to connect performance and budgeting through the Program Assessment Rating Tool. Not surprisingly, these efforts often run into political headwinds from interest groups whose ox is being gored. But anyone who believes in good government should want the role of evidence in these decisions to grow.

But John's piece also indicates that "Moneyball for Government" can mean something different, more akin to what Oakland A's general manager Billy Beane actually did in the original book. Nobody was suggesting that the evidence Beane developed would lead to the Oakland A's, or baseball as a whole, being expanded or eliminated. Instead, organizational leadership was using data to improve the performance of the existing organization -- in this case, improving the A's win-loss record by using data to learn which players to seek. This version of Moneyball is closer to the managerial use of performance measures to improve performance.

These two ways of doing "Moneyball for Government" have different customers: budget offices, legislators, and the media for program decisions; agency managers for performance improvement efforts. They also have different data requirements: Those who want to use data for budget decisions often insist on expensive evaluations, whenever possible using the "gold standard" of randomized experiments, while managers using data to learn what works and what doesn't in existing programs can proceed pragmatically with much less data. And the two communities don't talk to each other very much, even at the Office of Management and Budget, where both are well represented.
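To make the contrast concrete, here is a minimal sketch, in Python, of the kind of analysis the evaluation community treats as the gold standard -- a randomized experiment with a simple difference-in-means test. The outcome scores, sample sizes, and variable names are all invented for illustration; nothing here comes from Kamensky's piece or any actual evaluation.

```python
# A minimal sketch of "gold standard" program evaluation: a randomized
# experiment analyzed with a two-sample t-test. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical outcome scores (say, school-readiness assessments) for
# families randomly assigned to the program or to a control group.
treatment = rng.normal(loc=52.0, scale=10.0, size=300)
control = rng.normal(loc=50.0, scale=10.0, size=300)

# Estimated program effect and a standard significance test.
effect = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated program effect: {effect:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The design answers the budget office's question -- did the program cause the outcome? -- but it requires random assignment and a sizeable sample before it says anything, which is why these evaluations tend to be expensive.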

Perhaps a priority for strengthening "Moneyball for Government" should be to get these two communities talking with and supporting each other. There is also a real need to train government managers to do simplified versions of data analysis for management improvement purposes.
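For a sense of what that simplified, managerial style of analysis might look like, here is a short sketch comparing a performance measure across field offices. Everything in it -- the office names, the measure, the numbers -- is hypothetical; the point is only that a manager can pull an actionable signal from a spreadsheet-sized dataset, no randomized experiment required.

```python
# Pragmatic managerial analysis: no experiment, just comparing an
# existing performance measure across units. All data are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "office": ["Denver", "Atlanta", "Denver", "Boston", "Atlanta", "Boston"],
    "days_to_process": [12, 30, 15, 22, 27, 19],
})

# Average processing time per office, slowest first -- enough to ask
# why one office lags and what the fastest one does differently.
summary = (
    claims.groupby("office")["days_to_process"]
    .agg(["mean", "count"])
    .sort_values("mean", ascending=False)
)
print(summary)
```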
