COMMENTARY | A randomized controlled trial is the best way to find out whether using a government system to file taxes directly with the IRS will improve tax collection while reducing the burden on taxpayers.
Last month, the IRS released a study examining the feasibility of Direct File, an IRS-run electronic filing system for taxpayers. If implemented, Direct File would be an online tax preparation and filing tool available to taxpayers at no cost — in short, a free alternative to commercial products or your neighborhood accountant.
The implications of Direct File could be considerable, from expanding the scope and role of the IRS beyond its current functions to uncertain consequences for the $14.3 billion tax preparation industry. Policymakers should secure adequate evidence of the system’s impact before making Direct File a reality.
The release of the feasibility report was accompanied by ominous declarations of the adverse effects of Direct File on American taxpayers. Some decried the initiative as a way to “supercharge” the IRS as “Americans’ tax preparer, filer, and auditor.” H&R Block released a statement describing Direct File as “a solution without a problem.” Putting partisanship aside, Direct File raises a significant question about tax collection: Will Direct File improve the efficient, fair, and equitable collection of taxes among all Americans?
Fortunately, the tools and policy framework to answer this question already exist. Drawing from the field of program evaluation, the tool is a randomized controlled trial, the “gold standard” of evaluation methods. In brief, as part of piloting the program, the IRS could recruit a sample of taxpayers and randomly assign them to two conditions: one group would use Direct File to file their 2023 taxes and the other would use the range of approaches currently available — using a commercial provider, an accountant, or grabbing a calculator and plugging away on one’s own. If the study were large enough, the IRS could intentionally recruit specific groups of taxpayers — English language learners, gig economy workers — to identify any differences in outcomes. The evaluation would yield rigorous evidence about multiple aspects of Direct File and how it compares to the “business as usual” ways that people file taxes, from the accuracy of returns to the burden of filing taxes.
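For readers curious what the assignment step described above looks like in practice, it can be sketched in a few lines of code. This is a minimal illustration only: the roster, subgroup labels, and even split are assumptions for exposition, not anything drawn from the IRS study.

```python
import random

# Hypothetical pilot roster. The subgroup labels mirror the kinds of
# taxpayers the trial might intentionally recruit.
taxpayers = [
    {"id": 1, "subgroup": "english_language_learner"},
    {"id": 2, "subgroup": "english_language_learner"},
    {"id": 3, "subgroup": "gig_worker"},
    {"id": 4, "subgroup": "gig_worker"},
    {"id": 5, "subgroup": "general"},
    {"id": 6, "subgroup": "general"},
]

def stratified_assign(roster, seed=0):
    """Randomly split each subgroup evenly between Direct File and
    business-as-usual filing, so both arms mirror the full sample."""
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for person in roster:
        strata.setdefault(person["subgroup"], []).append(person)
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for person in members[:half]:
            assignment[person["id"]] = "direct_file"
        for person in members[half:]:
            assignment[person["id"]] = "business_as_usual"
    return assignment

groups = stratified_assign(taxpayers)
```

Stratifying before randomizing, as sketched here, is what lets the evaluation detect differences in outcomes for specific groups rather than only an overall average.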
This type of evaluation is particularly powerful because it will assess the causal relationship between Direct File and taxpayer outcomes — that is, does Direct File lead to more accurate tax reporting? To more on-time submissions? Additionally, the IRS could examine how the burden and usability of Direct File compare to the usual ways Americans file taxes. Randomized controlled trials stand apart from other methods because they generate evidence about the viability of a new policy or program relative to whatever is currently in place (the “counterfactual,” in the evaluator’s parlance).
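Because assignment is random, the causal comparison at the heart of such a trial reduces, in its simplest form, to a difference in group means. A toy illustration, with every number invented purely for exposition:

```python
# Hypothetical outcomes: 1 = return filed on time and accepted without
# correction, 0 = otherwise. Illustrative data only, not study results.
direct_file_outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
business_as_usual_outcomes = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]

def mean(xs):
    return sum(xs) / len(xs)

# Random assignment makes a simple difference in group means an unbiased
# estimate of Direct File's causal effect on this outcome.
effect = mean(direct_file_outcomes) - mean(business_as_usual_outcomes)
```

A real evaluation would pair this point estimate with confidence intervals and subgroup analyses, but the core logic is no more complicated than this subtraction.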
The policy framework to support such an evaluation celebrates its fifth anniversary next year — the Foundations for Evidence-Based Policymaking Act (“Evidence Act”). In a rare instance of bipartisanship, Congress passed the Evidence Act with the goal of using data and evaluation to guide decision-making. The Evidence Act requires agencies to publish learning agendas — systematic plans for identifying and addressing policy questions relevant to an agency’s programs, policies, and regulations. The law also requires annual evaluation plans, which describe the evaluations each agency will conduct in order to answer key questions in its learning agenda.
Progress implementing the law is well underway. A quick skim of evaluation.gov reveals the breadth and depth of evaluation activities going on across government — from multibillion-dollar grant programs to agency self-assessments of diversity and inclusion initiatives. The activities being undertaken to meet the expectations of this law represent the most ambitious evidence building our government has ever undertaken.
Among the learning agendas and evaluation plans accessible on evaluation.gov are Treasury’s. And included in the questions in Treasury’s learning agenda: How can the IRS address taxpayer needs and preferences to deliver a better taxpayer experience? The department’s evaluation plan lists research it will perform to answer this question. But those activities don’t include a rigorous study that would assess the extent to which a program like Direct File enhanced or detracted from the taxpayer experience, not to mention whether it improved equitable administration of the tax code.
A randomized controlled trial evaluation of Direct File would generate impartial evidence about the likely effects of the program if it were implemented at scale. But the IRS must act quickly. The design and implementation of a full-scale evaluation would need to begin immediately in order to capture data from the 2024 tax season. Consistent with the Evidence Act, this is the evidence, not partisan accolades or condemnations, that should drive decisions about whether Direct File moves forward.