The United States leads the world in dollars invested in furthering innovation. It won’t for long.
In the summer of 2012, despite the controversy over the solar panel manufacturer Solyndra, a presidential candidate came out swinging for government funding of basic research:
Government has a role to play in innovation in the energy industry. History shows that the United States has moved forward in astonishing ways thanks to national investment in basic research and advanced technology.
That may sound like a classic Obama line, down to the allusion to history. In fact, the candidate was Mitt Romney. The 2012 GOP platform strongly supported government funding for research—making government research and development the closest thing in D.C. to a bipartisan value.
Government funding for research and development is a tough thing to write about. First, it mostly happens out of the public eye. Second, it’s filtered through many smaller agencies and sub-departments, making it hard to track.
It’s also subject to common misunderstandings. Contrary to the popular belief that it’s trending down, U.S. national research and development is in fact slowly increasing as a portion of gross domestic product. That said, American public investment in basic research is mostly flat: It’s industrial investment in development that’s driving the change. American universities are also investing more and more in basic research.
The United States leads the world in dollars invested in research. It won’t for long: China plans to outpace it within the next four years. As a share of their economies, Israel, Japan, South Korea, and Finland all invest more in research than the United States does.
So could this country handle research and development better? Here are some common answers—and the problems they face.
There’s no need to read any further: As this 2013 essay from Cato Unbound argues, it’s folly for the government to fund public science. Terence Kealey, a biochemist who writes on science policy, argues that scientific research is not a public good—and that, in any case, scientific investment and advancement will occur no matter who pays. Nikola Tesla and Thomas Edison, after all, did not have DARPA. Following Kealey’s argument, and considering past examples, research should be concentrated in industry, alongside “an armamentarium of private philanthropic funders of university and of foundation science by which non-market, pure research (including on orphan diseases) would be funded.”
The first question we must ask: What the hell’s an armamentarium? “A collection of resources available for a certain purpose,” answers the New Oxford American, supplying the more concise arsenal as a synonym. But I digress.
Kealey imagines the Bill & Melinda Gates Foundation and other organizations like it stepping in to fund basic research. Right now, he says, money from those organizations is getting crowded out. So how would he answer Bill Gates himself, who in the pages of this magazine called for the U.S. government to triple its energy investment funding? Gates himself says he would invest (patiently, at great risk, and looking to the long term) only in “the spin-offs that will come out of that government-funded activity.”
To fit Kealey’s vision, how far back would government funding have to be rolled? In a response to his critics, Kealey approvingly cites a 19th-century scientist who implies the United States should not set standards in high technology. But when the United States has tried this recently, things haven’t turned out so peachily.
In 1984, the Federal Communications Commission declined to declare one standard for stereophonic sound in AM radio, saying the market would decide. The market didn’t. Music stations left the AM band for FM, and, by the early 2000s, there were almost no AM providers still broadcasting in stereo. Through a laissez-faire regulatory scheme, the United States lost an entire radio band—and the economic value that might have come with it. (Seriously, the consequences of this were huge. Imagine if AM sounded as clear as FM!)
Kealey would reject this example as “anecdotal,” but I think the best questions to pose to any straight-up rejection of government R&D funding are along these lines. That is, the best arguments for preserving successful government programs are small-c conservative.
The United States has invented whole industries—biomedicine and networked microprocessor computing—with government funding. Is a theoretical argument about hypothetical private sources of funding a good reason to stop ambitious public investment?
What’s more, when every nation remotely like the U.S., developed or developing, invests heavily in research, why should we embark upon intellectual adventurism? China is aiming to exceed U.S. research investment in real dollars by 2020. Is this the time to test out a pretty theory?
It’s not a theory! Consider what American industry has done even in this era of extensive government support for research. The iPhone, for instance—that’s a triumph of privately manufactured high technology.
Sure. But what about the funding of the more fundamental technology in the iPhone and devices like it? As the economist Mariana Mazzucato has argued, many of the technologies that make the iPhone “smart” were brought into the world by government funding. Microchips, computer networking, Siri, and the Global Positioning System were all midwifed by U.S. government agencies.
Maybe there’s a midpoint. What if government agencies and grant-making organizations had to compete for funding? A market-like environment seems to work well for actual companies; perhaps it would discipline public research, too.
If this plan works so well, why didn’t the Soviet Union develop the Internet first?
I’ll backtrack. Benjamin Peters, a professor at the University of Tulsa, has written a history of the U.S.S.R.’s failed attempts to build a nationwide computer network, "How Not to Network a Nation: The Uneasy History of the Soviet Internet." He found that the Soviet Union had the same kinds of actors at the same time as the United States: “brilliant scientists, ambitious agencies, committed bureaucrats.”
But here’s what was different: Institutions in the Soviet Union couldn’t, or didn’t want to, collaborate with each other. “These grandiose visions of what networks can do were breaking against the rocks of internecine bureaucratic infighting,” Peters told me.
“How do you get institutions that work together?,” he asked. “That’s what I’m super excited about. If some institutions can take a hit in the name of prudent long-term investments, then government plays an essential role in innovation.”
He also spoke up for the value of institutional hierarchy, and for institutions having clear but limited accountability to their superiors. “A lot of people argue that the Soviet state was too hierarchical,” he said. “But within particular non-military spaces, it wasn’t too hierarchical but too flat.
“The problem with Soviet economic bureaucracy was not that it was too hierarchical and rigid so much as that it was too pernicious and unpredictable,” he added in a later email. “One of the counterintuitive virtues of the military-industrial-academic complex in the West is precisely that it was a complex: It permitted cross-agency collaboration.”
So: Would building more competition into funding promote this kind of collaboration? Would it make organizations more likely to work together?
The government invests too much in private companies. The government should handle basic research, and industry should handle development. And anyway, research and development funding is a handout to friends of the current party in power. Look at what happened with Solyndra—it filed for bankruptcy after receiving $528 million in government funding.
First, it’s worth noting that the Energy Department loan program that gave Solyndra that money had recouped its losses and was operating in the black by November 2014. Second, the same program loaned $465 million to Tesla at the nadir of the recession in 2009—and Tesla has paid back the loan and seems to be doing rather well at the moment.
Even so, let’s dig into Tesla’s case for other ways that government funding could be structured. Mariana Mazzucato and others have argued that the United States should retain equity in companies it invests in. That’s especially interesting since so many American-founded companies don’t feed the public coffers through tax revenue. Israel even has a public venture-capital firm. (The U.S. does too, but it serves only defense interests.)
Could the United States experiment with techniques like this? With Tesla, in fact, it did: The Energy Department's investment entitled it to “options on 3 million shares of Tesla stock,” according to Slate. But Tesla got out of that entitlement by repaying the government early. Should it not have had the option to do that?
Here are some more questions to think about for the future of government-supported innovation:
How can the United States create a more stable research environment? Peters told me that his research amounts to “two cheers for virtuous governance.” So how can the country move closer to that kind of governance, which supports long-term research? Right now, half of the government research and development budget is spent on defense—is that the right way to foster technology for civilian purposes? (It might be: GPS, after all, started out as a deeply militarized program.)
Should the U.S. forge a closer relationship between research and development? Right now, most research dollars are federal, while most development is funded by industry. Yet other countries, including Germany and Finland, run a much tighter loop between public and private efforts. Should America approximate that model—or is it better to keep the two parts separate, as many Republicans assert?
And here’s a final question: If bipartisan support for federal research is so high, and if the world could gobble up much more U.S. debt at historically low interest rates, why don’t we double the government research and development budget?