DOE appeal: Breaking exaflop barrier will require more funding

As China takes the lead in the supercomputer race, the Energy Department asks for continued congressional support in its efforts to advance.

The Cielo supercomputer at Los Alamos National Laboratory, built by Cray, has a theoretical maximum performance of 1.37 petaflops. (LANL photo)

Department of Energy-funded supercomputers were the first to crack the teraflop (1997) and petaflop (2008) barriers, but the United States is unlikely to be the first nation to break the exaflop barrier without significant increases in DOE funding.

That projection is underscored by China's 55-petaflop Milky Way 2, which took the title of world's fastest supercomputer in June with speeds roughly double those of DOE's 27-petaflop Titan at Oak Ridge National Laboratory.

China is rapidly stockpiling cash for its supercomputing efforts, while Japan recently invested $1 billion in building an exascale supercomputer – both countries hope to have one by 2020 – and the European Union, Russia and a handful of large private-sector companies are in the mix as well.

DOE's stated goal has also been to develop an exascale supercomputing system – one capable of a quintillion, or 1,000,000,000,000,000,000, floating point operations per second (FLOPS) – by 2020, but developing the technology to make good on that goal would take at least an additional $400 million in funding per year, said Rick Stevens, associate laboratory director at Argonne National Laboratory.

"At that funding level, we think it's feasible, not guaranteed, but feasible, to deploy a system by 2020," Stevens said, testifying before the House Science, Space and Technology subcommittee on Energy on May 22. He also said that current funding levels wouldn't allow the United States to hit the exascale barrier until around 2025, adding: "Of course, we made those estimates a few years ago when we had more runway than we have now."

DOE's Office of Science asked for more than $450 million for its Advanced Scientific Computing Research program in its fiscal 2014 budget request, while DOE's National Nuclear Security Administration sought another $400 million for its Advanced Simulation and Computing Campaign. That's a lot of money at a time when sequestration dominates headlines and the government is pinching pennies.

Subcommittee members weighed in on the matter, stressing the importance of supercomputing advances while keeping a realistic eye on the budget. Chairman Cynthia Lummis (R-Wyo.) said the government must ensure DOE "efforts to develop an exascale system can be undertaken in concert with other foundational advanced scientific computing activities."

"As we head down this inevitable path to exascale computing, it is important we take time to plan and budget thoroughly to ensure a balanced approach that ensures broad buy-in from the scientific computing community," Lummis said. "The federal government has limited resources and taxpayer funding must be spent on the most impactful projects."

An exascale supercomputer would be 1,000 times more powerful than the IBM Roadrunner, which was the world's fastest supercomputer in 2008. Developed at Los Alamos National Laboratory with $120 million in DOE funding, it was the first petaflop-scale computer, handling a quadrillion floating point operations per second. Yet in just five years it was rendered obsolete by hundreds of faster supercomputers and powered down, an example of how quickly supercomputing is changing. Supercomputers are getting faster and handling more expansive projects, often in parallel.
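As a rough check, the multiples quoted here follow directly from the metric prefixes. Below is a back-of-the-envelope sketch in Python using the article's round figures (the variable names are illustrative, not drawn from any DOE system):

    # Back-of-the-envelope check of the performance multiples quoted above.
    PETAFLOP = 1e15   # a quadrillion operations per second, first reached in 2008
    EXAFLOP = 1e18    # a quintillion operations per second, the 2020 goal

    roadrunner = 1 * PETAFLOP    # first petaflop-scale system
    titan = 27 * PETAFLOP        # Titan's theoretical peak
    exascale = 1 * EXAFLOP       # target exascale system

    print(f"Exascale vs. Roadrunner: {exascale / roadrunner:,.0f}x")   # 1,000x
    print(f"Exascale vs. Titan: {exascale / titan:,.0f}x")             # about 37x
    print(f"Titan vs. Roadrunner: {titan / roadrunner:,.0f}x")         # 27x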


Supercomputers through time

Some highlights of the history of supercomputing:

  • Cray X-MP  1982
  • ASCI RED  1997
  • IBM Roadrunner  2008
  • Watson  2011
  • Titan  2012
  • Milky Way 2  2013

The U.S. Postal Service, for instance, uses its mammoth Minnesota-based supercomputer, with 16 terabytes of in-memory computing, to compare 6,100 processed pieces of mail per second against a database of 400 billion records in around 50 milliseconds.
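Those figures imply a great deal of work happening at once. A back-of-the-envelope sketch in Python, assuming each comparison really does occupy the full 50 milliseconds (the article does not spell this out), gives a sense of the parallelism involved:

    # Rough parallelism estimate from the USPS figures above; a sketch under
    # stated assumptions, not a description of the actual system.
    arrival_rate_per_sec = 6_100   # mail pieces processed per second
    lookup_time_sec = 0.050        # assumed time per comparison, about 50 ms

    # Little's law: work in flight = arrival rate x time each item spends in the system
    concurrent_lookups = arrival_rate_per_sec * lookup_time_sec
    print(f"Comparisons in flight at any moment: about {concurrent_lookups:.0f}")   # about 305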

Today's supercomputers are orders of magnitude faster than their famous forerunners of the 1990s and 2000s.

IBM's Deep Blue, which defeated world chess champion Garry Kasparov in a six-game match in 1997, was one of the 300 fastest supercomputers in the world at the time. At 11.38 gigaflops, Deep Blue calculated 200 million chess moves per second, yet it was roughly 90,000 times slower than the now-retired Roadrunner, which DOE's National Nuclear Security Administration used to model the decay of America's nuclear arsenal. Of vital importance to national security before it was decommissioned, Roadrunner essentially predicted whether nuclear weapons – some made decades ago – were still operational, giving officials a better grasp of the country's nuclear capabilities.

Titan, which operates at a theoretical peak speed of 27 petaflops and is thus 27 times faster than Roadrunner, has been used to run complex climate models and simulate nuclear reactions. Even at its blazing speed, however, Titan falls well short of completing tasks such as simulating whole-Earth climate and weather models with precision. Computer scientists believe an exascale supercomputer might be able to do it: given enough information to dissect, such a machine might predict a major weather event like Hurricane Sandy long before it takes full form.

Yet reaching exascale capabilities will not be easy for any country or organization, even those that are well funded, due to a slew of technological challenges that have not yet been solved, including how to power such a system. Using today's CPU technology, powering and cooling an exascale supercomputing system would take 2 gigawatts of power, according to various media reports. That is roughly equivalent to the maximum power output of the Hoover Dam.
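The article's own numbers translate into a per-watt figure. A quick Python calculation, assuming the full 2 gigawatts goes to computation, shows what today's CPU technology would deliver per watt:

    # Energy arithmetic implied by the 2-gigawatt estimate above (round numbers only).
    target_flops = 1e18   # one exaflop: a quintillion operations per second
    power_watts = 2e9     # 2 gigawatts, the reported requirement with today's CPUs

    flops_per_watt = target_flops / power_watts
    print(f"Implied efficiency: {flops_per_watt / 1e9:.1f} gigaflops per watt")   # 0.5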