Wednesday, 30 November 2011, 00:41

Supercomputers Turn Green in Race to Exascale Mountaintop

Is the world just looking at supercomputers through green-tinted glasses? (Photo: Argonne National Laboratory)

The world’s supercomputers are getting greener. But they’d better keep it up if they’re going to break the vaunted exascale barrier any time soon.

The latest ranking of the most efficient supercomputers on Earth — the twice-yearly Green500 — shows that the greenest machines are getting greener at an accelerating rate, thanks in part to the rise of graphics processors in these massive server clusters. But the trend must continue if we’re to reach the widely held goal of building exascale supercomputers that consume a manageable 20 megawatts of power by the end of the decade.

An exascale cluster would be 1,000 times more powerful than today’s fastest supercomputers.
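
To put the 20-megawatt goal in perspective, here is a quick back-of-the-envelope calculation. The exaflop and 20-megawatt figures come from the article; the efficiency requirement they imply is illustrative, not a number Cameron or Snell cites. A minimal Python sketch:

    # Back-of-the-envelope: the efficiency an exascale system would need
    # under the 20-megawatt budget cited in the article.
    EXAFLOP = 1e18             # floating-point operations per second
    POWER_BUDGET_WATTS = 20e6  # 20 megawatts

    required_flops_per_watt = EXAFLOP / POWER_BUDGET_WATTS
    print(f"Required efficiency: {required_flops_per_watt / 1e9:.0f} gigaflops per watt")
    # Prints 50 gigaflops per watt, i.e. 50,000 megaflops per watt.

In other words, an exaflop machine held to a 20-megawatt budget would need to deliver on the order of 50,000 megaflops per watt.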

IBM snagged the top five spots in the Green500 with its custom Blue Gene/Q systems, up from the top two in June. But heterogeneous systems — which combine off-the-shelf x86 CPUs with graphics-processor accelerators — claimed a larger share of the slots near the top of the list. “The GPUs are continuing to dominate,” said Kirk Cameron, a Virginia Tech computer science professor and co-keeper of the Green500 list.

The world’s fastest computer, Japan’s K supercomputer, fell from number 6 to number 32 despite slightly improved efficiency. The reason: a new crop of graphics-accelerated systems filling slots 6 through 31. And the only supercomputer to make the top 10 of both the Green500 and the Top500 list of the world’s fastest computers, a machine at the Tokyo Institute of Technology, also uses graphics accelerators.

The continued rise of graphics supercomputing sets the stage for heated competition between NVIDIA and Intel, said Addison Snell, CEO of Intersect360 Research, a market research firm that specializes in high-performance computing. “Intel versus NVIDIA will be really interesting to watch starting toward the end of 2012 and into 2013.”

NVIDIA has the edge in peak performance and efficiency, but Intel has an advantage in application support and tools, according to Snell. NVIDIA also has a big lead in adoption. “About 90 percent of high-performance computing users who have tried heterogeneous computing have done it on NVIDIA GPUs,” he said.

Despite its current dominance of the Green500 top 10, IBM is likely to remain a niche player in high-performance computing, said Snell. Big Blue has between five and 10 percent of the processor market for high-performance computing now, and it’s likely to stay at that level, he said.

The most efficient supercomputers are getting more efficient at a faster rate, said Cameron. Eight of the top 10 produced more than 1,000 megaflops per watt, up from three in June, two in November 2010, and none in June 2010. The top four are close to or over the 2,000 megaflops-per-watt mark. “We really jumped in efficiency over the last year,” he said.
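
Set against the exascale budget described above, those figures suggest how far efficiency still has to climb. A rough sketch using the article’s numbers; the 50,000 megaflops-per-watt target follows from one exaflop at 20 megawatts, and the resulting multiple is illustrative:

    # Rough gap between today's greenest systems and the 20-megawatt exascale
    # target, using the megaflops-per-watt figures cited in the article.
    current_best_mflops_per_watt = 2_000      # roughly where the Green500 leaders sit
    exascale_target_mflops_per_watt = 50_000  # one exaflop within 20 megawatts

    improvement_needed = exascale_target_mflops_per_watt / current_best_mflops_per_watt
    print(f"Efficiency would have to improve roughly {improvement_needed:.0f}x")
    # Prints roughly 25x.

That works out to roughly a 25-fold improvement over today’s leaders by the end of the decade.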

There is also good news for high-performance computing users looking for a good deal: speedy performance at an affordable price, if they’re willing to wait a bit. Over the first three years of the list — 2007 to 2010 — the efficiency of systems built from commodity off-the-shelf parts lagged behind that of custom systems by 18 to 24 months, said Cameron. So if you’re in the business of building supercomputers and you’re willing to wait 18 to 24 months, he said, you can get efficiencies similar to those of the really custom, really expensive systems.

According to Cameron, getting to exascale at 20 megawatts will require the same kind of efficiency improvements we’ve seen over the last three or four years. But that may not happen. The key question is whether the current gains come more from wringing inefficiencies out of the technology or from true innovation. “Are we just hitting the low-hanging fruit and then those trends are going to stop, or are we going to continue to see these accelerating increases in the efficiency year-to-year?” said Cameron.

“I would say that we’re probably mostly low-hanging fruit,” he added. And if that’s the case, he said, “we’re in a lot of trouble.” Reaching the exascale goal would then require “a serious paradigm shift.”
