Friday, 23 December 2011, 12:30

Amazon Builds World's Fastest Nonexistent Supercomputer


Amazon's supercomputer doesn't exist. Except that it does. Photo: MarkHillary/Flickr

The 42nd fastest supercomputer on earth doesn’t exist.

This fall, Amazon built a virtual supercomputer atop its Elastic Compute Cloud — a web service that spins up virtual servers whenever you want them — and this nonexistent mega-machine outraced all but 41 of the world’s real supercomputers.

Yes, beneath Amazon’s virtual supercomputer, there’s real hardware. When all is said and done, it’s a cluster of machines, like any other supercomputer. But that virtual layer means something. This isn’t a supercomputer that Amazon uses for its own purposes. It’s a supercomputer that can be used by anyone.

Amazon is the poster child for the age of cloud computing. Alongside their massive e-tail business, Jeff Bezos and company have built a worldwide network of data centers that gives anyone instant access to all sorts of computing resources, including not only virtual servers but virtual storage and all sorts of other services that can be accessed from any machine on the net. This global infrastructure is so large, it can run one of the fastest supercomputers on earth — even as it’s running thousands upon thousands of other virtual servers for the world’s businesses and developers.

This not only shows the breadth of Amazon’s service. It shows that in the internet age, just about anyone can run a supercomputer-sized application without actually building a supercomputer. “If you wanted to spin up a ten or twenty thousand [processor] core cluster, you could do it with a single mouse click,” says Jason Stowe, the CEO of Cycle Computing, an outfit that helps researchers and businesses run supercomputing applications atop EC2. “Fluid dynamics simulations. Molecular dynamics simulations. Financial analysis. Risk analysis. DNA sequencing. All of those things can run exceptionally well atop the [Amazon EC2 infrastructure].”
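To make the "single mouse click" claim concrete, here is a minimal present-day sketch of requesting a batch of EC2 instances programmatically. It assumes the boto3 library, and the machine image, instance type, and counts are hypothetical placeholders; the article does not describe Cycle Computing's actual tooling.

    # Illustrative sketch: launching a batch of EC2 instances with boto3.
    # All identifiers below are placeholders, not real configuration.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Each compute-optimized instance exposes many cores, so a few thousand
    # instances add up to a cluster of tens of thousands of cores.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",     # placeholder image with the HPC stack installed
        InstanceType="c5.4xlarge",  # example compute-optimized type (16 vCPUs)
        MinCount=100,               # fail unless at least this many can start
        MaxCount=100,               # request 100 instances in this call
    )

    instance_ids = [i["InstanceId"] for i in response["Instances"]]
    print(f"Launched {len(instance_ids)} instances")

Repeating a request like this across availability zones, then pointing a job scheduler at the resulting machines, is the general pattern services such as Cycle Computing automate.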

And you could do it for a pittance — at least compared to the cost of erecting your own supercomputer. This fall, Cycle Computing set up a virtual supercomputer for an unnamed pharmaceutical giant that spanned 30,000 processor cores, and it cost $1,279 an hour. Stowe — who has spent more than two decades in the supercomputing game, working with supercomputers at Carnegie Mellon University and Cornell — says there’s still a need for dedicated supercomputers you install in your own data center, but things are changing.
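For scale, that quoted rate works out to roughly four cents per core per hour. A back-of-the-envelope calculation using the article's figures (the eight-hour run length is purely illustrative):

    # Cost per core-hour for the 30,000-core cluster at $1,279 per hour.
    cores = 30_000
    hourly_rate = 1_279.0                            # dollars per hour, whole cluster

    per_core_hour = hourly_rate / cores
    print(f"${per_core_hour:.4f} per core-hour")     # ~ $0.0426

    # Hypothetical: an 8-hour run on the full cluster
    print(f"${hourly_rate * 8:,.0f} for 8 hours")    # ~ $10,232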

“I’ve been doing this kind of stuff for a while,” he says, “and I think that five or 10 years from now, researchers won’t be worrying about administering their own clusters. They’ll be spinning up the infrastructure they need [from services like EC2] to answer the question they have. The days of having your own internal cluster are numbered.”

To Cloud or Not to Cloud

The old guard does not agree. Last month, during a roundtable discussion at the Four Seasons hotel in San Francisco, many of the companies that help build the world’s supercomputers — including Cray and Penguin Computing — insisted that cloud services can’t match what you get from a dedicated cluster when it comes to “high-performance computing,” or HPC. “Cloud for HPC is still hype,” said Charlie Wuischpard, the CEO of Penguin Computing. “You can do some wacky experiments to show you could use HPC in that environment, but it’s really not something you would use today.”

But it is being used today. And Amazon’s climb up the Top 500 supercomputer list shows that EC2 has the capacity to compete with at least the supercomputers that are built with ordinary microprocessors and other commodity hardware parts. “Rather than building your own cluster,” says Jack Dongarra, the University of Tennessee professor who oversees the annual list of the Top 500 supercomputers, “Amazon is an option.”

Amazon’s virtual supercomputer wasn’t nearly as powerful as the massive computing clusters sitting at the peak of the Top 500. It could handle about 240 trillion calculations a second — aka 240 teraflops — while the machine at the top of the list, Japan’s K Computer, reaches more than 10 quadrillion calculations a second, or 10.51 petaflops. As Dongarra points out, clusters like the K Computer use specialized hardware you won’t find at Amazon, or in supercomputers outside, say, the top 25 on earth. “The top 25 are rather specialized machines,” Dongarra says. “They’re designed in some sense for a subset of very specialized applications.”
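Putting the two benchmark figures side by side, a simple calculation shows the gap is roughly a factor of 44:

    # Rough comparison of the article's two benchmark figures.
    amazon_flops = 240e12        # ~240 teraflops, Amazon's virtual cluster
    k_computer_flops = 10.51e15  # 10.51 petaflops, Japan's K Computer

    print(f"K Computer is ~{k_computer_flops / amazon_flops:.0f}x faster")  # ~44x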

But according to Dongarra, you could still run these specialized applications atop Amazon. They just wouldn’t be quite as fast. And though some researchers and businesses are looking for petaflops, others will do just fine with teraflops.


Cade Metz is the editor of Wired Enterprise. Got a NEWS TIP related to this story -- or to anything else in the world of big tech? Please e-mail him: cade_metz at wired.com.
