Wednesday, 2 November 2011, 05:00

Big Data, Fast and Slow: Why HP's Project Moonshot Matters


HP's Project Moonshot Server. Source: Jon Snyder

On Tuesday, HP unveiled a major shift in its datacenter strategy, announcing that it would begin to build what it calls “extreme low-power servers” based on chips that were originally designed for phones, tablets, and netbooks. HP clearly thought they were announcing something big, as the company refused to pre-brief any journalists on the specifics of what they would be unveiling. Even when I showed up at the event itself, I heard a journalist from a major media outlet pressing one of the HP PR team for details before the event, and all she did was tell him to sit down and watch the show. The response among the press corps was muted, though, as most of us had been anticipating something like this, and it didn’t seem to be that big a deal.

So was it a big deal? I think the answer is yes.

Real-time vs. batch

Before I get into what Project Moonshot is, it’s worth setting some context by taking a look at the rationale for HP’s big shift. My aha moment, where I really understood the forces at work behind Moonshot and indeed behind the recent acceleration of the entire “microserver” or “physicalization” trend, came through a presentation by Twitter’s Nathan Marz.

In the presentation, which describes how Twitter’s Storm project complements Hadoop in the company’s analytics efforts, Marz says in essence (and here I’m heavily paraphrasing and expanding) that there are really two types of “Big Data”: fast and slow.

Fast “Big Data” is real-time analytics, where messages are parsed and scanned for some kind of significance as they come in at wire speed. In this type of analytics, you apply a set of pre-developed algorithms and tools to the incoming datastream, looking for events that match certain patterns so that your platform can react in real time. A few examples: Twitter runs real-time analytics on the Twitter firehose in order to identify trending topics; Topsy runs real-time analytics on the same Twitter firehose in order to identify new topics and links that people are discussing, so that it can populate its search index; and a high-frequency trader runs real-time analytics on market data in order to identify short-term (often in the millisecond range) market trends so that it can turn a tiny, quick profit.

Real-time analytics workloads have a few common characteristics, the most important of which is that they are latency-sensitive and compute-bound. These workloads are also bandwidth-intensive, in that the compute part of the platform can process data faster than storage and I/O can feed it; keeping the processors fed is what makes compute the bottleneck. People doing real-time analytics need lots and lots of CPU horsepower (and even GPU horsepower, in the case of high-frequency trading), and they keep as much data as they can in RAM so that they’re not bottlenecked by disk I/O.
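
To make that concrete, here’s a back-of-the-napkin sketch in Python of the kind of in-memory, sliding-window counting that underlies a trending-topics pipeline. Everything here is invented for illustration (message_stream() is a made-up stand-in for a real firehose client), and a production system would shard this work across many machines, but it shows why the working set lives in RAM rather than on disk.

```python
import time
from collections import Counter, deque

WINDOW_SECONDS = 60  # only count hashtags seen in the last minute

def message_stream():
    # Hypothetical stand-in for a firehose client; any iterable of
    # message strings would work here.
    yield from ["#hp moonshot", "#hadoop rocks", "#hp wins", "#hp again"]

window = deque()    # (timestamp, hashtag) pairs, oldest first
counts = Counter()  # live tallies for the current window

for message in message_stream():
    now = time.time()
    for tag in (w for w in message.split() if w.startswith("#")):
        window.append((now, tag))
        counts[tag] += 1
    # Evict entries that have aged out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        _, old = window.popleft()
        counts[old] -= 1
        if counts[old] == 0:
            del counts[old]
    print("trending now:", counts.most_common(3))
```

The whole state fits in memory and is updated per message, which is exactly the latency-sensitive, CPU-hungry profile described above.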

Twitter’s Storm is one solution that’s aimed at this type of fast Big Data, but there are others, most of which fall under the heading of event processing.

In order to make fast Big Data useful, you need a way to develop and refine the tools and algorithms that your real-time analytics package can run at lightning speed on the incoming data. This is where slow Big Data — which is what most people are talking about when they simply say “Big Data” — comes in.

With slow Big Data, you use a platform like Hadoop to gather information and test hypotheses by running queries against a vast backlog of historical data. This type of Big Data spends most of its time waiting on mass storage to get back to it, so it’s I/O-bound and not compute-bound. Indeed, you might describe the Hadoop usage model as, “think really hard about a problem, formulate a set of questions whose answers will help you model the problem, and then wait for your Hadoop cluster to load your data archive and answer the batch of questions you just submitted.”
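
For readers who haven’t worked with Hadoop, here’s a toy, single-machine sketch in Python of the MapReduce model that Hadoop distributes across a cluster; the archive list is made-up data standing in for what would really be terabytes on disk. The thing to notice is that every phase is a bulk pass over records, so the job’s speed is set by how fast those records can be streamed off storage, not by how fast any one core can think.

```python
from itertools import groupby
from operator import itemgetter

# Made-up archive; a real Hadoop job would scan terabytes from HDFS.
archive = [
    "moonshot server launch",
    "hadoop cluster batch job",
    "moonshot hadoop batch job",
]

def mapper(record):
    # Map phase: emit one (word, 1) pair for every word in the record.
    for word in record.split():
        yield word, 1

def reducer(word, counts):
    # Reduce phase: collapse all of one key's values into a single answer.
    return word, sum(counts)

# Shuffle/sort phase: group the emitted pairs by key before reducing.
pairs = sorted(kv for record in archive for kv in mapper(record))
totals = [reducer(word, [n for _, n in group])
          for word, group in groupby(pairs, key=itemgetter(0))]
print(totals)  # [('batch', 2), ('cluster', 1), ('hadoop', 2), ...]
```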

Because this type of batch workload is neither latency sensitive nor compute-bound, it’s a great fit for a dense cluster of cheap, low-power, low-performance processors hooked up to commodity SATA and Ethernet interfaces. The low-power cores can wait for data to arrive from the disk or network interface, and then they can swarm it like ants until it’s reduced to something that’s human-usable.

The output of this slow Big Data process is a set of tested hypotheses, models, algorithms, and statistical tools that can then be applied as inputs to the real-time analytics platform.

[Diagram: the slow, batch side of Big Data producing the models and algorithms that feed the fast, real-time analytics platform]

I’ve drawn a quick and dirty diagram of this process, above. As you can see, the bottlenecks for Hadoop are the disk I/O from the data archive and the human brain’s ability to form hypotheses and turn them into queries. The first bottleneck can be addressed with SSD, while fixing the second is the job of the growing stack of more human-friendly tools that now sits atop Hadoop.
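
To tie the two halves together, here’s one more deliberately tiny Python sketch (again with invented data and thresholds) of the loop in the diagram: a batch pass over an archive yields a model, in this case just baseline word frequencies, and the real-time side applies that model to incoming data to flag anomalies.

```python
from collections import Counter

# Slow side (batch): mine the archive for a baseline rate for each word.
# In production this would be a Hadoop job over months of history.
archive_words = ("server up server down all systems normal " * 100).split()
baseline = {word: n / len(archive_words)
            for word, n in Counter(archive_words).items()}

# Fast side (real time): compare live word rates in a small window of
# incoming traffic against the batch-derived baseline.
def flag_spikes(live_words, baseline, factor=3.0):
    live = Counter(live_words)
    return [word for word, n in live.items()
            if n / len(live_words) > factor * baseline.get(word, 0.001)]

print(flag_spikes("down down down up normal down".split(), baseline))
# ['down'] -- "down" is running far above its historical rate
```

The batch job is rerun occasionally to refresh the model; the streaming side applies it to every message as it arrives.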

Right now, the bulk of the analytics being done on platforms like Twitter and Facebook falls into the “slow Big Data” category, hence the booming popularity of Hadoop. But current server hardware is ill-suited to this workload category, because it’s overpowered and overpriced for the job. So there is a significant opportunity in the server space for exactly what HP has announced with Project Moonshot.

Project Moonshot’s prospects

Project Moonshot has three main components, or what HP called “three key pillars.” The first of these is the Redstone server development platform, which is what HP unveiled today (literally — the HP presenter pulled a cloak off it). What was shown today was a non-functional prototype. (I confirmed this by talking to the CEO of Calxeda; to make the system functional, you’d have to run a bunch of SATA ribbon cables from each of the EnergyCard modules to each of the SATA hard drive modules, because Calxeda’s proprietary board interface doesn’t carry SATA traffic.)

This Calxeda-made, ARM-based prototype is only the first system that HP will launch; it will be available to customers in the second quarter of 2012. x86-based Redstone systems are definitely on the roadmap, and an HP exec confirmed that a forthcoming version of Redstone will feature Intel’s Atom processors.

AMD was also at the event, so it’s likely that we’ll see a Bobcat-based Redstone iteration as well. I talked to AMD’s Phil Hughes about this, and he wouldn’t give any details of HP’s and AMD’s cooperation, other than to reiterate that AMD has joined HP’s Pathfinder Program and is watching Moonshot closely.

This brings me to the second component of Moonshot: Project Pathfinder. Pathfinder is a group of partners that will help HP develop an ecosystem around Moonshot; right now, it includes Red Hat, Calxeda, Canonical, AMD, and ARM. There was no one from Red Hat at the event, so my colleague Bob McMillan couldn’t get anyone to confirm that there will be an ARM port of RHEL, but it seems likely that this is in the works.

Needless to say, Canonical is all over this, and will be aggressively upping support for ARM in the next LTS release of Ubuntu, which is 12.04.

Intel was completely absent from the event and from the list of Pathfinder partners, despite the chipmaker’s commitment to seeing Atom play in this space. I’ll be talking to Intel later this week about what their plans are for the microserver/physicalization segment, though, so stay tuned.

The final key element of Moonshot is HP’s validation program. It will take a while for everyone to get their entire Big Data stack up and running on ARM — we’re still waiting on some of the Java pieces to fall into place on Ubuntu. So HP will provide remote access to the Moonshot hardware so customers and would-be customers can develop software on a live Moonshot system. This should help accelerate the growth of the ecosystem around this platform.

In all, HP’s move is another huge validation of Hadoop, and it’s also a necessary step in the evolution of analytics. By giving slower, batch-based analytics a platform all its own, the company is positioned to serve a red-hot segment that is just now coming into its own. Meanwhile, fast Big Data will always be able to get the number-crunching horsepower that it needs from GPUs and high-performance x86 CPUs.
