Benchmarking for cloud computing
By Carl Brooks, Technology Writer, SearchCloudComputing.com 06-Jul-2010
Benchmarks, tests that run the same procedure against competing products, are ubiquitous just about everywhere you go in the IT world. You can find in-depth performance evaluations of monitors, hard drives, CPUs, chips and embedded components of all types, down to the nanosecond and in great detail.
|"I suspect many CTOs and architects at these cloud providers are thinking about how they can get their CloudHarmony numbers up." |
-- John Treadway, the director of cloud computing at Unisys
Not so in cloud computing; consumers have no way to know exactly what they're getting when they buy, for example, the m1.small instance from Amazon Web Services (AWS). But as independent researchers and analysts try to test the cloud, including publishing detailed performance data on notoriously test-phobic VMware, that situation has begun to change.
"There just isn't quantifiable information out there," said Jason Read, an IT infrastructure and software consultant and ex-IBMer. Read has busily tried to pick up the slack by running what may be the first comprehensive benchmarks for cloud computing out there and publishing them on his Website, CloudHarmony.com.
Emerging cloud benchmarks
In February, Read published his first cloud "speed test," using AWS' Mechanical Turk job brokerage service to pay random users around the world a few cents each to measure data transit speeds. Amazon and IBM showed double the transit capacity of the worst performers, and the results revealed a massive disparity in performance depending on location.
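Read's actual Mechanical Turk harness isn't published, but the core measurement behind such a test is simple to sketch: time the download of a fixed-size payload from a provider and report throughput. The following minimal Python probe is illustrative only; the URL is a hypothetical placeholder, not one of CloudHarmony's real test endpoints.

```python
# Illustrative transit-speed probe: time a fixed-size download and
# report throughput. The endpoint below is a hypothetical placeholder.
import time
import urllib.request

TEST_URL = "https://example-provider.test/speedtest/10MB.bin"  # assumed, not real

def measure_download_mbps(url: str) -> float:
    """Download the payload once and return throughput in megabits per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    elapsed = time.monotonic() - start
    return len(payload) * 8 / elapsed / 1_000_000

if __name__ == "__main__":
    print(f"{measure_download_mbps(TEST_URL):.2f} Mbit/s")
```

Run from many geographic vantage points, as the Mechanical Turk workers effectively were, the same probe would surface exactly the kind of location-dependent disparity Read observed.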
Read has gone on to post comprehensive tests of CPU performance and disk I/O speeds for more than 20 providers side-by-side, the first independently produced metrics of what cloud providers (and some traditional hosters) can offer. Providers almost universally offer a CPU designation and price it according to capacity, but there are no rules about what constitutes a "small" or "large" CPU and, until now, no good way to judge what providers actually deliver.
The benchmarks use a battery of common tools, such as the Phoronix Test Suite for disk I/O and POV-Ray and Super PI for CPU tests. Read said he has collected dozens of benchmark tools into a "cloud benchmark suite."
"We tried all the standard benchmarks that seemed like they'd be relevant," he said.
CloudHarmony.com generated thousands of results on hundreds of different configurations available to cloud consumers, and Read said the process helped him form a nuanced view of how providers operate. For example, he noted that "it's pretty easy to tell who's using local-attached storage and who's using network storage" from disk I/O behavior under some tests, but not others. And as the size of a CPU instance increased, Amazon's I/O performance also increased, which didn't occur in the case of other providers, he said.
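Read doesn't spell out which I/O patterns separate local from network storage, but a simple sequential-write probe shows the general shape of such a test. This sketch is ours, under the assumption that a timed, fsync'd bulk write is one of the patterns where local and network-backed disks diverge; it is not CloudHarmony's methodology.

```python
# Rough sequential-write probe: write a fixed volume of data, fsync,
# and report MB/s. Local and network-backed storage often show
# noticeably different profiles under this kind of pattern.
import os
import time

def sequential_write_mbps(path: str, total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Write total_mb of data in chunk_mb pieces, fsync, and return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the underlying device
    elapsed = time.monotonic() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"Sequential write: {sequential_write_mbps('/tmp/io_probe.bin'):.1f} MB/s")
```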
Many providers do not reveal details about their infrastructure (although some, like Rackspace, share details fairly freely). Read said that users can review his benchmarks for clues, which may help them decide whether one provider or another is better suited to their needs.