Over the past months server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with its recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores per socket, these new processors can bring up to 80 cores to bear on large problems such as database, ERP and other enterprise applications.
The performance results on the SAP SD 2-Tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results scale almost exactly with the product of core count × clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
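The scaling claim is easy to sanity-check: divide the new benchmark score by the old one and compare against the ratio of core count × clock speed. A minimal sketch is below; the benchmark scores are from the results above, but the core counts and clock speeds are illustrative assumptions, not confirmed specifications of the benchmarked systems.

```python
# Sanity check: does the SAP SD result scale with core count x clock speed?
# Scores (SD users) are from the published results; the core counts and
# clock speeds below are assumed illustrative values, not confirmed specs.
new_score, old_score = 25160, 18635

new_cores, new_clock_ghz = 80, 2.4    # assumed: 8 sockets x 10 cores
old_cores, old_clock_ghz = 64, 2.26   # assumed: 8 sockets x 8 cores

score_ratio = new_score / old_score
capacity_ratio = (new_cores * new_clock_ghz) / (old_cores * old_clock_ghz)

print(f"benchmark ratio:    {score_ratio:.2f}")   # ~1.35
print(f"core x clock ratio: {capacity_ratio:.2f}")
```

If the two ratios track each other closely, as they do here, the platform is not yet the bottleneck; a widening gap would suggest the system or OS is starting to hit scalability limits.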
Key takeaways for I&O professionals include:
Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on Intel's latest high-performance CPUs. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.
Do you keep every single light on in your house even though you are fast asleep in your bedroom?
Of course you don't. That would be an abject waste. Then why do most firms deploy peak-capacity infrastructure resources that run around the clock even though their applications have distinct usage patterns? Sometimes the applications are sleeping (low usage). At other times, they are huffing and puffing under the stampede of glorious customers. The answer: because they have no choice. Application developers and infrastructure operations pros collaborate (call it DevOps if you want) to determine the infrastructure that will be needed to meet peak demand.
One server, two server, three server, four.
The business is happy when the web traffic pedal is to the floor.
Is your cloud strategy centered on saving money or fueling revenue growth? Where you land on this question could say a lot about your experience level with cloud services and what guidance you should be giving to your application developers and infrastructure & operations teams. According to our research, the majority of CIOs would vote for the savings, seeing cloud computing as an evolution of outsourcing and hosting that can drive down capital and operations expenses. In some cases this is correct, but in many the opposite will result: using the cloud wrong may raise your costs.
But this isn’t a debate worth having, because it’s the exploration of the use cases where cloud does save you money that bears the real fruit. And it’s through this experience that you can start shifting your thinking from cost savings to revenue opportunities. Forrester surveys show that the top reason developers (and the empowered non-developers in your business units) tap into cloud services is to rapidly deploy new services and capabilities. And the drivers behind these efforts? New services, better customer experience and improved productivity. Translation: revenues and profits.
If the cloud is bringing new money in the door, does it really matter if it’s the cheaper solution? Not at first. But over time, using cloud as a revenue engine doesn’t necessarily mean high margins on that revenue. That’s where your experience with the cost-advantaged uses of cloud comes in.
Forrester took more than a thousand inquiries from clients on cloud computing in 2010, and one of the themes that kept coming up was about which applications they should plan to migrate to infrastructure-as-a-service (IaaS) cloud platforms. The answer: Wrong question.
What enterprises should really be thinking about is how they can take advantage of the economic model presented by cloud platforms with new applications. In fact, the majority of applications we find running on the leading cloud platforms aren't ones that migrated from the data center but ones that were built for the cloud.
A lot of the interest in migrating applications to cloud platforms stems from the belief that clouds are cheaper and therefore moving services to them is a good cost-saving tactic. And sure, public clouds bring economies of scale, shared across multiple customers, that nearly no single enterprise can achieve. But those cost savings aren't simply passed down. Each public cloud provider is in the profit-making business and keeps a share of those savings as margin.
For enterprises to make the most of a public cloud platform, they need to ensure that their applications match the economic model presented by public clouds. Otherwise, the cloud may actually cost you more. In our series of reports, "Justify Your Cloud Investment," we detail the sweet-spot uses of public cloud platforms that fit these new economics and can help guide you toward these cost advantages.
This week Amazon Web Services announced a new pricing tier for its Elastic Compute Cloud (EC2) service and in doing so has differentiated its offering even further. At first blush the free tier sounds like a free trial, which isn't anything new in cloud computing. True, the free tier is time-limited (you get 12 months) and capacity-limited along multiple dimensions. But it's also a new pricing band. And for three of its services, SimpleDB, Simple Queue Service (SQS), and Simple Notification Service (SNS), the free tier is indefinite. Look for Amazon to lift the 12-month limit next October, because the free tier will drive revenues for AWS long term. Here's why:
A few weeks back I posted a story about how one of our clients has been turning cloud economics to their advantage by flipping the concept of capacity planning on its head. Their strategy was to concentrate not on how much capacity they would need when their application got hot, but on how they could reduce its capacity footprint when it wasn't. But as small as they could get it, they couldn't shrink it to the point of incurring no cost at all; they were still left with at least a storage bill and a caching bill. Now, with the free tier, they can achieve a no-cost footprint.
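The economics of reverse capacity planning can be sketched in a few lines. The model below is a simplification under assumed, placeholder pricing (the rates and allowance are hypothetical, not actual AWS figures): once a free-tier allowance offsets the first block of usage, an idle footprint that fits inside the allowance bills at zero.

```python
def monthly_cost(usage_hours, hourly_rate, free_tier_hours=0):
    """Billable monthly cost when a free-tier allowance offsets usage.

    All rates and allowances here are hypothetical placeholders for
    illustration; they are not actual AWS pricing.
    """
    billable_hours = max(0, usage_hours - free_tier_hours)
    return billable_hours * hourly_rate

# Peak-provisioned thinking: a server running around the clock, all month.
always_on = monthly_cost(usage_hours=720, hourly_rate=0.10)

# Reverse capacity planning with a free tier: shrink the idle footprint
# until it fits entirely inside the allowance, and the bill goes to zero.
idle_footprint = monthly_cost(usage_hours=720, hourly_rate=0.10,
                              free_tier_hours=750)

print(always_on)       # 72.0
print(idle_footprint)  # 0.0
```

The point of the sketch is the shape of the curve, not the numbers: any usage below the allowance costs nothing, so the smaller you can make the quiet-time footprint, the closer your baseline bill gets to zero.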
On September 15th between 11am and 12pm EDT, Forrester held an interactive TweetJam on the future of cloud computing, including Forrester analysts Jennifer Belissent, Mike Cansfield, Pascal Matzke, Stefan Ried, Peter O’Neill, myself and many other experts and interested participants. Using the hashtag #cloudjam (use this tag to search for the results in Twitter), we asked a variety of questions.
We had a great turnout, with more than 400 tweets (at last count) from over 40 unique tweeters. A high-level overview of the key words and topics mentioned during the TweetJam is visualized in the attached graphic using the ManyEyes data visualization tool.
Below you will find a short summary of some key takeaways and quotes from the TweetJam:
1. What really is cloud computing? Let’s get rid of 'cloud washing!'
A startup, which wishes to remain anonymous, is delivering an innovative new business service from an IaaS cloud and most of the time pays next to nothing to do this. This isn't a story about pennies per virtual server per hour – sure, they take advantage of that – but about a nuance of cloud optimization any enterprise can follow: reverse capacity planning.