SHARE – Rooted In The Past, Looking To The Future

Richard Fichera

I spoke today at SHARE’s biannual conference, giving a talk on emerging data center architectures, x86 servers, and internal clouds. SHARE is an organization that describes itself as “representing over 2,000 of IBM's top enterprise computing customers” – in other words, as one attendee put it, definitely a mainframe geekfest. I saw hundreds of people around my age (think waaay over 30) and was able to swap stories of my long-ago IBM mainframe programming experience (that’s what we called “Software Engineering” back when it was FORTRAN, COBOL, PL/1 and BAL). I was astounded to see that IMS is still a going concern, with sessions on the agenda, and in a show of hands, at least a third of the audience reported still running IMS.

Oh well, dinosaurs may survive in some odd corners of the world, and IMS workloads, while not exciting, are a necessary and familiar facet of legacy applications that have decades of stability and embedded culture behind them…

But wait! Look again at the IMS session right next door to my keynote. It was about connecting zLinux and IMS. Other sessions covered more zLinux, WebSphere and other seemingly new-age topics. Again, my audience confirmed the sea change in the mainframe world. Far more compelling than any claims by IBM reps was the audience reaction to a question about zLinux – more than half indicated that they currently run zLinux, a response much higher than I anticipated. Further discussions after the session with several zLinux users left me with some strong impressions:

Read more

To Get Cloud Economics Right, Think Small, Very, Very Small

James Staten

A startup, which wishes to remain anonymous, is delivering an innovative new business service from an IaaS cloud and most of the time pays next to nothing to do so. This isn't a story about pennies per virtual server per hour - sure, they take advantage of that - but about a nuance of cloud optimization any enterprise can follow: reverse capacity planning.

Most of us are familiar with the black art of capacity planning. You take an application, simulate load against it to approximate the amount of traffic it will face in production, then provision the resources to accommodate that load. With web applications we tend to capacity plan against expected peak, which is very hard to estimate - even if historical data exists. You plan to peak because you don't want to be overloaded and cause clients to wait, hit errors, or turn to your competition for your service.
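The contrast between planning to peak and the startup's "reverse" approach can be sketched in a few lines. This is a toy model with invented numbers (per-server throughput, hourly rate, the traffic profile) - none of it comes from the post; it only illustrates why a spiky workload makes the minimal-footprint strategy so cheap.

```python
# Hypothetical contrast between traditional peak capacity planning and
# "reverse" capacity planning: provision a minimal footprint and scale up
# only while load demands it. All numbers are illustrative assumptions.

REQS_PER_SERVER_HOUR = 10_000   # assumed per-server throughput
HOURLY_RATE = 0.10              # assumed cost per server-hour

def peak_plan_cost(hourly_load, headroom=1.2):
    """Traditional model: provision for expected peak (plus headroom) 24x7."""
    servers = -(-max(hourly_load) * headroom // REQS_PER_SERVER_HOUR)  # ceil
    return servers * HOURLY_RATE * len(hourly_load)

def reverse_plan_cost(hourly_load, floor=1):
    """Reverse model: run the minimum each hour, scaling up only on demand."""
    total = 0.0
    for load in hourly_load:
        servers = max(floor, -(-load // REQS_PER_SERVER_HOUR))  # ceil, >= floor
        total += servers * HOURLY_RATE
    return total

# A spiky day: near-zero traffic except a two-hour burst.
day = [200] * 22 + [50_000, 80_000]
print(peak_plan_cost(day))     # pays for the peak all 24 hours
print(reverse_plan_cost(day))  # pays next to nothing off-peak
```

With this profile the reverse plan costs a small fraction of the peak plan, which is the "next to nothing" effect the startup exploits.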

Read more

Oracle Confirms Solaris Support On Third-Party Hardware

Richard Fichera

Yesterday Oracle announced that both HP and Dell would certify Solaris on their respective lines of x86 servers, and that Oracle would offer support for Solaris on these systems.

Except for IBM, which was conspicuously missing from the announcement, it's hard to find any losers in this arrangement. HP and Dell customers get the assurance of enterprise support for an OS they have a long track record with, on x86 servers from two of the leading server vendors. Oracle gets a huge potential increase in Solaris footprint, as Solaris is now available and supported on the leading server platforms, with accompanying opportunities for support revenue and cross-selling of other Oracle products.

All in all, an expected but still welcome development in the Oracle/Sun saga, and one that should make a lot of people happy.

Does anyone have thoughts on IBM's absence?

Does SPARC Have A Future?

Richard Fichera

I have received a number of inquiries on the future of SPARC and Solaris. Sun’s installed base was already getting somewhat nervous as Sun continued to self-destruct with a series of bad calls by management, marginal financial performance, and the cancellation of its much-touted “Rock” CPU architecture. Coming on top of this long series of negative events, the acquisition by Oracle had much the same effect as throwing a cat into the middle of the Westminster dog show, and Oracle’s public responses were vague enough that they apparently increased rather than decreased customer angst (to be fair, Oracle does not agree with this assessment of customer reaction, and has provided a public list of customers who endorsed the acquisition at http://www.oracle.com/us/sun/030019.htm).

Fast forward to last week at Oracle’s first analyst meeting focused on integrated systems. While much of the content was focused on integrating the software stack and discussions of the new organization, there were some significant nuggets for existing and prospective Solaris and SPARC customers:

Read more

You're Not Yet A Cloud - Get On The Path Today

James Staten

With all the hype and progress happening around cloud computing, we know that our infrastructure and operations professional clients are under pressure to have a cloud answer. This is causing some unproductive behavior and a lot of defensiveness. A growing trend is to declare victory – point to your virtual infrastructure where you can provision a VM in a few seconds and say, “See, I’m a cloud.” But you aren’t, really. And I think you know that.

Being a cloud means more than just using server virtualization. It means you have the people, process, and tools in place to deliver IT on demand via automation, to share resources so you can maximize the utilization of assets, and to enable your company to act nimbly. In our latest Forrester report we document that to be a cloud you need to have:

Read more

The Convergence Of IT Automation Solutions

Jean-Pierre Garbani

We are sometimes so focused on details that we forget to think clearly. Nothing new there; it’s still a story about the trees and the forest. A few years ago, this was clearly the case when I met with one of the first vendors of run book automation. My first thought was that it was very similar to workload automation, but I let myself be convinced that it was so different that it was obviously another product family. Taking a step back last year, I started thinking that in fact these two forms of automation complemented each other. In “Market Overview: Workload Automation, Q3 2009,” I wrote that “executing complex asynchronous applications requires server capacity. The availability of virtualization and server provisioning, one of the key features of today’s IT process [run book] automation, can join forces with workload automation to deliver a seamless execution of tasks, without taxing IT administrators with complex modifications of pre-established plans.”

In June of this year, UC4 announced a new feature of its workload automation solution, by which virtual machines or extensions to virtual machines can be provisioned automatically when the scheduler detects a performance issue (see my June 30 blog post “Just-In-Time Capacity”). This was a first sign of convergence. But there is more.

Automation is about processes. As soon as we can describe a process using a workflow diagram and a description of the operation to be performed by each step of the diagram, we can implement the software to automate it (as we do in any application or other forms of software development). Automation is but a variation of software that uses pre-developed operations adapted to specific process implementations.
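The point above can be made concrete with a minimal sketch: once a process is written down as a workflow of discrete steps, automating it is just software that executes those steps in order. The step names and thresholds below are invented for illustration - they are not from UC4 or any other vendor's product.

```python
# Minimal sketch: a "runbook" is an ordered workflow of operations, and the
# automation engine is just software that walks the workflow. Step names and
# the backlog threshold are hypothetical, purely for illustration.

def check_queue_depth(ctx):
    ctx["backlog"] = 1200          # pretend we polled the job scheduler

def provision_vm(ctx):
    # Just-in-time capacity: add a VM only when the backlog demands it.
    if ctx["backlog"] > 1000:
        ctx["vms"] = ctx.get("vms", 0) + 1

def run_batch_jobs(ctx):
    ctx["done"] = True

# Each box in the workflow diagram becomes one operation in the list.
RUNBOOK = [check_queue_depth, provision_vm, run_batch_jobs]

def execute(workflow):
    ctx = {}                       # shared state passed between steps
    for step in workflow:
        step(ctx)
    return ctx

result = execute(RUNBOOK)
print(result)
```

Whether the steps schedule batch jobs (workload automation) or provision servers (run book automation), the skeleton is the same - which is exactly why the two product families converge.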

Read more

Standardize Interfaces, Not Technology

Jean-Pierre Garbani

Infrastructure diversity is an important component of many IT infrastructures’ complexity. Even at a time when organizations are standardizing on x86 hardware, they often maintain separate support groups by type of operating system. In the meantime, we see even more technology diversity developing in a relentless pursuit of performance and, ironically, simplification. This raises a simple question: Should we, for the sake of operational efficiency, standardize at the lowest possible level, e.g., the computing platform, or at a much higher level, e.g., the user interface?

In recent months, I think a clear answer has been provided by the mainframe world. One key element that actually limits mainframe expansion in some data centers is the perception among higher levels of management that the mainframe is a complex-to-operate and obsolete platform, too radically different from the Linux and Windows operating systems. This comes from the fact that most mainframe management solutions use an explicit interface for configuration and deployment that requires detailed knowledge of mainframe specifics. Mastering it requires skills and experience that unfortunately do not seem to be taught in most computer science classes. Because mainframe education is lacking, the issue seems more acute than in other IT segments. This would eventually condemn the mainframe when all the baby boomers decide that they would rather golf in Florida.

This whole perception was shattered by two major announcements. The most recent is the new IBM zEnterprise platform, which brings together a mix of hardware and software platforms under a single administration interface. In doing this, IBM provides a solution that abstracts away the platforms’ diversity and removes the need for different administrators versed in the vagaries of the different operating systems.
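The "standardize interfaces, not technology" idea is the same abstraction pattern programmers use every day. A toy sketch, with class and method names invented for illustration (they do not correspond to zEnterprise or any real management product):

```python
# Toy illustration of standardizing at the interface level: operators drive
# one management API while platform specifics stay hidden underneath.
# All class and method names here are hypothetical.

from abc import ABC, abstractmethod

class PlatformAdmin(ABC):
    """One administration interface, regardless of what runs underneath."""
    @abstractmethod
    def deploy(self, app: str) -> str: ...

class LinuxAdmin(PlatformAdmin):
    def deploy(self, app: str) -> str:
        return f"deployed {app} on x86 Linux"

class MainframeAdmin(PlatformAdmin):
    def deploy(self, app: str) -> str:
        return f"deployed {app} on z/OS"

def roll_out(app: str, platforms: list) -> list:
    # The operator's workflow is identical for every platform.
    return [p.deploy(app) for p in platforms]

print(roll_out("billing", [LinuxAdmin(), MainframeAdmin()]))
```

One support group working against one interface, rather than one group per operating system - which is the operational-efficiency argument made above.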

Read more

How Much Infrastructure Integration Should You Allow?

James Staten

There’s an old adage that the worst running car in the neighborhood belongs to the auto mechanic. Why? Because they like to tinker with it. We as IT pros love building and tinkering with things, too, and at one point we all built our own PC and it probably ran about as well as the mechanic's car down the street.

While the mechanic’s car never ran that well, it wasn’t a reflection on the quality of his work on your car, because he drew the line between what he could tinker with and what could sink him as a professional (well, most of the time). IT pros do the same thing. We try not to tinker with computers that will affect our clients or risk the service level agreements we have with them. Yet there is a tinkerer’s mentality in all of us. It is evident in our data centers, where the desire to configure our own infrastructure and build out our own best-of-breed solutions has resulted in an overly complex mishmash of technologies, products, and management tools. There’s lots of history behind this mess and lots of good intentions, but nearly everyone wants a cleaner way forward.

In the vendors’ minds, this way forward is clearly one that has more of their stuff inside, and the latest thinking here is the new converged infrastructure solutions they are marketing, such as HP’s BladeSystem Matrix and IBM’s CloudBurst. Each of these products is the vendor’s vision of a cleaner, more integrated, and more efficient data center. And there’s a lot of truth in what they have engineered. The big question is whether you should buy into this vision.

Read more

As Cloud Platforms Battle For Credibility, OpenStack Is Pretty Solid

James Staten

It seems every few weeks yet another company announces a cloud computing infrastructure platform. I'm not talking about public clouds but the underlying software that can turn a virtualized infrastructure into an Infrastructure as a Service (IaaS) — whether public or private.

Read more

VMware Embraces Per-VM Pricing - About Time

James Staten

VMware today released an incremental upgrade to its core vSphere platform and took the opportunity to do some product repackaging and pricing actions - the latter being a big win for enterprise customers. The vSphere 4.1 enhancements focused on scalability to accommodate larger and larger virtual pools. The number of VMs per pool and number of hosts and VMs per instance of vCenter have been ratcheted up significantly, which will simplify large environments. The new network and storage I/O features and new memory compression and VMotion improvements will help customers pushing the upper limits of resource utilization. Storage vendors will laud the changes to vStorage too, which finally ends the conflict between what storage functions VMware performs versus what arrays do natively.

The company also telegraphed the end of life for ESX in favor of the more modern ESXi hypervisor architecture. 

But for the majority of VMware shops, the pricing changes are perhaps the most significant. It's been a longstanding pain that in order to use some of the key value-add management features such as Site Recovery Manager and AppSpeed, you had to license them across the full host even if you only wanted to apply the feature to a few VMs. This led to some unnatural behavior, such as grouping business-critical applications on the same host - cost optimization trumping availability best practices. Thankfully, that has now been corrected.
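The arithmetic behind that unnatural behavior is simple to sketch. The prices below are made up for illustration - they are not VMware's actual list prices - but they show why per-host licensing penalized spreading critical VMs across hosts while per-VM licensing does not.

```python
# Hypothetical arithmetic behind the per-VM pricing change. License prices
# are invented for illustration; VMware's actual pricing differs.

PER_HOST_LICENSE = 3000   # assumed: license a feature for an entire host
PER_VM_LICENSE = 200      # assumed: license the same feature per VM

def per_host_cost(vms_per_host):
    # Old model: any host running even one protected VM needs a full license.
    return PER_HOST_LICENSE * len(vms_per_host)

def per_vm_cost(vms_per_host):
    # New model: pay only for the VMs that actually use the feature.
    return PER_VM_LICENSE * sum(vms_per_host)

# Four hosts, each with two critical VMs spread out for availability.
spread = [2, 2, 2, 2]
print(per_host_cost(spread))  # 12000: pressure to pile critical VMs on one host
print(per_vm_cost(spread))    # 1600: spreading them costs nothing extra
```

Under the old model, consolidating all eight VMs onto one host would have cut the license bill by three quarters - exactly the availability-hostile incentive the new pricing removes.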

Read more