Oracle Confirms Solaris Support On Third-Party Hardware

Yesterday Oracle announced that both HP and Dell would certify Solaris on their respective lines of x86 servers, and that Oracle would offer support for Solaris on these systems.

Except for IBM, which was conspicuously missing from the announcement, it's hard to find any losers in this arrangement. HP and Dell customers get the assurance of enterprise support for an OS they have a long track record with, on x86 servers from two of the leading server vendors. Oracle gets a huge potential increase in Solaris footprint, as Solaris is now available and supported on the leading server platforms, with accompanying opportunities for support revenue and cross-selling of other Oracle products.

All in all, an expected but still welcome development in the Oracle/Sun saga, and one that should make a lot of people happy.

Does anyone have thoughts on IBM's absence?

Does SPARC Have A Future?

I have received a number of inquiries on the future of SPARC and Solaris. Sun’s installed base was already getting somewhat nervous as Sun continued to self-destruct with a series of bad calls by management, marginal financial performance, and the cancellation of its much-touted “Rock” CPU architecture. Coming on top of this long series of negative events, the acquisition by Oracle had much the same effect as throwing a cat into the middle of the Westminster dog show, and Oracle’s public responses were vague enough that they apparently increased rather than decreased customer angst (to be fair, Oracle does not agree with this assessment of customer reaction, and has provided a public list of customers who endorsed the acquisition at http://www.oracle.com/us/sun/030019.htm).

Fast forward to last week at Oracle’s first analyst meeting focused on integrated systems. While much of the content focused on integrating the software stack and on the new organization, there were some significant nuggets for existing and prospective Solaris and SPARC customers:

Read more

You're Not Yet A Cloud - Get On The Path Today

With all the hype and progress happening around cloud computing, we know that our infrastructure and operations professional clients are under pressure to have a cloud answer. This is causing some unproductive behavior and a lot of defensiveness. A growing trend is to declare victory – point to your virtual infrastructure where you can provision a VM in a few seconds and say, “See, I’m a cloud.” But you aren’t, really. And I think you know that.

Being a cloud means more than just using server virtualization. It means you have the people, processes, and tools in place to deliver IT on demand via automation, to share resources so you can maximize the utilization of your assets, and to enable your company to act nimbly. In our latest Forrester report we document that to be a cloud you need to have:

Read more

The Convergence Of IT Automation Solutions

We are sometimes so focused on details that we forget to think clearly. Nothing new there; it’s still a story about trees and forest. A few years ago, this was clearly the case when I met with one of the first vendors of run book automation. My first thought was that it was very similar to workload automation, but I let myself be convinced that it was so different that it was obviously another product family. Taking a step back last year, I started thinking that in fact these two forms of automation complemented each other. In “Market Overview: Workload Automation, Q3 2009,” I wrote that “executing complex asynchronous applications requires server capacity. The availability of virtualization and server provisioning, one of the key features of today’s IT process [run book] automation, can join forces with workload automation to deliver a seamless execution of tasks, without taxing IT administrators with complex modifications of pre-established plans.”

In June of this year, UC4 announced a new feature of its workload automation solution, by which virtual machines or extensions to virtual machines can be provisioned automatically when the scheduler detects a performance issue (see my June 30 blog post “Just-In-Time Capacity”). This was a first sign of convergence. But there is more.

Automation is about processes. As soon as we can describe a process using a workflow diagram and a description of the operation to be performed by each step of the diagram, we can implement the software to automate it (as we do in any application or other forms of software development). Automation is but a variation of software that uses pre-developed operations adapted to specific process implementations.
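To make that idea concrete, here is a minimal sketch of a process expressed as a workflow: each step pairs a name with a pre-developed operation, and a generic engine walks the steps in order. All names here (Step, Workflow, detect_load, provision_vm) are hypothetical illustrations of the pattern, not any vendor's actual API.

```python
# A workflow is a sequence of steps; each step is a named, reusable
# operation acting on a shared context. The engine just walks the diagram.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    operation: Callable[[dict], None]  # acts on a shared context dict

class Workflow:
    def __init__(self, steps: list[Step]):
        self.steps = steps

    def run(self, context: dict) -> None:
        for step in self.steps:
            print(f"running step: {step.name}")
            step.operation(context)

# Pre-developed operations adapted to one specific process implementation
def detect_load(ctx: dict) -> None:
    ctx["overloaded"] = ctx.get("queue_depth", 0) > 100

def provision_vm(ctx: dict) -> None:
    if ctx["overloaded"]:
        ctx["vms"] = ctx.get("vms", 1) + 1  # stand-in for a real provisioning call

Workflow([Step("detect load", detect_load),
          Step("provision VM", provision_vm)]).run({"queue_depth": 250})
```

This is also roughly the shape of the UC4-style convergence described above: a workload scheduler detecting a performance condition and triggering a provisioning operation as just another step in the plan.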

Read more

Levittown Data Centers All The Rage This Month

In the late 1940s, William Levitt came up with the idea of pre-fabricated homes that could be mass-produced and shipped to suburbs across the US, providing cheap and efficient housing. Towns built using these pre-fabricated houses were dubbed Levittowns, and are now known for their drab monotony. In my opinion, pre-fabricated homes were a flop, but the idea of pre-fabricated “Levittown-esque” data centers is brilliant!

And I’m not alone: HP and Colt are just two of the latest providers to jump on the pre-fabricated data center bandwagon this month. Other vendors such as Digital Realty Trust, APC, and IBM have also been offering similar solutions for a while now, but those solutions appear to be a bit more custom-made than the recent announcements by HP and Colt.

The pre-fabricated data center modules are built out in around 750-800 kW units and are fitted together like Legos (HP’s even looks like Legos!). Many modular data centers can be linked together (and Colt’s can also stack vertically) to build out a much larger space.

Why should you care about these pre-fabricated data center offerings? Well, they make the whole process of building your own data center much cheaper and faster. Some of the benefits I can see include:

Read more


Standardize Interfaces, Not Technology

Infrastructure diversity is an important contributor to the complexity of many IT environments. Even at a time when organizations are standardizing on x86 hardware, they often maintain separate support groups by type of operating system. In the meantime, we see even more technology diversity developing in a relentless pursuit of performance and, ironically, simplification. This raises a simple question: should we, for the sake of operational efficiency, standardize at the lowest possible level, e.g., the computing platform, or at a much higher level, e.g., the user interface?

In recent months, the mainframe world has, I think, provided a clear answer. One key element that limits mainframe expansion in some data centers is the perception among higher levels of management that the mainframe is a complex-to-operate and obsolete platform, too radically different from the Linux and Windows operating systems. This perception stems from the fact that most mainframe management solutions use an explicit interface for configuration and deployment that requires detailed knowledge of the mainframe's specifics. Mastering it requires skills and experience that unfortunately do not seem to be taught in most computer science classes. Because mainframe education is lacking, the issue is more acute than in other IT segments. This would eventually condemn the mainframe when all the baby boomers decide that they would rather golf in Florida.

 This whole perception was shattered to pieces by two major announcements. The most recent one is the new IBM zEnterprise platform, which regroups a mix of hardware and software platforms under a single administration interface. In doing this, IBM provides a solution that actually abstracts the platforms’ diversity and removes the need for different administrators versed in the vagaries of the different operating systems.
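As a rough illustration of standardizing the interface rather than the technology, here is a minimal sketch: one administration interface that operators learn once, with platform-specific implementations hidden behind it. All names (PlatformAdmin, MainframeAdmin, LinuxAdmin) are hypothetical and are not IBM's actual zEnterprise API.

```python
# One standardized interface; the platform-specific details live in the
# implementations, invisible to the operator.
from abc import ABC, abstractmethod

class PlatformAdmin(ABC):
    """The single interface operators learn, whatever runs underneath."""
    @abstractmethod
    def deploy(self, workload: str) -> None: ...
    @abstractmethod
    def status(self) -> str: ...

class MainframeAdmin(PlatformAdmin):
    def deploy(self, workload: str) -> None:
        print(f"[z/OS] deploying {workload}")  # platform vagaries hidden here

    def status(self) -> str:
        return "z/OS: ok"

class LinuxAdmin(PlatformAdmin):
    def deploy(self, workload: str) -> None:
        print(f"[Linux] deploying {workload}")

    def status(self) -> str:
        return "Linux: ok"

# The operator's view is identical regardless of the platform:
for admin in (MainframeAdmin(), LinuxAdmin()):
    admin.deploy("payroll-batch")
    print(admin.status())
```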

Read more

How Much Infrastructure Integration Should You Allow?

There’s an old adage that the worst-running car in the neighborhood belongs to the auto mechanic. Why? Because he likes to tinker with it. We as IT pros love building and tinkering with things, too; at one point we all built our own PC, and it probably ran about as well as the mechanic’s car down the street.

While the mechanic’s car never ran that well, it wasn’t a reflection on the quality of his work on your car, because he drew the line between what he could tinker with and what could sink him as a professional (well, most of the time). IT pros do the same thing. We try not to tinker with computers in ways that will affect our clients or put our service level agreements with them at risk. Yet there is a tinkerer’s mentality in all of us. It is evidenced in our data centers, where the desire to configure our own infrastructure and build out our own best-of-breed solutions has resulted in an overly complex mishmash of technologies, products, and management tools. There’s a lot of history behind this mess and a lot of good intentions, but nearly everyone wants a cleaner way forward.

In the vendors’ minds, this way forward is clearly one that has more of their stuff inside, and the latest thinking here is the new converged infrastructure solutions they are marketing, such as HP’s BladeSystem Matrix and IBM’s CloudBurst. Each of these products is the vendor’s vision of a cleaner, more integrated, and more efficient data center. And there’s a lot of truth in what they have engineered. The big question is whether you should buy into this vision.

Read more

Dell Acquires Data Deduplication Vendor Ocarina Networks

On July 19th, Dell announced their intention to acquire data deduplication vendor Ocarina Networks. It’s no surprise that somebody bought Ocarina: deduplication is one of the hottest technology areas in storage today, with every vendor scrambling to offer tools that can help users contain the massive growth of their data and storage footprints. Data deduplication is a form of virtualization that breaks data into chunks, compares them, and eliminates repeated chunks, replacing each repeat with a pointer to the first-seen copy.
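As a rough illustration of that chunk-and-pointer idea, here is a minimal sketch of fixed-size, hash-based deduplication. Real products (including Ocarina's) typically use variable-size or content-aware chunking and stronger collision handling; every name here is illustrative, not any vendor's implementation.

```python
# Split data into chunks, hash each chunk, store each unique chunk once,
# and replace repeats with a pointer (here, the hash) to the stored copy.
import hashlib

CHUNK_SIZE = 4096  # bytes; fixed-size chunking keeps the sketch simple

def dedupe(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Return a list of chunk pointers; unique chunks go into `store`."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:      # first time we see this chunk...
            store[key] = chunk    # ...keep one physical copy
        pointers.append(key)      # repeats cost only a pointer
    return pointers

def rehydrate(pointers: list[str], store: dict[str, bytes]) -> bytes:
    """Reassemble the original data from pointers and the chunk store."""
    return b"".join(store[p] for p in pointers)

store: dict[str, bytes] = {}
original = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
ptrs = dedupe(original, store)
assert rehydrate(ptrs, store) == original
print(f"{len(ptrs)} logical chunks, {len(store)} stored")  # 4 logical, 2 stored
```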

NetApp has led the charge in primary storage with their complimentary NetApp deduplication feature within their Data ONTAP operating system. EMC made a big move into dedupe with their acquisition of Data Domain, whose products are mostly used for data backup. So it’s no wonder that competitors of NetApp and EMC are eager to get something going in this hot space, and no wonder that Ocarina was an obvious target, with advanced deduplication algorithms that can eliminate redundant data even in image files with similar coloring, often seen as a particularly tricky file type for deduping. Also, Ocarina’s technology is delivered as an appliance, which means it can be applied across a whole environment rather than only to data within a single storage system. This type of functionality is likely to be core to enterprise storage efficiency strategies going forward, and it is probably better served up by a large vendor that sells its own storage than as an add-on from an unknown quantity.

Read more


It’s Time For I&O To Return To A Growth Agenda

If you’re anything like me, you’re probably sweating your way through a pretty hot summer. We are, after all, on pace for the hottest year on record. And unfortunately things are going to get worse. Why? Because it’s that time of year again: Budget season. That’s right – it’s time to start thinking about 2011 and sweating through all the infrastructure and operations projects that need investment.

Fortunately, this year will be different.

I just wrapped up a report looking at I&O budgets heading into 2011, and the outlook is quite positive (you can find a copy of the report here). In fact, the biggest takeaway for me is that IT leaders tell us they’ll finally break the age-old MOOSE stalemate: setting aside 70% of the budget for maintenance of organization, systems, and equipment (i.e., MOOSE, or “keeping the lights on”) and 30% for new initiatives (i.e., “innovation”). This year we expect to see only half the budget dedicated to the MOOSE, the usual 30% going to new initiatives, and a surprising 20% or so set aside for business expansion efforts.

So what does this mean for you? Today’s I&O executives must:

Read more

As Cloud Platforms Battle For Credibility, OpenStack Is Pretty Solid

It seems every few weeks yet another company announces a cloud computing infrastructure platform. I'm not talking about public clouds, but the underlying software that can turn a virtualized infrastructure into an Infrastructure as a Service (IaaS) — whether public or private.

Read more