Is Google Powering Its Data Centers With Wind? No, And 2010 Won't Be The Breakthrough Year For Clean Energy In The DC Either

Doug Washburn

As green IT plans persist through 2010, I'm starting to receive questions from IT infrastructure and operations professionals — particularly data center managers — about the use of cleaner energy sources (e.g. wind, solar, fuel cells, hydro) to power their data center facilities. So when Google recently announced its purchase of 114 megawatts of wind power capacity for the next 20 years from a wind farm in Iowa, I got excited, hopeful of a credible example I could refer to.

But as it turns out, Google will not be using this wind energy to power its data centers... yet. Despite Google stating that the wind capacity is enough to power several data centers, its Senior Vice President of Operations, Urs Hoelzle, explains, “We cannot use this energy directly, so we're reselling it back to the grid in the regional spot market.” I confirmed this in electronic conversations with two other industry insiders, Martin LaMonica (CNET News) and Lora Kolodny (GreenTech), who also covered the announcement.

And it's unfortunate, since Google's $600 million data center in Council Bluffs, Iowa could likely benefit from the greener, and possibly cheaper, wind energy. But Iowa is a large state, and distribution of the wind energy is likely the issue: the Council Bluffs data center appears to be well over 100 miles from the wind farms, several counties away.

Read more

AP's API Empowers New Media Through AWS And Azure

James Staten


It's no secret that traditional news organizations are struggling to stay relevant in an age where an always-connected generation has little use for newspaper subscriptions and nightly news programs. The Associated Press (AP), the world's oldest and largest news cooperative, is one such organization: it has felt the threat this paradigm shift carries and thus the need to intensify its innovation efforts. However, like many organizations today, its in-house IT Ops and business processes weren't versatile enough for the kind of innovation needed.

"The business had identified a lot of new opportunities we just weren't able to pursue because our traditional syndication services couldn't support them," said Alan Wintroub, director of development, enterprise application services at the AP, "but the bottom line is that we can't afford not to try this."

To make AP content easily accessible to emerging Internet services, social networks, and mobile applications, the nearly 164-year-old news syndicate needed to provide new means of integration that let these customers serve themselves and do more with the content — mash it up with other content, repackage it, reformat it, slice it up, and deliver it in ways AP never could have thought of — or certainly never originally intended.
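To make the self-service idea concrete, here's a minimal sketch in Python of what "do more with the content" can look like from a customer's side: pull a story feed and reslice it for a mobile channel. The endpoint, parameters, and field names below are hypothetical placeholders, not AP's actual API.

```python
# Hypothetical example: pulling a JSON story feed from a self-service content
# API and reformatting it for a mobile channel. The URL, parameters, and field
# names are illustrative assumptions, not AP's actual interface.
import json
import urllib.request

FEED_URL = "https://api.example-newswire.com/v1/stories?topic=technology&format=json"

def fetch_stories(url: str) -> list[dict]:
    """Download the feed and return the list of story records."""
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return payload.get("stories", [])

def to_mobile_cards(stories: list[dict]) -> list[dict]:
    """Slice each story down to the few fields a mobile card needs."""
    return [
        {
            "headline": s["headline"],
            "summary": s.get("summary", "")[:140],  # trim for small screens
            "link": s["url"],
        }
        for s in stories
    ]

if __name__ == "__main__":
    cards = to_mobile_cards(fetch_stories(FEED_URL))
    print(json.dumps(cards, indent=2))
```

The point is less the code than the model: once the content is self-service, the repackaging happens on the customer's side, in ways the syndicate never has to anticipate.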

Read more

AMERICAN SYSTEMS Uses BSM To Shift Culture And Advance Business Services And Security

Eveline Oehrlich

Recently I published a business service management (BSM) case study on AMERICAN SYSTEMS. If you're interested in BSM, I highly recommend reading through this report. Although there are many known business alignment success stories, AMERICAN SYSTEMS takes business alignment a step further by aligning IT elements in a way that truly supports its business goals. AMERICAN SYSTEMS sought to improve the delivery and quality of its services to the business, something it was able to accomplish by introducing ITIL and COBIT standards and deploying integrated data center management software. In all, it was able to gain situational awareness, preempt and respond to issues more efficiently, and better protect information assets.

I've outlined a few key highlights from this report below:

First off, AMERICAN SYSTEMS is a government IT innovator that provides engineering, technical, and managed services to government customers. To meet its clients' constant demand for new and better services, the company decided to shift from a reactive to a proactive way of managing and operating.

When they set out to make changes they outlined several goals:

Read more

SHARE – Rooted In The Past, Looking To The Future

Richard Fichera

I spoke today at SHARE’s biannual conference, giving a talk on emerging data center architectures, x86 servers, and internal clouds. SHARE is an organization that describes itself as “representing over 2,000 of IBM's top enterprise computing customers.” In other words, as one attendee described it, definitely a mainframe geekfest. I saw hundreds of people around my age (think waaay over 30) and was able to swap stories of my long-ago IBM mainframe programming experience (that’s what we called “Software Engineering” back when it was FORTRAN, COBOL, PL/1, and BAL). I was astounded to see that IMS was still a going thing, with sessions on the agenda; in a show of hands, at least a third of the audience reported still running IMS.

Oh well, dinosaurs may survive in some odd corners of the world, and IMS workloads, while not exciting, are a necessary and familiar facet of legacy applications that have decades of stability and embedded culture behind them…

But wait! Look again at the IMS session right next door to my keynote. It was about connecting zLinux and IMS. Other sessions included more zLinux, WebSphere, and other seemingly new-age topics. Again, my audience confirmed the sea change in the mainframe world. Far more compelling than any claims by IBM reps was the audience reaction to a question about zLinux – more than half of them indicated that they currently run zLinux, a far higher share than I anticipated. Further discussions after the session with several zLinux users left me with some strong impressions:

Read more

To Get Cloud Economics Right, Think Small, Very, Very Small

James Staten

A startup, which wishes to remain anonymous, is delivering an innovative new business service from an IaaS cloud and most of the time pays next to nothing to do so. This isn't a story about pennies per virtual server per hour - sure, they take advantage of that - but about a nuance of cloud optimization any enterprise can follow: reverse capacity planning.

Most of us are familiar with the black art of capacity planning. You take an application, simulate load against it to approximate the traffic it will face in production, and then provision the resources to accommodate that load. With web applications we tend to capacity plan against expected peak, which is very hard to estimate - even if historical data exists. You plan to peak because you don't want to be overloaded and cause clients to wait, hit errors, or take their business to your competition.
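To make the economics concrete, here's a back-of-the-envelope sketch comparing the two approaches; the per-instance capacity, hourly cost, and load profile are all made-up assumptions for illustration.

```python
# Illustrative comparison: provisioning for peak all day vs. paying only for
# the capacity each hour's load actually requires. All figures are assumptions.
import math

REQUESTS_PER_INSTANCE = 200   # assumed requests/sec one instance can serve
HOURLY_INSTANCE_COST = 0.10   # assumed cost per instance-hour (USD)

def instances_needed(load_rps: float) -> int:
    """Smallest number of instances that can serve the given load."""
    return max(1, math.ceil(load_rps / REQUESTS_PER_INSTANCE))

# A day of hourly load: mostly near-idle, with one short spike.
hourly_load = [20] * 22 + [3000, 50]

peak_plan_cost = instances_needed(max(hourly_load)) * HOURLY_INSTANCE_COST * 24
on_demand_cost = sum(instances_needed(rps) * HOURLY_INSTANCE_COST for rps in hourly_load)

print(f"Provisioned for peak all day: ${peak_plan_cost:.2f}")
print(f"Scaled to each hour's load:   ${on_demand_cost:.2f}")
```

With numbers like these, the peak-provisioned bill comes out at roughly ten times the pay-for-what-you-use bill, which is exactly the gap reverse capacity planning goes after.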

Read more

Oracle Confirms Solaris Support On Third-Party Hardware

Richard Fichera

Yesterday Oracle announced that both HP and Dell would certify Solaris on their respective lines of x86 servers, and that Oracle would offer support for Solaris on these systems.

Except for IBM, which was conspicuously missing from the announcement, it's hard to find any losers in this arrangement. HP and Dell customers get the assurance of enterprise support for an OS they have a long track record with, on x86 servers from two of the leading server vendors. Oracle gets a huge potential increase in Solaris footprint, as Solaris is now available and supported on the leading server platforms, with accompanying opportunities for support revenue and cross-selling of other Oracle products.

All in all, an expected but still welcome development in the Oracle/Sun saga, and one that should make a lot of people happy.

Does anyone have thoughts on IBM's absence?

Does SPARC Have A Future?

Richard Fichera

I have received a number of inquiries on the future of SPARC and Solaris. Sun’s installed base was already getting somewhat nervous as Sun continued to self-destruct with a series of bad calls by management, marginal financial performance, and the cancellation of its much-touted “Rock” CPU architecture. Coming on top of this long series of negative events, the acquisition by Oracle had much the same effect as throwing a cat into the middle of the Westminster dog show, and Oracle’s public responses were vague enough that they apparently increased rather than decreased customer angst (to be fair, Oracle does not agree with this assessment of customer reaction, and has provided a public list of customers who endorsed the acquisition at http://www.oracle.com/us/sun/030019.htm).

Fast forward to last week at Oracle’s first analyst meeting focused on integrated systems. While much of the content was focused on integrating the software stack and discussions of the new organization, there were some significant nuggets for existing and prospective Solaris and SPARC customers:

Read more

You're Not Yet A Cloud - Get On The Path Today

James Staten

With all the hype and progress happening around cloud computing, we know that our infrastructure and operations professional clients are under pressure to have a cloud answer. This is causing some unproductive behavior and a lot of defensiveness. A growing trend is to declare victory – point to your virtual infrastructure where you can provision a VM in a few seconds and say, “See, I’m a cloud.” But you aren’t, really. And I think you know that.

Being a cloud means more than just using server virtualization. It means you have the people, process, and tools in place to deliver IT on demand via automation, share resources so you can maximize the utilization of assets, and enable your company to act nimbly. In our latest Forrester report we document that to be a cloud you need to have:

Read more

The Convergence Of IT Automation Solutions

Jean-Pierre Garbani

We are sometimes so focused on details that we forget to think clearly. Nothing new there; it’s still a story about trees and forest. A few years ago, this was clearly the case when I met with one of the first vendors of run book automation. My first thought was that it was very similar to workload automation, but I let myself be convinced that it was so different that it was obviously another product family. Taking a step back last year, I started thinking that in fact these two forms of automation complement each other. In “Market Overview: Workload Automation, Q3 2009,” I wrote that “executing complex asynchronous applications requires server capacity. The availability of virtualization and server provisioning, one of the key features of today’s IT process [run book] automation, can join forces with workload automation to deliver a seamless execution of tasks, without taxing IT administrators with complex modifications of pre-established plans.”

In June of this year, UC4 announced a new feature of its workload automation solution, by which virtual machines or extensions to virtual machines can be provisioned automatically when the scheduler detects a performance issue (see my June 30 blog post “Just-In-Time Capacity”). This was a first sign of convergence. But there is more.

Automation is about processes. As soon as we can describe a process using a workflow diagram and a description of the operation to be performed by each step of the diagram, we can implement the software to automate it (as we do in any application or other forms of software development). Automation is but a variation of software that uses pre-developed operations adapted to specific process implementations.
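As a minimal sketch of that idea (generic Python, not any specific run book or workload automation product), a process reduces to an ordered set of steps, each bound to a pre-developed operation; converging the two automation families largely means mixing provisioning steps and workload steps in the same flow.

```python
# Minimal sketch: a process as a workflow of pre-developed operations.
# Step names and operations are generic illustrations, not a vendor's product.

def provision_vm(ctx):
    """Pretend to provision capacity (the run book automation side)."""
    ctx["vm"] = "vm-42"
    print(f"Provisioned {ctx['vm']}")

def run_batch_job(ctx):
    """Pretend to execute a scheduled job (the workload automation side)."""
    print(f"Running nightly batch on {ctx['vm']}")

def release_vm(ctx):
    """Return the capacity when the work is done."""
    print(f"Releasing {ctx['vm']}")

# The workflow diagram, flattened to an ordered list of (name, operation) steps.
WORKFLOW = [
    ("provision capacity", provision_vm),
    ("execute workload", run_batch_job),
    ("release capacity", release_vm),
]

def run_workflow(steps):
    context = {}
    for name, operation in steps:
        print(f"Step: {name}")
        operation(context)

if __name__ == "__main__":
    run_workflow(WORKFLOW)
```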

Read more

Standardize Interfaces, Not Technology

Jean-Pierre Garbani

Infrastructure diversity is an important contributor to the complexity of many IT infrastructures. Even at a time when organizations are standardizing on x86 hardware, they often maintain separate support groups by type of operating system. In the meantime, we see even more technology diversity developing in a relentless pursuit of performance and, ironically, simplification. This raises a simple question: Should we, for the sake of operational efficiency, standardize at the lowest possible level, e.g., the computing platform, or at a much higher level, e.g., the user interface?

In the past few months, I think the mainframe world has provided a clear answer. One key element that actually limits mainframe expansion in some data centers is the perception among higher levels of management that the mainframe is a complex-to-operate and obsolete platform, too radically different from the Linux and Windows operating systems. This perception comes from the fact that most mainframe management solutions use an explicit interface for configuration and deployment that requires detailed knowledge of the mainframe's specifics. Mastering it requires skills and experience that unfortunately do not seem to be taught in most computer science classes. Because mainframe education is lacking, the issue seems more acute here than in other IT segments, and it would eventually condemn the mainframe when all the baby boomers decide that they would rather golf in Florida.

This whole perception was shattered by two major announcements. The most recent is the new IBM zEnterprise platform, which brings a mix of hardware and software platforms together under a single administration interface. In doing this, IBM provides a solution that abstracts away the platforms' diversity and removes the need for separate administrators versed in the vagaries of the different operating systems.
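The principle can be sketched in a few lines of Python (a generic illustration of standardizing on the interface rather than the platform, not IBM's actual zEnterprise tooling): administrators work against one common interface, and the platform-specific details stay behind it.

```python
# Generic sketch: one administration interface, several platform-specific
# implementations. Class and method names are illustrative assumptions.
from abc import ABC, abstractmethod

class PlatformAdmin(ABC):
    """Common administration interface, whatever platform sits underneath."""

    @abstractmethod
    def deploy(self, workload: str) -> None: ...

class MainframeAdmin(PlatformAdmin):
    def deploy(self, workload: str) -> None:
        print(f"[z/OS] submitting {workload} via platform-specific job control")

class LinuxAdmin(PlatformAdmin):
    def deploy(self, workload: str) -> None:
        print(f"[Linux] starting {workload} as a managed service")

def deploy_everywhere(admins: list[PlatformAdmin], workload: str) -> None:
    """Operators see one call; the differences stay hidden behind the interface."""
    for admin in admins:
        admin.deploy(workload)

if __name__ == "__main__":
    deploy_everywhere([MainframeAdmin(), LinuxAdmin()], "billing-service")
```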

Read more