Oracle Cancels OpenSolaris – What’s The Big Deal?

Richard Fichera

There has been turmoil and angst in the open source community of late over Oracle’s decision to cancel OpenSolaris. Since this community can be expected to react violently anytime something is taken out of open source, the real question is whether this action has any impact on real-world IT and operations professionals. The short answer is no.

Enterprise Solaris users, be they small, medium, or large, run it for critical applications, and as far as we can tell the uptake of OpenSolaris, as opposed to Solaris supplied and sold by Sun, was very low in commercial accounts, apart from a possible surge in test and dev environments. The decision to take Solaris into the open source arena was, in my opinion, fundamentally flawed, and Oracle’s subsequent decision to reverse it is eminently rational. Oracle’s customers are almost certainly not going to run their companies on an OS that is built and maintained by an open source community (even the vast majority of corporate Linux use is via a distribution supported by a major vendor under a paid subscription model), and Oracle cannot continue to develop Solaris unless it has absolute control over it, just as is the case with every other enterprise OS. In the same vein, unless Oracle can also expect to be compensated for its investments in future Solaris development, there is little motivation for it to continue investing heavily in Solaris.

Is Disposing Of, Reselling, Or Recycling End-Of-Life IT Equipment Really Strategic To You?

Doug Washburn

Yesterday, I participated in one of the regular content planning sessions for us analysts on Forrester’s IT Infrastructure & Operations Research team. Similar to investment managers with their portfolios of stocks or bonds, we spent time making buy/hold/sell decisions on what we will research more, continue to research, or stop researching. Among the many criteria we use to make these decisions, such as client readership, inquiries, and consulting, strategic relevance to IT is an important factor. And there was some heated debate around research themes we may phase out down the road…

Enter the discussion on IT asset disposition – the process of reselling, donating, or recycling end-of-life IT equipment. While every organization eventually has to dispose of its end-of-life IT equipment, it’s long been an afterthought. And the data backs this up. Forrester finds that 80% of organizations globally use their OEM, third parties, or a combination of the two for IT asset disposition. But when asked how important IT asset disposition is relative to other IT asset management processes, respondents rate it far and away the least important. As an indicator of this, I recently surveyed over 300 European IT professionals, and 77% of respondents ranked IT asset disposition “less important” or “least important.”

This raises the question: is disposing of end-of-life IT equipment really strategic?

Read more

Dell – Ready To Challenge HP And IBM For The High Ground?

Richard Fichera

Historically, the positioning of Dell versus its two major competitors for high-value enterprise business, particularly where it involved complex services and the ability to deliver deeply integrated infrastructure and management stacks, has been that of an also-ran. Competitors looked at Dell as a price spoiler and a channel for standard storage and networking offerings from its partners, not as a potential threat to the high ground of delivering complex integrated infrastructure solutions.

This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it made me take another look at Dell’s recent pattern of investments and the series of announcements it has made around delivering integrated infrastructure, with a message and solution offering that looks like it is aimed squarely at HP and IBM's Virtual Fabric.

Consider the overall pattern of investments:

Read more

Complex Event Processing And IT Automation

Jean-Pierre Garbani

Events are, and have been for quite some time, the fundamental elements of real-time IT infrastructure monitoring. Any status change, threshold crossed in device usage, or step performed in a process generates an event that needs to be reported, analyzed, and acted upon by IT operations.

Historically, the lower layers of the IT infrastructure (i.e., network components and hardware platforms) have been regarded as the most prone to hardware and software failures and have therefore been the object of all attention and of most management software investments. In reality, today’s failures are far more likely to come from applications and from the management of platform and application updates than from the hardware platforms themselves. The increased infrastructure complexity has resulted in a multiplication of events reported on IT management consoles.

Over the years, several solutions have been developed to extract the truth from the clutter of event messages. Network management pioneered solutions such as rule engines and codebook correlation. The idea was to determine, among a group of related events, the original straw that broke the camel’s back. We then moved on to more sophisticated statistical and pattern analysis: using historical data, we could determine what was normal at any given time for a group of parameters. This not only reduces the number of events, it also eliminates false alerts and provides predictive analysis based on how parameter values evolve over time.
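As a minimal sketch of that statistical-baseline idea (the data shapes, the per-hour grouping, and the three-sigma threshold are illustrative assumptions, not any particular vendor's algorithm), consider flagging a reading only when it strays far from what is normal for that time of day:

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: list of (hour_of_day, value) samples from past weeks.
    Returns per-hour (mean, stdev) so 'normal' is time-of-day aware."""
    by_hour = {}
    for hour, value in history:
        by_hour.setdefault(hour, []).append(value)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_anomalous(baseline, hour, value, k=3.0):
    """Flag a reading only if it strays more than k standard deviations
    from what is normal for this hour, suppressing the false alerts a
    fixed threshold would raise during routine daily peaks."""
    if hour not in baseline:
        return False  # no history yet; do not alert
    mu, sigma = baseline[hour]
    return sigma > 0 and abs(value - mu) > k * sigma

# Example: CPU utilization samples (hour, percent) from prior days
history = [(9, 60), (9, 65), (9, 62), (3, 10), (3, 12), (3, 11)]
baseline = build_baseline(history)
print(is_anomalous(baseline, 9, 64))  # False: normal for a busy morning
print(is_anomalous(baseline, 3, 60))  # True: far outside the overnight norm
```

Because "normal" is computed per hour, a routine morning peak no longer triggers the alert that a single fixed threshold would.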

The next step, which has been used in industrial process control and in business activities and is now finding its way into IT management solutions, is complex event processing (CEP). 
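To make the CEP idea concrete, here is a minimal sketch (the event shape and the rule itself are illustrative assumptions, not any product's engine): correlate a burst of related low-level events inside a sliding time window and emit a single composite alert instead of forwarding every raw event to the console.

```python
from collections import deque

class BurstRule:
    """Minimal CEP-style rule: if `threshold` or more matching events
    arrive within `window_seconds`, emit one composite alert instead of
    forwarding every raw event."""
    def __init__(self, event_type, threshold=5, window_seconds=60):
        self.event_type = event_type
        self.threshold = threshold
        self.window = window_seconds
        self.timestamps = deque()

    def process(self, event):
        # event is assumed to be a dict like {"type": ..., "ts": ...}
        if event["type"] != self.event_type:
            return None
        self.timestamps.append(event["ts"])
        # Drop events that have fallen out of the sliding window
        while self.timestamps and event["ts"] - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.threshold:
            count = len(self.timestamps)
            self.timestamps.clear()
            return {"type": "composite_alert", "cause": self.event_type,
                    "count": count, "ts": event["ts"]}
        return None

rule = BurstRule("link_down", threshold=3, window_seconds=30)
stream = [{"type": "link_down", "ts": t} for t in (0, 5, 12, 100)]
for e in stream:
    alert = rule.process(e)
    if alert:
        print(alert)  # one alert at ts=12 summarizing the burst
```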

Read more

Will Plug-In Hybrids Change The Data Center?

Richard Fichera

In a recent discussion with a group of infrastructure architects, power architecture, especially UPS engineering, was on the table as a topic. There was general agreement that UPS systems are a necessary evil, cumbersome and expensive beasts to put into a DC, and a lot of speculation on alternatives. There was also consensus that the goal was a solution that would be more granular to install and deploy, and would thus allow easier, ad hoc decisions about which resources to protect, and agreement that battery technologies and current UPS architectures were not optimal for this kind of solution.

So what if someone were to suddenly expand battery technology R&D investment by a factor of maybe 100x, expand high-capacity battery production by a giant factor, and drive prices down precipitously? That’s a tall order for today’s UPS industry, but it’s happening now courtesy of the auto industry and the anticipated wave of plug-in hybrid cars. While batteries for cars and batteries for computers certainly have their differences in terms of depth and frequency of charge/discharge cycles, packaging, lifespan, etc., there is little doubt that investments in dense and powerful automotive batteries and power management technology will bleed through into the data center. Throw in recent developments in high-charge capacitors (referred to in the media as “supercapacitors”), which provide the impedance match between the requirement for spike demands and a chemical battery’s dislike of sudden state changes, and you have all the foundational ingredients for a major transformation in the way we think about supplying backup power to our data center components.

Read more

The Need For Speed – GPUs Emerge As Mainstream Accelerators

Richard Fichera

It’s probably fair to say that the computer community is obsessed with speed. After all, people buy computers to solve problems, and generally the faster the computer, the faster the problem gets solved. The earliest benchmark I have seen is published in “High-Speed Computing Devices” (Engineering Research Associates, McGraw-Hill, 1950). It cites the Marchant desktop calculator as achieving a best-in-class result of 1,350 digits per minute for addition, and the threshold problems then were figuring out how to break down Newton-Raphson equation solvers for maximum computational efficiency. And so the race begins…

Not much has changed since 1950. While our appetites are now expressed in GFLOPs per CPU and TFLOPS per system, users continue to push for escalation of performance in numerically intensive problems. Just as we settled down to a relatively predictable performance model with standard CPUs and cores glued into servers and aggregated into distributed computing architectures of various flavors, along came the notion of attached processors. First appearing in the 1960s and 1970s as attached mainframe vector processors and attached floating point array processors for minicomputers, attached processors have always had a devoted and vocal minority support within the industry. My own brush with them was as a developer using a Floating Point Systems array processor attached to a 32-bit minicomputer to speed up a nuclear reactor core power monitoring application. When all was said and done, the 50X performance advantage of the FPS box had decreased to about 3.5X for the total application. Not bad, but a defeat of expectations. Subsequent brushes with attempts to integrate DSPs with workstations left me a bit jaundiced about the future of attached processors as general purpose accelerators.
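The shrinkage from 50X on the device to 3.5X for the whole application is Amdahl’s law in action: only the fraction of runtime that actually executes on the attached processor speeds up. Here is a quick back-of-the-envelope check, where the ~73% accelerated fraction is inferred from those two figures rather than from any measured profile:

```python
def overall_speedup(accel_fraction, accel_speedup):
    """Amdahl's law: only the accelerated fraction of runtime gets faster."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_speedup)

# If roughly 73% of the runtime ran on the array processor at 50X,
# the whole application lands at about 3.5X, matching the anecdote above.
print(round(overall_speedup(0.73, 50.0), 2))  # ~3.5
```

Push the accelerated fraction to 95% and the same 50X device yields only about 14X overall, which is why profiling before buying an accelerator matters.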

Read more

How GlaxoSmithKline Empowered IT Staff To Save ~$1 Million In PC Energy Costs

Doug Washburn

I recently recorded a podcast with GlaxoSmithKline (GSK), the global pharmaceutical company, about its success implementing a PC power management initiative that is expected to cut energy costs by ~$1 million per year. While these savings alone should impress any IT executive – especially the IT infrastructure and operations professionals who manage PCs – what I found most distinctive about the story came through my conversation with Matt Bartow, a business analyst in GSK’s research and development IT organization, who led the initiative. In particular, GSK is a great example of how “empowering” staff to innovate can industrialize IT operations, leading to significant cost savings and green IT benefits.

GSK’s success with PC power management is an outcome of the inspired management style advocated in Forrester’s upcoming book, Empowered. By proactively calling on their employees to spur innovation, GSK tapped into one of their greatest inventive resources – staff, like Matt Bartow, who Forrester would consider a highly empowered and resourceful operative (HERO). But as Empowered explains, HEROes can’t succeed without support from management. By initiating the innovation challenge, GSK’s IT leadership not only identified HEROes in their organization but sourced innovative ideas at the same time. From there, the use of social media technology – in this case, using a wiki-type website with voting capabilities – made it simple for GSK staff to participate while giving them a “say” in the selection process.

So how exactly did PC power management become an IT priority at GSK?

Read more

Benchmarking Consolidation & Virtualization – Variables and Distractions

Richard Fichera

I’ve been getting a number of inquiries recently regarding benchmarking potential savings from consolidating multiple physical servers onto a smaller number of servers using VMs, usually VMware. The variations in the complexity of the existing versus new infrastructures, operating environments, and applications under consideration make it impossible to come up with consistent rules of thumb, and in most cases, also make it very difficult to predict with any accuracy what the final outcome will be absent a very tedious modeling exercise.

However, the major variables that influence the puzzle remain relatively constant, giving us the ability to at least set out a framework to help analyze potential consolidation projects. This list usually includes:

Read more
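As a rough illustration of what such a framework can look like, here is a minimal first-pass sizing model; the variable names, core counts, and overhead figure are illustrative assumptions, not Forrester’s actual list of variables:

```python
import math

def consolidation_estimate(physical_servers, avg_utilization, target_utilization,
                           cores_per_old=4, cores_per_new=16, vm_overhead=0.10):
    """Rough first-pass sizing: how many new hosts are needed to absorb the
    existing workload at a target utilization once hypervisor overhead is
    added. Every parameter here is an illustrative assumption."""
    busy_cores = physical_servers * cores_per_old * avg_utilization
    busy_cores *= (1.0 + vm_overhead)               # add virtualization overhead
    capacity_per_new_host = cores_per_new * target_utilization
    return math.ceil(busy_cores / capacity_per_new_host)

# Example: 100 lightly loaded 4-core servers onto 16-core hosts run at 60%
print(consolidation_estimate(100, avg_utilization=0.08, target_utilization=0.60))  # 4
```

A real exercise would also have to weigh differences in per-core performance, memory and I/O headroom, and licensing, which is where the tedious modeling comes in.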

Is Google Powering Its Data Centers With Wind? No, And 2010 Won't Be The Breakthrough Year For Clean Energy In The DC Either

Doug Washburn

As green IT plans persist through 2010, I'm starting to receive questions from IT infrastructure and operations professionals — particularly data center managers — about the use of cleaner energy sources (e.g. wind, solar, fuel cells, hydro) to power their data center facilities. So when Google recently announced its purchase of 114 megawatts of wind power capacity for the next 20 years from a wind farm in Iowa, I got excited, hopeful of a credible example I could refer to.

But as it turns out, Google will not be using this wind energy to power its data centers... yet. Despite Google stating that the wind capacity is enough to power several data centers, its Senior Vice President of Operations, Urs Hoelzle, explains that "We cannot use this energy directly, so we're reselling it back to the grid in the regional spot market." I confirmed this in electronic conversations with two other industry insiders, Martin LaMonica (CNET News) and Lora Kolodny (GreenTech), who also covered the announcement.

And it's unfortunate, since Google's $600 million data center in Council Bluffs, Iowa, could likely benefit from the greener, and possibly cheaper, wind energy. But Iowa is a large state, and distribution of the wind energy is likely an issue: the Council Bluffs data center appears to be well over 100 miles from the wind farms, several counties away.

Read more

AP's API Empowers New Media Through AWS And Azure

James Staten

It’s no secret that traditional news organizations are struggling to stay relevant in an age where an always-connected generation has little use for newspaper subscriptions and nightly news programs. The Associated Press (AP), the world's oldest and largest news cooperative, is one organization that has felt the threats this paradigm shift carries and thus the need to intensify its innovation efforts. However, like those of many organizations today, its in-house IT Ops and business processes weren’t versatile enough for the kind of innovation needed.

"The business had identified a lot of new opportunities we just weren't able to pursue because our traditional syndication services couldn't support them," said Alan Wintroub, director of development, enterprise application services at the AP, "but the bottom line is that we can't afford not to try this."

To make AP easily accessible for emerging Internet services, social networks, and mobile applications, the nearly 164-year-old news syndicate needed to provide new means of integration that let these customers serve themselves and do more with the content — mash it up with other content, repackage it, reformat it, slice it up, and deliver it in ways AP could never have thought of, or certainly never originally intended.
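To picture what that self-service integration looks like in practice, here is a minimal sketch of a consumer pulling a machine-readable feed and repackaging it; the endpoint, field names, and payload shape are hypothetical illustrations, not AP’s actual API:

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only -- not AP's actual API.
FEED_URL = "https://api.example.com/v1/stories?topic=technology&format=json"

def fetch_stories(url=FEED_URL):
    """Pull a machine-readable feed so the consumer, not the syndicator,
    decides how the content gets sliced, reformatted, and delivered."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def to_mobile_cards(feed):
    """Repackage full stories into lightweight cards for a mobile app --
    one example of 'doing more with the content' than the original
    syndication model allowed."""
    return [{"headline": item["title"],
             "summary": item.get("abstract", "")[:140],
             "link": item["url"]}
            for item in feed.get("items", [])]

# Offline example with a stand-in payload (shape is assumed, not AP's schema)
sample = {"items": [{"title": "Example headline",
                     "abstract": "A short abstract of the story...",
                     "url": "https://example.com/story/1"}]}
print(to_mobile_cards(sample))
```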

Read more