The Need For Speed – GPUs Emerge As Mainstream Accelerators

Richard Fichera

It’s probably fair to say that the computer community is obsessed with speed. After all, people buy computers to solve problems, and generally the faster the computer, the faster the problem gets solved. The earliest benchmark that I have seen was published in “High-Speed Computing Devices” (Engineering Research Associates, McGraw-Hill, 1950). It cites the Marchant desktop calculator as achieving a best-in-class result of 1,350 digits per minute for addition, and the threshold problems then were figuring out how to break down Newton-Raphson equation solvers for maximum computational efficiency. And so the race begins…
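For readers who haven't met it, the Newton-Raphson iteration those early machines were being tuned for fits in a few lines. A minimal sketch (function names and tolerance defaults are my own illustration, not from the 1950 text):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iteratively refine a root of f using its derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2) computed as the positive root of x^2 - 2, starting from x = 1
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```

The method converges quadratically near a root, which is exactly why squeezing maximum efficiency out of it mattered on machines doing 1,350 digits per minute.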

Not much has changed since 1950. While our appetites are now expressed in GFLOPS per CPU and TFLOPS per system, users continue to push for escalating performance on numerically intensive problems. Just as we settled into a relatively predictable performance model, with standard CPUs and cores glued into servers and aggregated into distributed computing architectures of various flavors, along came the notion of attached processors. First appearing in the 1960s and 1970s as attached mainframe vector processors and attached floating-point array processors for minicomputers, attached processors have always had devoted and vocal minority support within the industry. My own brush with them was as a developer using a Floating Point Systems array processor attached to a 32-bit minicomputer to speed up a nuclear reactor core power monitoring application. When all was said and done, the 50X performance advantage of the FPS box had shrunk to about 3.5X for the total application. Not bad, but a defeat of expectations. Subsequent brushes with attempts to integrate DSPs with workstations left me a bit jaundiced about the future of attached processors as general-purpose accelerators.
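The gap between the accelerator's raw 50X and the 3.5X the application actually saw is a textbook Amdahl's Law effect: only the fraction of runtime the attached processor handles gets faster. A quick sketch (the ~73% offload fraction is my back-calculation from the 3.5X result, not a figure from the original project):

```python
def overall_speedup(f, s):
    """Amdahl's Law: total speedup when a fraction f of the
    original runtime is accelerated by a factor of s."""
    return 1.0 / ((1.0 - f) + f / s)

# A 50X accelerator applied to roughly 73% of the runtime yields
# about the 3.5X total speedup observed for the whole application.
print(round(overall_speedup(0.73, 50), 2))
```

The lesson generalizes to GPUs today: the unaccelerated remainder of the application, not the accelerator's peak rating, sets the ceiling on total speedup.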

Read more

How GlaxoSmithKline Empowered IT Staff To Save ~$1 Million In PC Energy Costs

Doug Washburn

I recently recorded a podcast with GlaxoSmithKline (GSK), the global pharmaceutical company, about its success in implementing a PC power management initiative that is expected to cut energy costs by ~$1 million per year. While these savings alone should impress any IT executive, especially the IT infrastructure and operations professionals who manage PCs, what I found most unique about the story came through my conversation with Matt Bartow, a business analyst in GSK’s research and development IT organization, who led the initiative. In particular, GSK is a great example of how “empowering” staff to innovate can industrialize IT operations, leading to significant cost savings and green IT benefits.

GSK’s success with PC power management is an outcome of the inspired management style advocated in Forrester’s upcoming book, Empowered. By proactively calling on their employees to spur innovation, GSK tapped into one of their greatest inventive resources – staff, like Matt Bartow, who Forrester would consider a highly empowered and resourceful operative (HERO). But as Empowered explains, HEROes can’t succeed without support from management. By initiating the innovation challenge, GSK’s IT leadership not only identified HEROes in their organization but sourced innovative ideas at the same time. From there, the use of social media technology – in this case, using a wiki-type website with voting capabilities – made it simple for GSK staff to participate while giving them a “say” in the selection process.

So how exactly did PC power management become an IT priority at GSK?

Read more

Benchmarking Consolidation & Virtualization – Variables and Distractions

Richard Fichera

I’ve been getting a number of inquiries recently about benchmarking the potential savings from consolidating multiple physical servers onto a smaller number of servers using VMs, usually VMware. The variations in the complexity of the existing versus new infrastructures, operating environments, and applications under consideration make it impossible to come up with consistent rules of thumb, and in most cases also make it very difficult to predict the final outcome with any accuracy absent a very tedious modeling exercise.
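To give a flavor of the modeling involved, here is a deliberately simplified sketch: sum the measured CPU demand of the existing servers and divide by the usable capacity of a target host, leaving headroom for peaks and hypervisor overhead. The utilization ceiling and the example numbers are illustrative assumptions, not guidance for any particular environment:

```python
import math

def hosts_needed(server_demands_ghz, host_capacity_ghz, max_util=0.65):
    """Estimate how many target hosts a set of servers consolidates onto.

    server_demands_ghz: measured average CPU demand of each existing server
    host_capacity_ghz:  total CPU capacity of one target host
    max_util:           utilization ceiling reserved for spikes and overhead
    """
    usable = host_capacity_ghz * max_util
    return math.ceil(sum(server_demands_ghz) / usable)

# 40 lightly loaded servers averaging 1.2 GHz of demand each,
# consolidated onto two-socket hosts with ~48 GHz of aggregate capacity
print(hosts_needed([1.2] * 40, 48.0))
```

A real exercise layers on memory, I/O, licensing, and peak-coincidence analysis, which is exactly why the simple arithmetic above so often disappoints in practice.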

However, the major variables that influence the puzzle remain relatively constant, giving us the ability to at least set out a framework to help analyze potential consolidation projects. This list usually includes:

Read more

Is Google Powering Its Data Centers With Wind? No, And 2010 Won't Be The Breakthrough Year For Clean Energy In The DC Either

Doug Washburn

As green IT plans persist through 2010, I'm starting to receive questions from IT infrastructure and operations professionals — particularly data center managers — about the use of cleaner energy sources (e.g. wind, solar, fuel cells, hydro) to power their data center facilities. So when Google recently announced its purchase of 114 megawatts of wind power capacity for the next 20 years from a wind farm in Iowa, I got excited, hopeful of a credible example I could refer to.

But as it turns out, Google will not be using this wind energy to power its data centers… yet. Despite Google stating that the wind capacity is enough to power several data centers, its Senior Vice President of Operations, Urs Hoelzle, explains: "We cannot use this energy directly, so we're reselling it back to the grid in the regional spot market." I confirmed this in electronic conversations with two other industry insiders, Martin LaMonica (CNET News) and Lora Kolodny (GreenTech), who also covered the announcement.

And it's unfortunate, since Google's $600 million data center in Council Bluffs, Iowa could likely benefit from the greener, and possibly cheaper, wind energy. But Iowa is a large state, and distribution of the wind energy is likely the issue: the Council Bluffs data center appears to be well over 100 miles from the wind farms, several counties away.

Read more

AP's API Empowers New Media Through AWS And Azure

James Staten


It’s no secret that traditional news organizations are struggling to stay relevant in an age where an always-connected generation has little use for newspaper subscriptions and nightly news programs. The Associated Press (AP), the world's oldest and largest news cooperative, is one such organization that has felt the threat this paradigm shift carries, and thus the need to intensify its innovation efforts. However, like many organizations today, its in-house IT operations and business processes weren’t versatile enough for the kind of innovation needed.

"The business had identified a lot of new opportunities we just weren't able to pursue because our traditional syndication services couldn't support them," said Alan Wintroub, director of development, enterprise application services at the AP, "but the bottom line is that we can't afford not to try this."

To make AP easily accessible to emerging Internet services, social networks, and mobile applications, the nearly 164-year-old news syndicate needed to provide new means of integration that let these customers serve themselves and do more with the content: mash it up with other content, repackage it, reformat it, slice it up, and deliver it in ways AP could never have imagined, or certainly never originally intended.

Read more

AMERICAN SYSTEMS Uses BSM To Shift Culture And Advance Business Services And Security

Eveline Oehrlich

Recently I published a business service management (BSM) case study on AMERICAN SYSTEMS. If you're interested in BSM, I highly recommend reading through this report. Although there are many known business alignment success stories, AMERICAN SYSTEMS takes alignment a step further by aligning IT elements in a way that truly supports its business goals. AMERICAN SYSTEMS sought to improve the delivery and quality of its services to the business, which it accomplished by introducing ITIL and COBIT standards and deploying integrated data center management software. In all, the company gained situational awareness, preempted and responded to issues more efficiently, and better protected its information assets.

I've outlined a few key highlights from this report below:

First off, AMERICAN SYSTEMS is a government IT innovator that provides engineering, technical, and managed services to government customers. In order to meet the needs of their clients' constant demand for new and better services, they decided to shift from a reactive to a proactive way of managing and operating.

When they set out to make changes they outlined several goals:

Read more

SHARE – Rooted In The Past, Looking To The Future

Richard Fichera

I spoke today at SHARE’s biannual conference, giving a talk on emerging data center architectures, x86 servers, and internal clouds. SHARE is an organization that describes itself as “representing over 2,000 of IBM's top enterprise computing customers.” In other words, definitely a mainframe geekfest, as described by one attendee. I saw hundreds of people around my age (think waaay over 30) and was able to swap stories of my long-ago IBM mainframe programming experience (that’s what we called “Software Engineering” back when it was FORTRAN, COBOL, PL/1, and BAL). I was astounded to see that IMS was still a going concern, with sessions on the agenda, and in a show of hands at least a third of the audience reported still running IMS.

Oh well, dinosaurs may survive in some odd corners of the world, and IMS workloads, while not exciting, are a necessary and familiar facet of legacy applications that have decades of stability and embedded culture behind them…

But wait! Look again at the IMS session right next door to my keynote. It was about connecting zLinux and IMS. Other sessions included more zLinux, WebSphere and other seemingly new-age topics. Again, my audience confirmed the sea-change in the mainframe world. Far more compelling than any claims by IBM reps was the audience reaction to a question about zLinux – more than half of them indicated that they currently run zLinux, a response which was much higher than I anticipated. Further discussions after the session with several zLinux users left me with some strong impressions:

Read more

To Get Cloud Economics Right, Think Small, Very, Very Small

James Staten

A startup, which wishes to remain anonymous, is delivering an innovative new business service from an IaaS cloud and most of the time pays next to nothing to do so. This isn't a story about pennies per virtual server per hour (sure, they take advantage of that) but about a nuance of cloud optimization any enterprise can follow: reverse capacity planning.

Most of us are familiar with the black art of capacity planning. You take an application, simulate load against it to approximate the amount of traffic it will face in production, then provision the resources to accommodate that load. With web applications we tend to plan against expected peak, which is very hard to estimate even if historical data exists. You plan to peak because you don't want to be overloaded and cause clients to wait, hit errors, or turn to your competition for the service.
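Conventional peak-driven planning reduces to a simple calculation; a sketch under assumed numbers (the headroom factor and per-instance throughput are illustrative):

```python
import math

def provisioned_instances(expected_peak_rps, per_instance_rps, headroom=1.3):
    """Classic capacity planning: provision for estimated peak plus headroom."""
    return math.ceil(expected_peak_rps * headroom / per_instance_rps)

# Plan for a guessed 900 req/s peak at 100 req/s per instance.
# This capacity sits idle whenever actual traffic is below peak.
print(provisioned_instances(900, 100))
```

The startup's trick is to run this logic in reverse: start near zero and let the cloud add instances as measured load arrives, so the bill tracks actual traffic rather than a guessed peak.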

Read more

Oracle Confirms Solaris Support On Third-Party Hardware

Richard Fichera

Yesterday Oracle announced that both HP and Dell would certify Solaris on their respective lines of x86 servers, and that Oracle would offer support for Solaris on these systems.

Except for IBM, which was conspicuously missing from the announcement, it's hard to find any losers in this arrangement. HP and Dell customers get the assurance of enterprise support for an OS with which they have a long track record, on x86 servers from two of the leading server vendors. Oracle gets a huge increase in potential Solaris footprint, as Solaris is now available and supported on the leading server platforms, with accompanying opportunities for support revenue and cross-selling of other Oracle products.

All in all, an expected but still welcome development in the Oracle/Sun saga, and one that should make a lot of people happy.

Does anyone have thoughts on IBM's absence?

Does SPARC Have A Future?

Richard Fichera

I have received a number of inquiries on the future of SPARC and Solaris. Sun’s installed base was already getting somewhat nervous as Sun continued to self-destruct with a series of bad management calls, marginal financial performance, and the cancellation of its much-touted “Rock” CPU architecture. Coming on top of this long series of negative events, the acquisition by Oracle had much the same effect as throwing a cat into the middle of the Westminster dog show, and Oracle’s public responses were vague enough that they apparently increased rather than decreased customer angst (to be fair, Oracle does not agree with this assessment of customer reaction, and has provided a public list of customers who endorsed the acquisition at http://www.oracle.com/us/sun/030019.htm).

Fast forward to last week at Oracle’s first analyst meeting focused on integrated systems. While much of the content was focused on integrating the software stack and discussions of the new organization, there were some significant nuggets for existing and prospective Solaris and SPARC customers:

Read more