IBM Delivers Replicable Business Innovation Services Across Clients

Fred Giron

I concluded my March 2013 report on the role of software assets in business innovation by proposing that “The combination of software assets, strong domain expertise, analytics, and as-a-service delivery models will increasingly allow traditional service providers to reinvent the way they deliver business value to their clients.” I was glad to hear that IBM recently announced a deal with L’Oréal that directly supports this position. The announced engagement actually includes all these components:

  • The procurement domain expertise of IBM Global Business Services (GBS) addresses business pain points. L’Oréal USA has grown rapidly over the past few years through an aggressive acquisition strategy, which left its indirect procurement processes highly disparate. The company knew that there was a significant gap between negotiated savings and realized savings in its indirect procurement operations. IBM GBS consultants brought strong procurement expertise to L’Oréal’s existing sourcing team to help transform those processes. IBM Global Process Services (GPS) category experts are working with L’Oréal to develop and implement category sourcing strategies.
Read more

Forrester Wave: Public Cloud Platforms -- The Winner Is…

James Staten

…not that simple and therefore not always Amazon Web Services.

First off, we didn’t take what might be construed as the typical approach, which would be to look at either infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) offerings. We combined the two, as the line between these categories is blurring, and the historical category leaders have added infrastructure or platform services that now leave them straddling that line.

Further, many people have assumed that all developers will be best served by PaaS products and ill served by IaaS products. Our research has shown for some time that that isn't so: 

  1. Many developers get value from IaaS because it is so flexible, while PaaS products are generally too constraining (see the brief sketch after this list).
  2. The -aaS labels overlook the actual capabilities of the services available to developers. Not all PaaS products are the same, and not all IaaS products are the same.
  3. Not all developers are the same. Devs will use the services (plural) that best fit their skills, needs, and goals.
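As a rough illustration of that first point (mine, not part of the Wave methodology), the sketch below contrasts the two deployment styles in Python. The AWS/boto3 call and the Heroku-style Procfile are examples chosen for familiarity; the AMI ID, instance type, and app name are placeholders, not recommendations.

```python
# Illustrative sketch only -- not drawn from the Forrester evaluation.
# IaaS side: the developer picks and manages the raw building blocks.
import boto3  # AWS SDK for Python; AWS is just one example IaaS provider

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # the developer chooses sizing
    MinCount=1,
    MaxCount=1,
)
# ...and networking, patching, scaling, and monitoring are also on the developer.

# PaaS side: the developer hands over code plus a short declarative manifest,
# and the platform decides how to run it. A Heroku-style Procfile, for example:
#
#   web: gunicorn myapp:app
#
# Less undifferentiated work, but only within the platform's constraints
# (supported runtimes, process model, attached services).
```

The trade-off the list describes falls directly out of this contrast: IaaS maximizes control at the cost of operational work, while PaaS minimizes work at the cost of control.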
Read more

IBM Buys SoftLayer, But Will They Learn From Them?

James Staten

IBM didn't just pick up a hosting company with its acquisition of SoftLayer this week; it picked up a sophisticated data center operations team -- one that could teach IBM Global Technology Services (GTS) a thing or two about efficiency when it comes to next-generation cloud data centers. Here's hoping IBM will listen.

Read more

Make no mistake - IBM’s Watson (and others) provide the *illusion* of cognitive computing

IBM has just announced that one of Australia’s “big four” banks, the ANZ, will adopt IBM’s Watson technology in its wealth management division for customer service and engagement. Australia has always been an early adopter of new technologies, but I’d also like to think that we’re a little smarter and savvier than your average geek back in high school in 1982.

IBM’s Watson announcement is significant, not necessarily because of the sophistication of the Watson technology, but because of IBM's ability to successfully market the Watson concept.   

To take us all back a little, the term ‘cognitive computing’ emerged in response to the failings of what was once termed ‘artificial intelligence’. Though the underlying concepts have been around for 50 years or more, AI remains a niche, specialist market with limited applications and a significant trail of failed or aborted projects. That’s not to say that we haven’t seen some sophisticated algorithm-based systems evolve. There’s already a good portfolio of large-scale, deep analytic systems developed in the areas of fraud, risk, forensics, medicine, physics, and more.

Read more

Is IBM Selling Its Server Business To Lenovo?

Richard Fichera

 

The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors usually stem from leaks of real discussions rather than being completely delusional fantasies. Usually.

So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:

1. IBM is crazy.

2. IBM is not crazy.

Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo to a chorus of popular disbelief and dire predictions, and it's doing very well today because it transferred its investments and focus to higher-margin businesses, like servers and services. Lenovo now makes low-end servers that it bootstrapped with IBM-licensed technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on its commodity server business have sunk below some critical internal benchmark for return on investment, and IBM believes it can get a better return on its money elsewhere.

Read more

IBM Makes Major Commitment to Flash

Richard Fichera

 

Wisdom from the Past

In his 1956 sci-fi novel “The City and the Stars”, Arthur C. Clarke puts forth the fundamental design tenet for making eternal machines: “A machine shall have no moving parts”. To someone from the 1950s, current computers would appear to come close to that ideal – the CPUs and memory perform silent magic and can, with some ingenuity, be passively cooled, and invisible electronic signals carry information in and out of them to networks and … oops, to rotating disks, still with us after more than five decades. But, as we all know, salvation has appeared on the horizon in the form of solid-state storage, so-called flash storage (actually an idea of several decades’ standing as well, just not affordable until recently).

The initial substitution of flash for conventional storage yields immediate gratification in the form of lower power, possibly lower cost if used effectively, and higher performance, but the ripple-effect benefits of flash can be even more pervasive. However, implementing the major architectural changes that flash engenders across the whole IT stack is a difficult conceptual challenge for users, and most vendors address it only piecemeal. Enter IBM and its Flashahead initiative.

What is Happening?

On April 11, IBM announced a major initiative, to the tune of a $1B spending commitment, to accelerate the use of flash technology by means of three major programs:

  • Fundamental flash R&D
  • New storage products built on flash-only memory technology

Read more

HP Shows its Next Generation Blade and Converged Infrastructure – No Revolution, but Strong Evolution

Richard Fichera

With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM made a major enhancement to its BladeCenter architecture, replacing it with the new PureSystems, and Cisco’s offering is new enough that it should last for at least another three years without a major architectural refresh. That left HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.

At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.

Tech Stuff – What Was Announced

Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:

  • Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in raw bandwidth across the critical midplane that carries all of the enclosure’s I/O (a quick sanity check of these figures follows below). In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and doubles the available storage bandwidth.
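As a back-of-the-envelope check (mine, not HP’s), the cited 40% figure is simply the ratio of the two signaling rates, assuming raw midplane bandwidth scales linearly with signal speed:

```python
# Sanity check of the c7000 Platinum figures quoted above.
# Assumption: raw midplane bandwidth scales linearly with signaling rate.
old_rate_ghz = 10.0   # original c7000 midplane signal rate
new_rate_ghz = 14.0   # Platinum enclosure signal rate

increase = (new_rate_ghz - old_rate_ghz) / old_rate_ghz
print(f"Midplane bandwidth increase: {increase:.0%}")   # -> 40%
```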
Read more

IBM Embraces Emerson for DCIM – Major Change in DCIM Market Dynamics

Richard Fichera

Emerson Network Power today announced that it is entering into a significant partnership with IBM, both to integrate Emerson’s new Trellis data center infrastructure management (DCIM) suite into IBM’s ITSM products and to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:

  • Connection to enterprise IT — Emerson has sold a lot of chiller, UPS, and PDU equipment and has tremendous cachet with facilities types, but it doesn’t have many people who know how to talk IT. IBM has these people in spades.
  • IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product seems to be more of a dressed-up asset management product, and this partnership is an acknowledgement that building a full-fledged DCIM product would have been both expensive and time-consuming.
  • IBM adds sales bandwidth — My belief is that the development of the DCIM market has been constrained by delivery bandwidth. Market leaders Nlyte, Emerson, and Schneider do not have enough people to address the emerging total demand, and the host of smaller players is even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
Read more

5-Year Journey Of TOGAF In China Is Just A Beginning For EA

Charlie Dai

As businesses get larger, and the need for effective alignment of the business with technology capabilities grows, enterprise architecture becomes an essential competency. But in China, many CIOs are struggling with setting up a high-performance enterprise architecture program to support their business strategies in a disruptive market landscape. This seems equally true for state-owned enterprises (SOEs) and multinational companies (MNCs).

To gain a better understanding of the problem, I had an interesting conversation with Le Yao, general secretary of the Center for Informatization and Information Management (CIIM) and director of the CIO program at Peking University. Le Yao is one of the first pioneers to introduce The Open Group Architecture Framework (TOGAF) into China to help address these challenges. I believe that the five-year journey of TOGAF in China is just an early beginning for EA, and companies in the China market need relevant EA insights to help them support their business:

  • Taking an EA course is one thing; practicing EA is something else. Companies taking TOGAF courses in China seem to be aiming more at sales enablement than at practicing EA internally. MNCs like IBM, Accenture, and HP are more likely to try to infuse the essence of the methodology into their PowerPoint slides for marketing and/or bidding purposes; IBM has also invited channel partners such as Neusoft, Digital China, CS&S, and Asiainfo to take the training.
  • TOGAF is too high-level to be relevant. End-user trainees learning the enterprise architecture framework that Yao’s team introduced in China in 2007 found it too high-level and conceptual. Also, the trainers only went through what was written in the textbook, without using industry-specific cases or practice-related information — making the training less relevant and difficult to apply.
Read more

Why Dell Going Private Is Less Risky for Customers Than Their Current Path

David Johnson

To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees - from Montpellier, France to Austin, Texas.

Because I cover both PC hardware and client virtualization here at Forrester, I'm in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's announcement that it intends to go private, just as they have asked about HP these past several quarters since the circus started over there with Mr. Apotheker. Hopefully what follows here is information and analysis that you as an I&O leader can rely on to develop your own perspective on Dell with more clarity.

 
Complexity is Dell's enemy

The complexity of Dell as an organization right now is enormous. They have been on a "Quest" to reinvent themselves and go from PC and server vendor to end-to-end solutions vendor, with the hope that their chief differentiator could be unique software that drives more repeatable solutions delivery and, in turn, lower solutions cost. I say the word 'hope' deliberately because doing that means focusing most of their efforts on a handful of solutions that no other vendor can provide. It's a massive undertaking because, as a public company, they have to do this while keeping cash flow going in the lines of business from each acquisition and growing those while they develop the focused solutions. So far, they haven't.
Read more