Lately, I have become a bit obsessed with evaluating the linkage between good process design and good experience design. This obsession was initially sparked by primary research I led earlier this year around reinventing and redesigning business processes for mobile. The mobile imperative is driving a laser focus for companies to create exceptional user experiences for their customers, employees, and partners. But this laser focus on exceptional design is not only reshaping the application development world. This drive for exceptional user experience is also radically changing the way companies approach business process design.
Over the past six months, I have run across more and more BPM teams where user experience is playing a much larger role in driving business process change. Some of these teams highlighted that they see experience design playing a greater role in driving process change than the actual process modeling and analysis aspects of process improvement.
Last month, I attended an IBM Systems and Technology Group (STG) Executive Summit in the US, where IBM outlined its key strategies for accelerating sales in growth markets, including:
· Aggressively marketing PureSystems. IBM is positioning PureSystems (a pre-integrated, converged system of servers, storage, and networking technology with automated self-management and built-in SmartCloud technology) as an integrated and simplified data center offering to help organizations reduce the money and time they spend on the management and administration of servers.
· Continuing to expand in “tier two” cities. Over the next 12 months, IBM plans to continue its expansion outside of major metropolitan areas by opening small branches in nearly 100 locations in growth markets, most notably India, China, Brazil, and Russia.
· Expanding channel capabilities and accelerating new routes to market. IBM plans to certify 2,800 global resellers on PureSystems in 2013 and upgrade the solution and technical expertise of 500 of its partners. Also, the company plans to drive the revenue of managed service providers (MSPs) by working with them closely to develop cloud-based services and solutions on PureSystems.
Considering the vast potential demand from growth markets and the slowdown in developed markets, IBM is among the growing camp of multinational vendors aggressively targeting these markets as an engine for future business. Some of my key observations on IBM's event and recent announcements:
HP seems to be on a tear, bouncing from litigation with one of its historically strongest partners to multiple CEOs in the last few years, continued layoffs, and a recent massive write-down of its EDS purchase. And, as we learned last week, the circus has not left town. The latest “oops” is an $8.8 billion write-down for its purchase of Autonomy, under the brief and ill-fated leadership of Léo Apotheker, combined with allegations of serious fraud on the part of Autonomy during the acquisition process.
The eventual outcome of this latest fiasco will be fun to watch, with many interesting sideshows along the way, including:
Whose fault is it? Can they blame it on Léo, or will it spill over onto Meg Whitman, who was on the board and approved it?
Was there really fraud involved?
If so, how did HP miss it? What about all the internal and external people involved in due diligence of this acquisition? I’ve been on the inside of attempted acquisitions at HP, and there were always many more people around with the power to say “no” than there were people who were trying to move the company forward with innovative acquisitions, and the most persistent and compulsive of the group were the various finance groups involved. It’s really hard to see how they could have missed a little $5 billion discrepancy in revenues, but that’s just my opinion — I was usually the one trying to get around the finance guys. :)
Nathan Bedford Forrest, a Confederate general of despicable ideology and consummate tactics, spoke of “keepin up the skeer,” applying continued pressure to opponents to prevent them from regrouping and counterattacking. POWER7+, the most recent version of IBM’s POWER architecture, anticipated as a follow-up to the POWER7 for almost a year, was finally announced this week, and appears to be “keepin up the skeer” in terms of its competitive potential for IBM POWER-based systems. In short, it is a hot piece of technology that will keep existing IBM users happy and should help IBM maintain its impressive momentum in the Unix systems segment.
For the chip heads, the CPU is implemented in a 32 nm process, the same as Intel’s upcoming Poulson, and embodies some interesting evolutions in high-end chip design, including:
Use of DRAM instead of SRAM — IBM has pioneered the use of embedded DRAM (eDRAM) as embedded L3 cache instead of the more standard and faster SRAM. In exchange for the loss of speed, eDRAM requires fewer transistors and lower power, allowing IBM to pack a total of 80 MB (a lot) of shared L3 cache, far more than any other product has ever sported.
[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]
At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products with advantages in niche markets, with specific mention of servers among other segments, and to generally shake up the trench warfare that has had it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.
SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture that dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market with a tight alignment with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model that featured Xeon CPUs to address the more mainstream segments that were not a part of SeaMicro’s original Atom-based offering.
This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12%” of the energy powering their servers to deliver useful computation. Like a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically, they are correct in claiming that only a very small fraction of the electricity going to a server is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency of the servers is not representative of the real operational efficiency of the complete data center.
On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:
A new data center with a PUE of 1.2 (very efficient), meaning roughly 83% of the power goes to IT workloads.
Then assume that 60% of the remaining power goes to servers (storage and network get the rest), for a net of almost 50% of the power going into servers. If the servers are running at an average utilization of 10%, then only 10% of 50%, or 5%, of the power is actually going to real IT processing. Of course, the real “IT number” is server plus storage plus network, so depending on how you account for them, the IT usage could be as high as 38% (0.83 × 0.4 + 0.05).
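The back-of-envelope math above can be sketched as a short script, using the same illustrative assumptions (PUE of 1.2, 60% of IT power to servers, 10% average server utilization); none of these numbers are measured values.

```python
# Back-of-envelope model of "useful" data center power.
# All figures are fractions of total facility power.

pue = 1.2
it_fraction = 1 / pue                 # ~0.83 of facility power reaches IT gear
server_share = 0.60                   # share of IT power consumed by servers
server_utilization = 0.10             # average server utilization

server_power = it_fraction * server_share                 # ~0.50
useful_server_power = server_power * server_utilization   # ~0.05

# Counting storage and network as fully "useful" gives the upper bound:
other_it = it_fraction * (1 - server_share)               # ~0.33
upper_bound = other_it + useful_server_power              # ~0.38

print(f"power reaching servers:  {server_power:.2f}")
print(f"useful server power:     {useful_server_power:.3f}")
print(f"upper-bound IT usage:    {upper_bound:.2f}")
```

The gap between the 5% lower figure and the 38% upper bound shows how much the answer depends on whether idle storage and network gear count as "useful" work.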
At last week’s India analyst briefing, IBM outlined urbanization as a key factor driving “Smarter Cities” initiatives in India.
IBM expects India to invest about $1.2 trillion over the next 20 years in areas like transportation, energy, and public security. The second phase of the Jawaharlal Nehru National Urban Renewal Mission (JnNURM), which covers $40 billion in infrastructure-related projects, will play a key role in improving the country’s infrastructure capacity to support urbanization over the next five years. IBM is currently working on approximately 3,000 smart city projects globally; of those, about 30 pilot projects are from India.
About a year ago, I published a report on how cities are undergoing rapid transformation and creating massive opportunities for ICT vendors across Asia Pacific. Although the urbanization rate still stands at about 30% in India, it is growing fast. Also, an increasingly Internet-savvy population is demanding better citizen services. Indian state and city governments will make large-scale infrastructure investments to meet the needs of their surging urban populations, creating opportunities for vendors.
Despite the promise and opportunities that India provides for Smarter City initiatives, IBM has to deal with key challenges:
I’m thrilled to see “people” talked about as a major focus of business. Company executives recognize that people are critical to sustainable organizational growth. Talent is now a C-level priority. People development is a responsibility of all managers and leaders, not just the HR department. Great to hear! Vendors see talent management as a hot space and are strategically lining up to meet business needs — enter IBM!
At a CIO roundtable that Forrester held recently in Sydney, I presented one of my favourite slides (originally seen in a deck from my colleague Ted Schadler) about what has happened in technology since January 2007 (a little over five years ago). The slide goes like this:
[Slide not reproduced here. Source: Forrester Research, 2012]
This makes me wonder: what will the next five years hold for us? Forecasts tend to assume most things remain the same, and I bet few people in 2007 saw all of these changes coming. What unforeseen changes might we see?
Will the whole concept of the enterprise disappear as barriers to entry disappear across many market segments?
Will the next generation reject the “public persona” that is typical in the Facebook generation and perhaps return to “traditional values”?
How will markets respond to the aging consumer in nearly every economy?
How will environmental concerns play out in consumer and business technology purchases and deployments?
How will the changing face of cities change consumer behaviors and demands?
Will artificial intelligence (AI) technologies and capabilities completely redefine business?
Only a few months after I authored Forrester’s "Market Overview: Data Center Infrastructure Management Solutions," significant changes merit some additional commentary.
The major vendor drama of the “season” is the continued evolution of Schneider's and Emerson’s DCIM product rollouts. Since Schneider’s worldwide analyst conference in Paris last week, we now have pretty good visibility into both major vendors' strategies and products. In a nutshell, we have two very large players, both with large installed bases of data center customers, and both selling a vision of an integrated modular DCIM framework. More importantly, it appears that both vendors can deliver on this promise. That is the good news. The bad news is that their offerings overlap heavily, and for most potential customers the choice will be a difficult one. My working theory is that whichever vendor has the larger footprint of installed equipment will have an advantage, and that a lot depends on the relative execution of their field marketing and sales organizations as both companies rush to turn thousands of salespeople and partners loose on the world with these products. This will be a classic market share play, with the smart strategy being to sacrifice margin for market share, since DCIM solutions have a high probability of pulling through services and usually involve some annuity revenue stream from support and update fees.