The app economy is blurring the lines and opening up new opportunities, attracting many new entrants to the mobile space: mobile CRM and analytics, store analytics, dedicated gaming analytics, and more. Players including Flurry, Urban Airship, Crittercism, Kontagent, Trademob, Apsalar, App Annie, and Localytics have collectively raised more than $250 million. Expect a lot of innovation and acquisitions in that space once mobile is more naturally integrated into digital marketing strategies.
On average, mobile now represents more than 20% of overall traffic to websites. For some companies, including many in media, more than half of all visits come via mobile devices. In some countries, such as India, mobile has surpassed PC traffic. Marketers are integrating mobile as part of their marketing mix, but too many have not defined the metrics they’ll use to measure the success of their mobile initiatives. Many lack the tools they need to deeply analyze traffic and behaviors to optimize their performance.
Thirty-seven percent of marketers we surveyed do not have defined mobile objectives. For those who do, goals are not necessarily clearly defined, prioritized, and quantified. Half of marketers surveyed have neither defined key performance indicators nor implemented a mobile analytics solution! Most marketers consider mobile as a loyalty channel: a way to improve customer engagement and increase satisfaction. Marketers must define precisely what they expect their customers to do on their mobile websites or mobile apps, and what actions they would like customers to take, before tracking progress.
For the past ten years, the major IT initiative within Chinese organizations has been service-oriented and/or process-driven architecture. The pace of change has been slow for two reasons: 1) from an end-user perspective, related business requirements are not clear or of high priority; 2) more importantly, solution providers have not been ready to embrace technology innovation and meet emerging technology requirements through new business models.
Times are changing. IBM and other major ISVs/SIs in China (as well as end users) are driving momentum around emerging technologies such as cloud and enterprise mobility. I recently attended the IBM Technical Summit 2013 in Beijing from July 11 to 12. Here’s what I learned:
Telecom carriers supported by technology vendors will accelerate cloud adoption by SMEs. Contributing more than 60% of China's total GDP, small and medium enterprises (SMEs) have always sought to simplify their IT operations as much as possible while also scaling up quickly as business expands. IaaS solutions appear to be a perfect match for SMEs; however, IT professionals have concerns about security and data privacy when operations are handled by other companies.
Yesterday Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become, in many eyes, the virtuous cycle of future infrastructure demand: mobile devices and “the Internet of Things” driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.
Content and Core Topics:
No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to keep increasing demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.
I concluded my March 2013 report on the role of software assets in business innovation by proposing that “The combination of software assets, strong domain expertise, analytics, and as-a-service delivery models will increasingly allow traditional service providers to reinvent the way they deliver business value to their clients.” I was glad to hear that IBM recently announced a deal with L’Oréal that directly supports this position. The announced engagement actually includes all these components:
The procurement domain expertise of IBM Global Business Services addresses business pain points. L’Oréal USA grew rapidly over the past few years via an aggressive acquisition strategy that caused indirect procurement processes to remain highly disparate. The company knew that there was a significant gap between negotiated savings and realized savings in its indirect procurement operations. IBM GBS consultants brought strong procurement expertise to work with L’Oréal’s existing sourcing team to transform existing processes. IBM Global Process Services (GPS) category experts are working with L’Oréal to develop and implement category sourcing strategies.
IBM has just announced that one of Australia’s “big four” banks, the ANZ, will adopt the IBM Watson technology in their wealth management division for customer service and engagement. Australia has always been an early adopter of new technologies but I’d also like to think that we’re a little smarter and savvier than your average geek back in high school in 1982.
IBM’s Watson announcement is significant, not necessarily because of the sophistication of the Watson technology, but because of IBM's ability to successfully market the Watson concept.
To take us all back a little, the term ‘cognitive computing’ emerged in response to the failings of what was once termed ‘artificial intelligence’. Though the underlying concepts have been around for 50 years or more, AI remains a niche and specialist market with limited applications and a significant trail of failed or aborted projects. That’s not to say that we haven’t seen some sophisticated algorithm-based systems evolve. There’s already a good portfolio of large-scale, deep analytic systems developed in the areas of fraud, risk, forensics, medicine, physics, and more.
The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors usually tend to be based on leaks of real discussions as opposed to being completely delusional fantasies. Usually.
So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:
1. IBM is crazy.
2. IBM is not crazy.
Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo amid popular disbelief and dire predictions, and it's doing very well today because it transferred its investments and focus to higher-margin businesses, like servers and services. Lenovo makes low-end servers today that it bootstrapped with IBM-licensed technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on its commodity server business have sunk below some critical internal benchmark for return on investment, and it believes that it can get a better return on its money elsewhere.
In his 1956 dystopian sci-fi novel “The City and the Stars”, Arthur C. Clarke puts forth the fundamental design tenet for making eternal machines: “A machine shall have no moving parts”. To someone from the 1950s, current computers would appear to come close to that ideal: the CPUs and memory perform silent magic and can, with some ingenuity, be passively cooled, and invisible electronic signals carry information in and out of them to networks and … oops, to rotating disks, still with us after more than five decades. But, as we all know, salvation has appeared on the horizon in the form of solid-state storage, so-called flash storage (actually an idea of several decades’ standing as well, just not affordable until recently).
The initial substitution of flash for conventional storage yields immediate gratification in the form of lower power, possibly lower cost if used effectively, and higher performance, but the ripple-effect benefits of flash can be even more pervasive. However, implementing the major architectural changes that flash enables across the whole IT stack is a difficult conceptual challenge for users, and one addressed only piecemeal by most vendors. Enter IBM and its FlashAhead initiative.
What is Happening?
On Friday, April 11, IBM announced a major initiative, to the tune of a spending commitment of $1B, to accelerate the use of flash technology by means of three major programs:
· Fundamental flash R&D
· New storage products built on flash-only memory technology
With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM has already made a major change, replacing its BladeCenter architecture with the new PureSystems, and Cisco’s offering is new enough that it should last at least another three years without a major architectural refresh. That leaves HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.
At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.
Tech Stuff – What Was Announced
Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:
Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in the raw bandwidth of the critical midplane, across which all of the enclosure’s I/O travels. In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and also doubles the available storage bandwidth.
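The 40% figure follows directly from the signaling-rate change; a trivial sanity check of the arithmetic (the variable names here are illustrative, not HP's):

```python
# Sanity check: raising the midplane signaling rate from 10 GHz to 14 GHz
# on the c7000 Platinum enclosure.
old_rate_ghz = 10.0
new_rate_ghz = 14.0

# Relative increase in raw signaling bandwidth.
increase = (new_rate_ghz - old_rate_ghz) / old_rate_ghz
print(f"Raw bandwidth increase: {increase:.0%}")  # prints "Raw bandwidth increase: 40%"
```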
Emerson Network Power today announced that it is entering into a significant partnership with IBM to both integrate Emerson’s new Trellis DCIM suite into IBM’s ITSM products as well as to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:
Connection to enterprise IT — Emerson has sold a lot of chillers, UPS and PDU equipment and has tremendous cachet with facilities types, but they don’t have a lot of people who know how to talk IT. IBM has these people in spades.
IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product seems to be more of a dressed up asset management product, and this partnership is an acknowledgement of the fact that to build a full-fledged DCIM product would have been both expensive and time-consuming.
IBM adds sales bandwidth — My belief is that the development of the DCIM market has been delivery bandwidth constrained. Market leaders Nlyte, Emerson and Schneider do not have enough people to address the emerging total demand, and the host of smaller players are even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
As businesses get larger, and the need for effective alignment of the business with technology capabilities grows, enterprise architecture becomes an essential competency. But in China, many CIOs are struggling with setting up a high-performance enterprise architecture program to support their business strategies in a disruptive market landscape. This seems equally true for state-owned enterprises (SOEs) and multinational companies (MNCs).
To gain a better understanding of the problem, I had an interesting conversation with Le Yao, general secretary of the Center for Informatization and Information Management (CIIM) and director of the CIO program at Peking University. Le Yao was among the first to introduce The Open Group Architecture Framework (TOGAF) into China to help address these challenges. I believe that TOGAF’s five-year journey in China is just the beginning for EA, and companies in the China market need relevant EA insights to help them support their business:
Taking an EA course is one thing; practicing EA is something else. Companies taking TOGAF courses in China seem to be aiming more at sales enablement than practicing EA internally. MNCs like IBM, Accenture, and HP are more likely to try to infuse the essence of the methodology into their PowerPoint slides for marketing and/or bidding purposes; IBM has also invited channel partners such as Neusoft, Digital China, CS&S, and Asiainfo to take the training.
TOGAF is too high-level to be relevant. End-user trainees learning the enterprise architecture framework that Yao’s team introduced in China in 2007 found it too high-level and conceptual. Also, the trainers only went through what was written in the textbook, without industry-specific cases or practice-related information, making the training less relevant and difficult to apply.