HP was the first US company to create a joint venture subsidiary in China; three decades later, the vendor has become a major player in the country’s consumer and enterprise markets. Among enterprises, HP has strong brand awareness for its server products and services, traditional software solutions, and IT services, but rather less for holistic application life-cycle management (ALM), especially on the mobile side. I think it’s time for technology decision-makers and enterprise architects to seriously consider adopting mobile app delivery management solutions and to evaluate HP for that purpose. Here’s why:
HP’s portfolio now covers the entire mobile app life cycle. The products HP will bring to market as part of its latest strategy will eventually cover the entire mobile application life cycle, from app design, development, and optimization to distribution and monitoring. For example, at the design stage, HP Anywhere, which is based on the popular open source Eclipse platform, allows developers to write once for multiple devices within its integrated development environment. And its service virtualization feature can help virtualize third-party cloud services and make them consumable across each layer of the system architecture, including web servers, application servers, and web services.
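To make the service virtualization concept concrete, here is a minimal sketch of the idea in Python: a local stub that stands in for a third-party cloud service and returns canned responses, so each layer of the stack can be tested without the real dependency. This is purely illustrative of the pattern, not HP's implementation; the endpoints and payloads are hypothetical.

```python
# Minimal sketch of service virtualization: a local stub that mimics a
# third-party cloud service with canned responses. Illustrative only --
# not HP's implementation. Endpoints and payloads are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/rates": {"currency": "CNY", "rate": 6.15},      # hypothetical payload
    "/status": {"service": "payments", "up": True},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown path"}).encode())

if __name__ == "__main__":
    # Point the app under test at http://localhost:8080 instead of the
    # real third-party endpoint.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```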
HP’s solution has rich optimization features suitable for Chinese enterprises. At the mobile app optimization stage, HP’s Mobile Center takes a comprehensive approach to functionality, interoperability, usability, performance, and security in order to consolidate and automate mobile testing. Mobile Center is integrated with LoadRunner, one of the most popular performance engineering tools in the Chinese market.
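For a flavor of the kind of measurement that LoadRunner-class tools automate at much larger scale, here is a toy performance probe; the URL is a placeholder (it happens to point at the stub sketched above), and real tools add scenario scripting, protocol emulation, and thousands of virtual users.

```python
# Illustrative only: a toy probe in the spirit of what LoadRunner-class
# tools automate at scale. It times repeated requests to an endpoint
# (URL is a placeholder) and reports latency percentiles.
import time
import urllib.request

URL = "http://localhost:8080/status"  # placeholder endpoint
SAMPLES = 50

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=5).read()
    latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

latencies.sort()
for pct in (50, 90, 99):
    idx = min(len(latencies) - 1, int(len(latencies) * pct / 100))
    print(f"p{pct}: {latencies[idx]:.1f} ms")
```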
While the timing of the event comes as a surprise, the fact that IBM has decided to unload its technically excellent but unprofitable semiconductor manufacturing operation does not, nor does its choice of GlobalFoundries, with which it has had a longstanding relationship.
A year ago, I blogged about the fact that the app economy was blurring the lines and opening up new opportunities, with a lot of new entrants in the mobile space, whether in mobile CRM and analytics, store analytics, or dedicated gaming analytics.
Since 2010, more than 40 companies have raised about $500 million in that space! Watch it closely – consolidation will continue, as evidenced recently by Yahoo’s acquisition of Flurry.
While a lot of innovation is happening on the supply side, too many marketers have not defined the metrics they’ll use to measure the success of their mobile initiatives. Many lack the tools they need to deeply analyze traffic and behaviors in order to optimize their performance.
Fifty-seven percent of the marketers we surveyed do not have defined mobile objectives, and even those who do have not necessarily made their goals clear, prioritized, and quantified. Only 38% of marketers surveyed use a mobile analytics solution! Most marketers consider mobile a loyalty channel: a way to improve customer engagement and increase satisfaction. Before tracking progress, marketers must define precisely what they expect customers to do on their mobile websites or mobile apps and what actions they would like customers to take. Too many focus on traffic and app downloads rather than usage and time spent: while 30% of marketers surveyed consider increasing brand awareness a key objective for their mobile initiatives, only 16% have defined it as a key metric to measure their success!
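To make the point concrete, here is a small sketch of what "defining the metric" looks like in practice: computing usage and time spent from raw event data rather than stopping at download counts. The event schema is hypothetical.

```python
# A sketch of the discipline the survey numbers argue for: define the
# behavior you want (usage and time spent, not just downloads) and
# compute it from event data. The event schema here is hypothetical.
events = [
    {"user": "a", "event": "download", "seconds": 0},
    {"user": "a", "event": "session", "seconds": 120},
    {"user": "b", "event": "session", "seconds": 30},
    {"user": "c", "event": "download", "seconds": 0},
]

downloads = sum(1 for e in events if e["event"] == "download")
sessions = [e for e in events if e["event"] == "session"]
active_users = {e["user"] for e in sessions}
avg_time = sum(e["seconds"] for e in sessions) / len(sessions)

print(f"downloads: {downloads}")             # the vanity metric
print(f"active users: {len(active_users)}")  # actual usage
print(f"avg engaged time: {avg_time:.0f} s") # time spent
```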
Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.
One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million physical WS2003 systems running today, with another 10+ million instances running as VM guests. Overall, that is possibly more than 20 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.
Based on the conversations I have been having with our clients, the typical I&O group now scrambling to come up with a plan has not been willfully negligent, nor is it stupid. WS2003 servers are usually legacy servers quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, often an LOB-specific application, with the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers but not always the applications or the business owners, which is often a complex task for an old resource in a large company.
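For teams starting that triage, a useful first pass is simply flagging the WS2003 entries in an inventory export that still lack an identified application or business owner. Here is a minimal sketch; the file name and column names are hypothetical.

```python
# A sketch of the triage clients describe: flag Windows Server 2003
# instances whose application or business owner is still unknown.
# The inventory file and its column names are hypothetical.
import csv

unowned = []
with open("server_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        if "2003" in row.get("os", ""):
            if not row.get("application") or not row.get("owner"):
                unowned.append(row["hostname"])

print(f"{len(unowned)} WS2003 servers with no identified app or owner:")
for host in unowned:
    print(" ", host)
```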
It's never been as challenging for global companies in China as it is right now. First, we've seen a continuous stream of news about the Chinese government imposing greater regulatory oversight, starting with the cybersecurity vetting of IT products that relate to national security and the public interest in May. Second, leading Chinese Internet companies equipped with emerging technology, such as Alibaba, Baidu, and Tencent, are engaging consumers with enriched products and services, expanding into the enterprise business via innovative business models, and extending their reach from tier-one and tier-two cities down to tier-three through tier-six ones.
To gain extensive geographic and vertical coverage in the huge market that is China, vendors have had to engage with partner ecosystems for business operations. Now, it’s even more critical for multinational corporations to enable their local alliances to overcome these disruptions and achieve mutually beneficial strategic business growth. Some vendors have already started doing so, with IBM being a leading example. Its initiatives include:
Launching a strategic partnership with Yonyou. On September 13, 2014, IBM announced the start of its strategic cooperation with Yonyou during the latter's 2014 user conference. IBM will optimize DB2 with BLU Acceleration for various Yonyou products, such as NC (Yonyou’s ERP offering) and its supply chain management, customer relationship management, and human resources management products. In return, Yonyou will offer NC on top of DB2 with BLU Acceleration to its customers, based on its evaluation of IBM’s product in June 2013.
Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise Server 11 SP3, which is representative of the latest feature sets from the Linux 3.0 and later kernels available to the entire Linux community, including SUSE, Red Hat, Canonical, and others. It is apparent, both from the details of SLES 11 SP3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that were previously only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.
Among the enterprise technologies that caught my eye:
Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
I/O – The Linux kernel now includes btrfs (a geeky contraction of “better file system”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write (CoW), snapshots, and advanced logical volume management with thin provisioning. The latest releases also include advanced features like geoclustering and remote data replication to support advanced HA topologies.
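For readers who want to see the snapshot capability in action, here is a minimal sketch that wraps the standard btrfs command-line tool; the paths are placeholders, and it assumes root privileges on a btrfs-formatted volume.

```python
# Illustrative only: btrfs's copy-on-write design makes snapshots nearly
# instantaneous, which is what this small wrapper exercises. Paths are
# placeholders; requires root and a btrfs-formatted volume.
import subprocess
from datetime import datetime

def snapshot(subvolume: str, snap_dir: str) -> str:
    """Create a read-only, point-in-time snapshot of a btrfs subvolume."""
    target = f"{snap_dir}/snap-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(
        ["btrfs", "subvolume", "snapshot", "-r", subvolume, target],
        check=True,
    )
    return target

if __name__ == "__main__":
    print("created", snapshot("/data", "/data/.snapshots"))
```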
I'm at IDF, a major geekfest for people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to add some comments on its implications for data center operations, particularly in the areas of capacity planning and long-term capital budgeting.
For many years, each successive iteration of Intel’s and its partners’ roadmaps has quietly delivered a major benefit that seldom gets top billing: additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.
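The back-of-the-envelope arithmetic behind that benefit is simple but powerful. Here is a sketch with purely hypothetical numbers (not Intel's figures) showing how a per-generation performance gain within a fixed power envelope translates into deferred spending.

```python
# Back-of-the-envelope arithmetic behind "same footprint, more capacity."
# All numbers are hypothetical, not Intel's figures.
rack_power_budget_w = 10_000   # fixed per-rack power envelope
server_power_w = 500           # per server, roughly constant per generation
perf_per_server_old = 1.0      # normalized, previous generation
gen_perf_gain = 1.5            # assumed per-generation performance uplift

servers_per_rack = rack_power_budget_w // server_power_w
old_capacity = servers_per_rack * perf_per_server_old
new_capacity = old_capacity * gen_perf_gain

print(f"servers per rack: {servers_per_rack}")
print(f"capacity in same footprint: {old_capacity:.0f} -> {new_capacity:.0f}")
print(f"headroom before new racks are needed: {new_capacity / old_capacity:.1f}x")
```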
A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color to the analysis. The report is an excellent synthesis of our analysis, the work of a talented team of collaborators with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths – the cheaper booths on the edge of the floor where smaller startups congregate – which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum – the very small and the very large as they apply to x86 servers. The range, and the variation in intended workloads, is pretty spectacular as we diverge from the mean, which for the enterprise means a two-socket Xeon server, usually in a 1U or 2U form factor.
At the bottom, we find really tiny embedded servers, some with very nontraditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.
Slightly bigger than a credit card, their BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload lives in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster, with up to 4,000 of them mounted in a standard rack, it’s likely that these will remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility, a range that runs from military systems to portable desktop modules.
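Out of curiosity, the aggregate numbers for such a rack are easy to run. The RAM figure comes from the spec above; the per-node power draw is my assumption.

```python
# What a 4,000-node rack of card-size servers adds up to, using the
# per-node spec cited above (8 GB RAM) and an assumed power draw.
nodes = 4000
ram_gb_per_node = 8    # from the BioDigital spec above
watts_per_node = 10    # assumption: Atom-class, very low power

print(f"aggregate RAM: {nodes * ram_gb_per_node / 1024:.0f} TB")
print(f"aggregate power: {nodes * watts_per_node / 1000:.0f} kW (assumed)")
```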
Companies understand the urgency of ramping up their business technology (BT) capabilities to help the business innovate and grow. Increasingly, they realize that they cannot do this alone and will require partners that can help deliver agile services that bring fast, predictable outcomes to the business. For instance, Bharat Light and Power (BLP), one of the largest clean energy generation companies in India, signed a 10-year engagement with IBM in late 2013 to build a new business capability that aims at nothing short of transforming the utility sector in India. In a few words (more details are available in this report), BLP and IBM are creating an open energy service platform that will help BLP understand how to optimize the utilization of its wind turbines. The really interesting part for me lies in the way the company intends to leverage the information generated by this platform as the basis of its competitive advantage. The energy service platform will indeed act as an expertise repository that BLP can leverage to:
Increase the value of its own assets. As the company operates, grows, and optimizes its own asset efficiency, it learns how the climate, power grid, and wind turbines influence a vital business metric for a utility company: the plant load factor (PLF). This will allow the company to generate more revenues from its existing assets.
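For readers unfamiliar with the metric: PLF is conventionally the ratio of the energy a plant actually generates to the maximum it could generate over a period. A quick worked example with hypothetical numbers makes it concrete.

```python
# Plant load factor (PLF): actual energy generated divided by the
# maximum possible over a period. All numbers here are hypothetical,
# purely to make the metric concrete.
capacity_mw = 50                 # installed wind capacity
hours = 24 * 365                 # one year
actual_generation_mwh = 96_000   # metered output for the year

max_possible_mwh = capacity_mw * hours   # 438,000 MWh
plf = actual_generation_mwh / max_possible_mwh

print(f"PLF: {plf:.1%}")  # ~21.9% in this example
```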