I recently had a meeting with executives from Tech Mahindra, an India-based IT services company, which was refreshing both for the candor with which they discussed the overall mechanics of a support and integration model with significant components located half a world away, and for their insights on the realities and limitations of automation, one of the hottest topics in IT operations today.
On the subject of the mechanics and process behind their global integration engagements, the eye-opener for me was the depth of internal process behind them. The common (possibly only common in my mind, since I have had less exposure to these companies than some of my peers) mindset of “develop the specs, send them off, and receive code back” is no longer even remotely accurate. Performing a successful complex integration project takes a reliable set of processes that can link the efforts of the roughly 20% to 40% of the staff on-site with the client to the supporting teams back in India. That, plus a massive investment in project management, development frameworks, and collaboration tools, is a hallmark of all of the successful Indian service providers.
From the client I&O group's perspective, the relationship between the outsourcer and internal groups becomes much more than an arm's-length process; it becomes a tightly integrated team in which the main visible differentiator is who pays each person's salary rather than any strict team, task, or function boundary. For the integrator, this is a strong positive, since it makes it difficult for the client to disengage and gives the teams early knowledge of changes and new project opportunities. From the client side there are both drawbacks and benefits – disengagement is difficult, but knowledge transfer is tightly integrated and efficient.
So far the latter seems to be the prevailing trend as the majority of public cloud platforms and private cloud software solutions start with the foundation of server virtualization. The bare metal options are being positioned more for two purposes:
Auto-provisioning new nodes of the cloud - bare metal installation of the cloud solution and the hypervisor.
New compute resource types in the cloud - using new automation capabilities to add a complete physical server to a customer’s cloud tenancy, as if it were just another virtual machine.
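The two use cases above can be sketched as a single automated workflow that differs only in its final step. This is a minimal, purely illustrative toy in Python – the class, step names, and functions are my own hypothetical constructs, not any vendor's actual provisioning API:

```python
# Illustrative sketch only: a toy state machine for the two bare-metal
# automation use cases. All names here are hypothetical, not a real API.

class BareMetalNode:
    """Tracks a physical server through an automated provisioning flow."""

    STEPS = ["discovered", "pxe_booted", "image_installed", "configured", "in_service"]

    def __init__(self, serial):
        self.serial = serial
        self.state = "discovered"

    def advance(self):
        """Move the node to the next provisioning step, in order."""
        i = self.STEPS.index(self.state)
        if i < len(self.STEPS) - 1:
            self.state = self.STEPS[i + 1]
        return self.state


def provision_as_cloud_node(node):
    """Use case 1: bare-metal install of the cloud stack and hypervisor."""
    while node.state != "in_service":
        node.advance()
    return node.state


def provision_into_tenancy(node, tenant):
    """Use case 2: hand the whole physical box to a tenant, VM-style."""
    provision_as_cloud_node(node)
    return {"tenant": tenant, "node": node.serial, "state": node.state}
```

The point of the sketch is that the automation pipeline is identical in both cases; only the ownership of the finished node differs.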
I recently spent an hour with Hewlett-Packard executive Stephen DeWitt, a longtime leader at the company who currently leads HP’s enterprise marketing efforts. I wanted to learn more about the value proposition of products and services HP is selling to infrastructure & operations professionals and to understand HP’s vision of the future for enterprise customers.
“It’s easy to think of HP as a ‘PC and printing’ company – and we’re obviously a huge player in those traditional product areas – but we have a broader vision for enterprises and for workers…all built around the new style of IT,” Stephen told me. “Our new enterprise campaign, for example, is going to introduce people to the degree of breakthrough innovation we are providing customers today, and how co-innovating with HP can empower your business in the dramatically changing world ahead.”
Q: What’s HP’s overall vision for enterprise solutions? How do you make that vision tangible and concrete for your customers?
HP is a portfolio company, from core to periphery, from the cloud to the device. We work very closely with our customers to provide end-to-end solutions rather than just ad hoc or best-of-breed products, and we focus on solving for business outcomes and co-innovating with our customers.
Yesterday Intel had a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become in many eyes the virtuous cycle of future infrastructure demand – mobile devices and “the Internet of Things” driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.
Content and Core Topics:
No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to continue increasing demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.
Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise 11 SP3, which is representative of the latest feature sets from the Linux 3.0 et seq. kernel available to the entire Linux community, including SUSE, Red Hat, Canonical, and others. It is apparent, both from the details on SUSE 11 SP3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that previously were only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like a top-tier UNIX from only a couple of years ago.
Among the enterprise technology that caught my eye:
Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS, including per-block checksums, copy-on-write (CoW), snapshots, and advanced logical volume management with thin provisioning, among others. The latest releases also include advanced features like geoclustering and remote data replication to support advanced HA topologies.
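The per-block checksum feature mentioned above is worth a closer look, since it is what lets btrfs (and ZFS) detect silent data corruption that a traditional file system would pass through unnoticed. The following is a conceptual Python sketch of the idea, not btrfs code – it stores a checksum per block at write time and verifies it on every read:

```python
# Conceptual sketch (not btrfs code): per-block checksums of the kind btrfs
# stores alongside data, letting the file system detect silent corruption.
import zlib

BLOCK = 4096  # bytes per block, analogous to a file system block size


def write_blocks(data):
    """Split data into fixed-size blocks and record a CRC32 per block."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    sums = [zlib.crc32(b) for b in blocks]
    return blocks, sums


def read_block(blocks, sums, n):
    """Verify the stored checksum before returning a block's contents."""
    if zlib.crc32(blocks[n]) != sums[n]:
        raise IOError(f"checksum mismatch in block {n}: corruption detected")
    return blocks[n]
```

A real file system uses stronger checksums and stores them in metadata trees, but the read-path behavior is the same: a mismatch turns silent corruption into a detectable I/O error, which CoW snapshots and replication can then recover from.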
Ten years ago, open source software (OSS) was more like a toy for independent software vendors (ISVs) in China: Only the geeks in R&D played around with it. However, the software industry has been developing quickly in China throughout the past decade, and technology trends such as service-oriented architecture (SOA), business process management (BPM), cloud computing, the mobile Internet, and big data are driving much broader adoption of OSS.
OSS has become a widely used element of firms’ enterprise architecture. For front-end application architecture on the client side, various open source frameworks, such as jQuery and ExtJS, have been incorporated into many ISVs’ front-end frameworks. On the server side, OSS like Node.js is becoming popular with ISVs in China for its high Web throughput capabilities. From an infrastructure and information architecture perspective, open source offerings like OpenStack, CloudStack, and Eucalyptus have been piloted by major telecom carriers including China Telecom and China Unicom, as well as information and communication solution providers like Huawei and IT service providers like CIeNET. To round this out, many startup companies are developing solutions based on MongoDB, an open source NoSQL database.
Familiarity with OSS is becoming a necessary qualification for software developers and product strategy professionals. Because of the wide usage of OSS among both vendors and end users, hands-on experience and extensive knowledge of OSS is becoming not only a necessary qualification for software engineers but also an important factor for product strategy professionals seeking to establish appropriate product road maps and support their business initiatives.
The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors usually tend to be based on leaks of real discussions as opposed to being completely delusional fantasies. Usually.
So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:
1. IBM is crazy.
2. IBM is not crazy.
Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo amid popular disbelief and dire predictions, and it is doing very well today because it transferred its investments and focus to higher-margin businesses, like servers and services. Lenovo today makes low-end servers that it bootstrapped with licensed IBM technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on IBM's commodity server business have sunk below some critical internal benchmark for return on investment, and it believes that it can get a better return on its money elsewhere.
In recent research, I have laid out some similarities and differences between tablets and laptops. But the tablet market is growing ever more fragmented, yielding subtleties that aren’t always captured with a simple “PC vs. tablet” dichotomy. As Infrastructure & Operations (I&O) professionals try to determine the composition of their hardware portfolios, the product offerings themselves are more protean. Just describing the “tablet” space is much harder than it used to be. Today, we’re looking at multiple OSes (iOS, Android, Windows, BlackBerry, forked Android), form factors (eReader, tablet, hybrid, convertible, touchscreen laptop), and screen sizes (from 5” phablets to giant 27” furniture tablets) – not to mention a variety of brands, price points, and applications. If, as rumored, Microsoft were to enter the 7” to 8” space – competing with the Google Nexus, Apple iPad Mini, and Kindle Fire HD – we would see even more permutations. Enterprise-specific – some vertically specific – devices are proliferating alongside increased BYO choices for workers.
HP today announced the Moonshot 1500 server, its first official volume product in the Project Moonshot server family (the initial Redstone, a Calxeda ARM-based server, was only available in limited quantities as a development system). It represents both a significant product today and a major stake in the ground for future products, both from HP and eventually from competitors. Its initial attraction – an extreme-density, low-power x86 server platform for a variety of low-to-midrange CPU workloads – hides the fact that it is probably a blueprint both for a family of future products from HP and for similar products from other vendors.
Geek Stuff – What was Announced
The Moonshot 1500 is a 4.3U enclosure that can contain up to 45 plug-in server cartridges, each a complete server node with a dual-core Intel Atom S1200-series CPU, up to 8 GB of memory, and a single disk or SSD of up to 1 TB; the servers share common power supplies and cooling. But beyond the density, the real attraction of the MS1500 (my acronym, not an official HP label) is its scalable fabric and CPU-agnostic architecture. Embedded in the chassis are multiple fabrics for storage, management, and networking, giving the MS1500 some of the advantages of a blade server without the advanced management capabilities. At initial shipment, only the network and management fabrics will be enabled by the system firmware, with each chassis having up to two Gb Ethernet switches (technically they can be configured with one, but nobody will do so), allowing the 45 servers to share uplinks to the enterprise network.
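To put the density claim in perspective, a quick back-of-the-envelope calculation from the figures above – the 42U rack height is my assumption for a standard rack, not an HP specification:

```python
# Back-of-the-envelope density math from the Moonshot 1500 figures above.
# Assumes a standard 42U rack; HP's own racking guidance may differ.
CHASSIS_U = 4.3          # height of one Moonshot 1500 enclosure
NODES_PER_CHASSIS = 45   # server cartridges per enclosure
RACK_U = 42              # assumed standard rack height

chassis_per_rack = int(RACK_U // CHASSIS_U)        # whole chassis per rack
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
print(chassis_per_rack, nodes_per_rack)            # 9 chassis, 405 nodes
```

Roughly 400 independent server nodes per rack illustrates why this class of system is positioned for scale-out, low-to-midrange workloads rather than as a blade replacement.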