Far from the press coverage and hysteria surrounding emerging cloud architectures and customer-facing systems of engagement lurk the nitty-gritty operational details of legacy infrastructure, like monsters in a swamp. Some of them have teeth, and those teeth can take a real bite out of the posterior of an unprepared organization.
One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million physical WS2003 systems running today, with another 10+ million instances running as VM guests – in total, possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really will end support and updates in July 2015.
Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor is it stupid. WS2003 servers are usually legacy servers, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but it is often a LOB-specific application, with the run-of-the-mill collaboration and infrastructure workloads having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners – often a complex task for an old resource in a large company.
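The first step most clients describe is exactly this kind of triage: filter the inventory for WS2003 instances and flag the ones with no recorded business owner. A minimal sketch, assuming a hypothetical CMDB export with invented column names:

```python
import csv
import io

# Hypothetical CMDB export; the columns and hostnames are invented for illustration.
cmdb_export = """hostname,os_name,owner,location
app-legacy-01,Windows Server 2003,,satellite-A
web-prod-02,Windows Server 2012,ecommerce,HQ
lob-finance-03,Windows Server 2003 R2,,acquired-co
"""

def find_ws2003(rows):
    """Return WS2003 hosts, flagging those with no recorded business owner."""
    hits = []
    for row in rows:
        if "2003" in row["os_name"]:
            row["owner_known"] = bool(row["owner"].strip())
            hits.append(row)
    return hits

servers = find_ws2003(csv.DictReader(io.StringIO(cmdb_export)))
for s in servers:
    print(s["hostname"], "owner known:", s["owner_known"])
```

In practice the hard part is not the filtering but chasing down the empty `owner` fields – which is why clients report knowing the servers long before they know the applications or owners.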
I’ve noticed a growing trend among Asia Pacific organizations over the past 6-12 months: complete IT resistance to SaaS has steadily given way to more pragmatic discussions, even if IT has come to the table grudgingly. Over the next two years I expect this trend to accelerate. In fact, I believe that many SaaS solutions, particularly those that cross business and functional boundaries, will be rapidly subsumed within the broader IT portfolio, even if they were originally sourced outside IT.
Many SaaS vendors report already seeing more IT involvement in procurement, requirements definition, RFP creation, and negotiations. The clear procurement guidelines published by the IT department of the Australian Government Information Management Office (AGIMO) are one high-profile example. Don’t get me wrong: in most instances business decision-makers will still lead, particularly in identifying the required business processes and determining how best to consume SaaS-based services. But IT decision-makers are getting more involved, particularly around integration.
Some areas to consider as you look to work more closely with business decision-makers to evaluate and negotiate SaaS and other public cloud deals:
Retail is experiencing substantial change because consumers are now empowered by the web with information about price, availability, and merchandise features.
The retail industry is still served by solutions that are too fragmented to adequately balance the asymmetry introduced by radical price transparency. There are solutions for transactions, web site, stores, and so on but little to empower the cross-channel retailer to really meet the consumer’s needs.
I’ve recently been looking at IBM’s Smarter Commerce initiative and its portfolio that integrates:
1) Store applications. IBM has well-established store apps appropriate to high-volume, low-touch retailing but correctly identifies these as inappropriate for fast-growing specialty retail with low-volume “high touch.” This is why it acquired the “asset” of Open Genius.
2) Web metrics. IBM acquired Coremetrics in order to bring the discipline of measuring traffic, conversion, and average order to cross-channel retailing. It’s only by monitoring such metrics that retail can understand which marketing strategies are really successful and which market segments are most receptive.
3) Direct-to-consumer initiatives. IBM acquired Unica as a platform for integrating automated direct-to-consumer marketing with its cross-channel offering.
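The metrics discipline in point 2 comes down to a handful of ratios. A quick illustration of the kind of figures Coremetrics-style analytics track – the numbers here are invented:

```python
# Illustrative funnel metrics: traffic, conversion rate, and average order
# value (AOV). All figures are invented for the example.
sessions = 120_000          # site visits in the period
orders = 3_600              # completed purchases
revenue = 252_000.00        # total order revenue

conversion_rate = orders / sessions        # share of visits that convert
average_order_value = revenue / orders     # revenue per order
revenue_per_session = revenue / sessions   # combines both levers

print(f"conversion: {conversion_rate:.1%}")            # 3.0%
print(f"AOV: ${average_order_value:.2f}")              # $70.00
print(f"revenue/session: ${revenue_per_session:.2f}")  # $2.10
```

Tracking these by marketing campaign and customer segment is what lets a retailer see which strategies actually work and which segments are most receptive.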
I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, they are the only successful offering (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.
Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.
Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.
Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, with a couple of dozen publicly referenced. All in all, probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4–8 systems, well short of their theoretical limits.
My quick take is that the software works, and bears investigation if you have an application that:
Is either certified to run with ScaleMP (not many are) or is one where you control the code, and
Whose memory reference patterns you understand.
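Why memory reference patterns matter so much can be seen with a toy Amdahl-style model (my own illustration, not ScaleMP's numbers): each aggregated node adds compute, but every memory reference that lands on a remote node pays a latency penalty.

```python
def aggregated_smp_speedup(nodes, remote_fraction, remote_penalty):
    """Toy model of scaling on a software-aggregated SMP.

    remote_fraction: share of memory references that hit a remote node (0..1)
    remote_penalty:  cost of a remote reference relative to a local one
    The average cost per reference inflates the work each node must do.
    """
    per_ref_cost = (1 - remote_fraction) + remote_fraction * remote_penalty
    return nodes / per_ref_cost

# Locality-friendly workload: 8 nodes still deliver most of their potential.
print(round(aggregated_smp_speedup(8, 0.02, 20), 2))  # 5.8
# Poor locality: the same 8 nodes are swamped by remote references.
print(round(aggregated_smp_speedup(8, 0.30, 20), 2))  # 1.19
```

The model is crude, but it shows why an application whose reference patterns you understand (and can keep local) is the right candidate, and why benchmarks look good at 4–8 systems yet scaling beyond that gets progressively harder.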
Recently, I discussed complexity with a banker working on measuring and managing complexity in a North American bank. His approach is very interesting: he found a way to operationalize complexity measurement and thus to provide concrete data to manage it. While I’m not in a position to disclose any more details, we also talked about the nature of complexity. In the absence of any other definition, I offered a draft definition that I have assembled over time from a number of “official” definitions of what conditions constitute complexity.
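To make "operationalize complexity measurement" concrete: one common approach (my own sketch, not the banker's undisclosed method) is to treat the application landscape as a dependency graph and count components, interconnections, and heavily shared hotspots. The app names below are invented.

```python
from collections import defaultdict

# Hypothetical application landscape: app -> apps it depends on.
dependencies = {
    "core_banking": ["payments", "ledger"],
    "payments": ["ledger", "fraud_check"],
    "ledger": [],
    "fraud_check": ["ledger"],
    "crm": ["core_banking", "payments"],
}

def structural_complexity(deps):
    """One simple way to operationalize complexity: count components,
    interconnections, and components with more than one inbound dependency."""
    nodes = len(deps)
    edges = sum(len(targets) for targets in deps.values())
    fan_in = defaultdict(int)
    for targets in deps.values():
        for t in targets:
            fan_in[t] += 1
    hotspots = sorted(a for a, n in fan_in.items() if n > 1)
    return {"components": nodes, "dependencies": edges, "hotspots": hotspots}

print(structural_complexity(dependencies))
```

Numbers like these are crude, but they turn complexity from an abstract complaint into a trend you can track release over release – which is the essence of managing it with concrete data.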
Jean-Jacques Rousseau wrote that man is born free, and everywhere he is in chains. So too with enterprise app deployments: they are conceived as self-contained, yet everywhere they are integrated with legacy and complementary apps.
My colleague Ken Vollmer and I are looking at packaged apps integration best practices and how these might change as some apps move to the cloud. We are asking:
What kind of middleware do you use?
How do you help process owners to assemble (composite) processes that have transactional integrity?
What do you do about the conflicting data models of apps from different stables – for example, yours and those of a third party, or perhaps in-house apps?
How far can so-called “canonical” data models and metadata help to overcome such problems?
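The canonical-model idea in that last question can be sketched in a few lines: each app's record is translated to and from one shared (canonical) shape, so N apps need N mappings instead of N×(N−1) point-to-point translations. The field names below are invented for illustration.

```python
# Sketch of the canonical-model approach. Two source apps (an ERP and a CRM)
# with conflicting customer schemas map through one canonical shape.

def erp_to_canonical(rec):
    """Map a hypothetical ERP customer record to the canonical shape."""
    return {"customer_id": rec["CUST_NO"], "name": rec["CUST_NAME"]}

def crm_to_canonical(rec):
    """Map a hypothetical third-party CRM record to the canonical shape."""
    return {"customer_id": rec["id"], "name": rec["displayName"]}

def canonical_to_crm(rec):
    """Map the canonical shape back into the CRM's schema."""
    return {"id": rec["customer_id"], "displayName": rec["name"]}

# An ERP record reaches the CRM via the canonical form only - neither app
# ever needs to know the other's schema.
erp_record = {"CUST_NO": "C-1001", "CUST_NAME": "Acme Pty Ltd"}
canonical = erp_to_canonical(erp_record)
print(canonical_to_crm(canonical))
```

Metadata helps to the extent that these mappings can be generated from schema descriptions rather than hand-coded – which is precisely where the "how far" in the question above starts to bite.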
If you have experience and an opinion about what constitute the top three best practices in such packaged apps integration, or if you can warn about the three most egregious pitfalls to avoid, we would love to talk with you.