Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.
One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million physical WS2003 systems running today, with another 10+ million instances running as VM guests. In all, that is possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.
Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor are they stupid. Usually WS2003 servers are legacy servers, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but it is often a LOB-specific application, with the run-of-the-mill collaboration, infrastructure, and similar servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners - often a complex task for an old resource in a large company.
I’ve noticed a growing trend among Asia Pacific organizations over the past 6-12 months: complete IT resistance to SaaS has steadily given way to more pragmatic discussions, even if IT has come to the table grudgingly. Over the next two years I expect this trend to accelerate. In fact, I believe that many SaaS solutions, particularly those that cross business and functional boundaries, will be rapidly subsumed within the broader IT portfolio, even if they were originally sourced outside IT.
Many SaaS vendors report already seeing more IT involvement in procurement, requirements definition, RFP creation, and negotiations. The clear procurement guidelines published by the Australian Government Information Management Office (AGIMO) are one high-profile example. Don't get me wrong: in most instances business decision-makers will still lead, particularly in identifying the required business processes and determining how best to consume SaaS-based services. But IT decision-makers are getting more involved, particularly around integration.
Some areas to consider as you look to work more closely with business decision-makers to evaluate and negotiate SaaS and other public cloud deals:
Retail is experiencing substantial change because consumers are now empowered by the web with information about price, availability, and merchandise features.
The retail industry is still served by solutions that are too fragmented to adequately balance the asymmetry introduced by radical price transparency. There are solutions for transactions, websites, stores, and so on, but little to empower the cross-channel retailer to really meet the consumer's needs.
I’ve recently been looking at IBM’s Smarter Commerce initiative and its portfolio that integrates:
1) Store applications. IBM has well-established store applications appropriate to high-volume, low-touch retailing but correctly identifies these as inappropriate for fast-growing specialty retailers doing low-volume, "high-touch" selling. This is why it acquired the "asset" of Open Genius.
2) Web metrics. IBM acquired Coremetrics in order to bring the discipline of measuring traffic, conversion, and average order to cross-channel retailing. It’s only by monitoring such metrics that retail can understand which marketing strategies are really successful and which market segments are most receptive.
3) Direct-to-consumer initiatives. IBM acquired Unica as a platform for integrating automated direct-to-consumer marketing with its cross-channel offering.
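The metrics in point 2 are simple ratios, but they are the foundation of the cross-channel discipline Coremetrics brought to IBM. As a minimal sketch (generic retail formulas and made-up example numbers, not Coremetrics' actual API or data):

```python
# Illustrative calculation of the core retail metrics mentioned above:
# conversion rate and average order value (AOV). The function names and
# sample figures are assumptions for the example, not vendor specifics.

def conversion_rate(orders: int, visits: int) -> float:
    """Fraction of site visits that end in a completed order."""
    return orders / visits if visits else 0.0

def average_order_value(revenue: float, orders: int) -> float:
    """Revenue earned per completed order."""
    return revenue / orders if orders else 0.0

# Example: one week of traffic for a hypothetical channel.
visits, orders, revenue = 120_000, 3_600, 270_000.00

print(f"conversion: {conversion_rate(orders, visits):.1%}")        # 3.0%
print(f"AOV:        ${average_order_value(revenue, orders):.2f}")  # $75.00
```

Tracking these per channel and per market segment is what lets a retailer see which marketing strategies actually move the numbers, as the post argues.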
I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, they are the only successful offering (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.
Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.
Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.
Does it work? I haven't been able to verify their claims with actual customers, but they have been selling for about five years, claim over 200 accounts, and have a couple of dozen publicly referenced. All in all, probably too elaborate a front to maintain if there was really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4-8 systems, well short of their theoretical limits.
My quick take is that the software works, and bears investigation if you have an application that:
Either is certified to run with ScaleMP (not many are), or is one where you control the code.
You understand the memory reference patterns of the application, and
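The memory-reference-pattern caveat is the crux: on an aggregated SMP, any reference that misses the local node must traverse the interconnect, which is far slower than local DRAM. A back-of-the-envelope model shows why even a small remote fraction hurts (all latency figures here are illustrative assumptions, not ScaleMP measurements):

```python
# Toy model of average memory latency on an aggregated SMP.
# LOCAL_NS and REMOTE_NS are assumed round numbers for local DRAM
# versus cross-node interconnect access; real values vary by hardware.

LOCAL_NS = 100      # assumed local DRAM latency (ns)
REMOTE_NS = 2_000   # assumed cross-node latency (ns)

def effective_slowdown(remote_fraction: float) -> float:
    """Average memory latency relative to an all-local workload."""
    avg = (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS
    return avg / LOCAL_NS

for frac in (0.01, 0.05, 0.20):
    print(f"{frac:.0%} remote references -> "
          f"{effective_slowdown(frac):.2f}x average memory latency")
```

Under these assumed numbers, even 5% remote references roughly doubles average memory latency, which is why a workload with poor locality will not see the scaling the hardware theoretically allows.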
What is the definition of an "application"? We are "applications development and delivery professionals" - surely we have this question nailed, don't we? The question keeps coming up in different contexts, and since there are many potential opinions, a blog is the perfect place to spur debate. Here are some (simplistic) questions to generate debate:
Is a Web page an application?
If not, how many Web pages does it take until I consider it an application - 10, 100, 1,000?
Does size matter? (Please behave yourselves with this one.)
Is the size of the code base a pertinent factor?
What about SharePoint sites, Access databases, and spreadsheets? Are they applications?
Where do COTS and packaged apps fit?
Does the technology I use affect the definition?
If I use a scripting language for a quick-and-dirty task, is that an application?
Does SOA erode the definition of an application?
Do we cease thinking about applications as entities and think about them more as containers that hold collections of SOA services?
How does open source affect the definition?
How does my role affect my perception of an application?
Do developers and users use similar definitions?
I have my opinions - in fact, I just finished a draft piece of research on the topic that will be published in January - but what are your opinions?
A quick note on a big announcement today by IBM that is being rolled out as I write this. No, I don't have a crystal ball - my colleague Brad Day and I spent a day in Poughkeepsie in late June for the full scoop, provided under NDA. The announcement is massive, so I'll just lay out the high points and a few of my thoughts on what it means to apps folks. I'll leave the deeper I&O/technical details to Brad and others in subsequent posts and research. My goal here is to get a conversation going on what it may mean to apps people in your IT shops.
What's in the zEnterprise announcement?
It's a new computing environment that unifies Linux, AIX, and z/OS on a new server complex that includes mainframe servers, x86, and Power7 blades under a single set of management software: the zEnterprise Unified Resource Manager (URM).
A 10 Gb private data network joins the new z server (z196) and the zBX - an ensemble that houses racks of x86 and Power7 blades. It also includes an intra-ensemble network that is physically isolated from all external networks, switches, and routers - permitting removal of blade firewalls.
One client claims a 12-to-1 reduction in network hops by eliminating blade firewalls.
The z196 scales to 96 quad-core 5.2 GHz processor cores, 80 of them available for customer use, and supports up to 112 blades.
What is the impact on applications people and application-platform choice?
zEnterprise is a monster announcement that heralds a long laundry list of improvements - it would be impossible to cover all of the ramifications in a single blog post. Instead, here is a brief glimpse of some of the most notable improvements that affect applications folks (zEnterprise as compared to z10):
Applications development people can't stand the Luddites in the operations group, and ops people hate those prima donnas in apps dev - at least that's what we are led to believe. To explore the issue, two of my colleagues who write to the infrastructure and operations (I&O) role - Glenn O'Donnell and Evelyn (Hubbert) Oehrlich - invited me to participate in an experiment of sorts. They arranged a joint session for the I&O Forrester Leadership Board (FLB) meeting, and I was the sole applications guy in the room - a conduit for I&O FLB members to vent their frustration at their apps dev peers. For those who aren't aware, FLBs are communities of like-minded folks in the same role who meet several times a year to network, share their experiences, guide research, and address the issues that affect their role.
We infused the session with equal parts education, calls for joint strategic planning across all IT work, and a bit of stand-up comedy - Glenn noted that as representatives of our respective roles, he and I were actually twin sons of different mothers. I noted that, in that context, our parents must have been really ugly. Once we opened the session for discussion, the good folks in the room wasted no time in launching verbal stones my way. Now, I'm no IT neophyte: I've been in the industry since 1982, and I'm no stranger to conflict - I grew up with three older brothers, and we all exchanged our fair share of abuse as siblings will. Still, I wasn't quite prepared for the venting that followed. To summarize a few of the main points, I&O sees apps folks as:
So you need to formulate an application modernization decision -- what to do with a given application -- how do you begin that decision-making process? In the past, modernization decisions were often simply declared -- "We are moving to this technology" -- for a number of reasons, such as, it:
Keeps us current on technology.
Provides a more acceptable user-interface or integration capability.
Opens the application to access by external customers.
Increases the volume of business transactions we can process.
Trades custom/bespoke applications for standardized application packages such as ERP, payroll, human resources, etc.
Fast-forward to today -- you could simply go with your gut -- declare a solution based on what you currently know (or think you know) about the application in question. But it's a new day, baby -- a proposal like that, without proper justification, is likely to be met with one of two responses from management:
Recently, I discussed complexity with a banker working on measuring and managing complexity in a North American bank. His approach is very interesting: He found a way to operationalize complexity measurement and thus to provide concrete data to manage it. While I’m not in a position to disclose any more details, we also talked about the nature of complexity. In absence of any other definition of complexity, I offered a draft definition which I have assembled over time based on a number of “official” definitions. Complexity is the condition of: