I am starting to see signs of important changes in technology and IT organizations. The increased complexity of IT and business services is forcing the industry down a new path. In this context, there are signs reminiscent of what happened to the mainframe vendors in the late 80s and early 90s, when the transition from proprietary to open systems rarely went well. In fact, the major players of today (with the exception of IBM) were small potatoes in the 80s, while the major players of that time are either gone or dying. And some vendors today seem to be following the same recipe for eventual disaster.
What happens, when the market changes direction on a company whose revenue is based on old technology, is what I would call a “sales force failure”: the inability of the sales force to move beyond its base of usual customers and compete head to head with new vendors in the new market.
Usually these organizations are technically capable of building up-to-date products, but the sales results often don’t meet expectations. Since the new product created internally does not sell, company management may be tempted to fix the problem (i.e., satisfy the shareholders in the short term) by cutting the cost center, that is, the engineering organization making this new product. With R&D gone, the marketing group may license another product to replace the one that was killed. Of course, the margins are not the same, but the cost is almost nonexistent. Eventually, this product does not sell either (the sales force is still in the same condition), and, when the old legacy products are finally dead, the company is no more than a value-added reseller.
One of the great revolutions in manufacturing of the past decades is just-in-time inventory management. The basic idea is to provision only what is needed for a certain level of operation and to put in place a number of management functions that will trigger the provisioning of inventory. This is one of the key elements that has allowed manufacturers to contain production costs. We have been trying to adapt the concept to IT for years with little success, but a combination of the latest technologies is finally bringing it to a working level. IT operations often faces unpredictable workloads or large variations of workloads during peak periods. Typically, the solution is to over-provision infrastructure capacity and use a number of potential corrective measures: load balancing, traffic shaping, fast reconfiguration and provisioning of servers, etc.
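The just-in-time idea applied to infrastructure can be reduced to a simple trigger: provision servers to track current demand plus a small safety headroom, rather than sizing permanently for the peak. A minimal sketch, with hypothetical names and figures (the `Pool` class, per-server capacity, and the 20% headroom are all assumptions for illustration):

```python
import math
from dataclasses import dataclass

@dataclass
class Pool:
    servers: int                 # servers currently provisioned
    capacity_per_server: float   # requests/sec one server can handle

def provisioning_delta(pool: Pool, demand: float, headroom: float = 0.2) -> int:
    """Servers to add (+) or release (-) so that capacity tracks
    measured demand with a safety headroom, instead of permanently
    over-provisioning for the worst-case peak."""
    target = max(1, math.ceil(demand * (1 + headroom) / pool.capacity_per_server))
    return target - pool.servers

pool = Pool(servers=10, capacity_per_server=100.0)
print(provisioning_delta(pool, demand=500.0))    # off-peak: -4 (release servers)
print(provisioning_delta(pool, demand=1500.0))   # peak: +8 (provision more)
```

The management functions the paragraph mentions (load balancing, fast reconfiguration) are what make acting on this trigger fast enough to be safe.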
While it may have taken humans thousands of years to progress from oral to written to audio and then to video communications, in the past five years the Internet has raced through all of these communication stages at a breakneck pace. It started as a way to post and communicate text and still pictures, then moved to digital voice and music, and then took a giant step to video delivery, bringing you news, sports, and movies whenever and wherever you want to view them. The Internet is now the prime platform for distributing video content, effectively replacing your video store and your cable or broadcast distribution.
Among critical industrial processes, IT is probably the only one where control and management come as an afterthought. Blame it on product vendors or on immature clients, but it seems that IT management always takes a back seat to application functionality.
IT operation is seen as a purely tactical activity, but this should not obscure the need for a management strategy. Acquiring products on a whim and hastily putting together an ad hoc process to use them is a recipe for chaos. When infrastructure management, which is supposed to bring order and control to IT, leads the way to anarchy, a meltdown is a foregone conclusion.
Most infrastructure management products are genuinely useful and innovative. One should, however, be conscious of the vendors’ limitations. Vendors spend a lot of time talking about mythical customer needs, while most of them have no experience of IT operations. Consequently, their horizon is limited to the technology they have, and they cannot see the forest for the trees. Clients should carefully select products for the role they play in the overall infrastructure management strategy, not solely on the basis of immediate relief. As the world of IT operations becomes more complex every day, the value of an IT management product lies not only in its capability to resolve an immediate issue, but also in its ability to participate in future management solutions. The tactical and strategic constraints should not be mutually exclusive.
The choice between different formats of cloud computing (mostly IaaS and SaaS) and their comparison to internal IT business service deployment must be based on objective criteria. But this is mostly uncharted territory in IT. Many organizations have difficulty implementing a realistic chargeback solution, and the real cost of business services is often an elusive target. We all agree that IT needs a better form of financial management, even though 80% of organizations will consider it primarily as a means of understanding where to cut costs rather than as a strategy to drive a better IT organization.
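At its simplest, chargeback is just allocating a shared infrastructure cost to business units in proportion to measured usage. A minimal sketch (the unit names and figures are hypothetical; real chargeback models add fixed costs, tiers, and service-level adjustments):

```python
def chargeback(total_cost: float, usage: dict[str, float]) -> dict[str, float]:
    """Allocate a shared infrastructure cost across business units
    in proportion to each unit's measured resource usage."""
    total_usage = sum(usage.values())
    return {unit: total_cost * u / total_usage for unit, u in usage.items()}

# Hypothetical monthly figures: 90,000 of shared cost, usage in CPU-hours
bills = chargeback(90_000.0, {"sales": 300.0, "hr": 100.0, "web": 500.0})
print(bills)  # sales: 30000.0, hr: 10000.0, web: 50000.0
```

Even this naive model makes the point: without usage measurement in place, the real cost of a business service cannot be attributed at all.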
Financial management will help IT understand its cost structure better in all dimensions, but this is not enough to make an informed choice between internal and external deployment of a business service. I think that choosing a deployment model requires a new methodology that draws its data from financial management. As I often do, I turned to manufacturing to see how it deals with this type of analysis and cost optimization. The starting point is of course an architectural model of the “product”, which shows once again how valuable these models are in IT. The two types of analysis, FAST (Function Analysis System Technique) and QFD (Quality Function Deployment), combine into a “Value Analysis Matrix” that lists the customer requirements against the way these requirements are answered by the “product” (or business service) components. Each of these components has a weight (derived from its correlation with the customer requirements) and a cost associated with it. Analyzing several models (for example a SaaS model against an internal deployment) would not only lead to an informed decision but would also open the door to optimizing the cost of the service.
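The Value Analysis Matrix mechanics above can be sketched numerically: requirement weights are pushed through a requirement-to-component correlation matrix to get component weights, which are then compared against each component’s share of the total cost. All figures below (requirements, correlations, costs) are hypothetical, purely for illustration:

```python
import numpy as np

# Customer requirement weights (hypothetical: availability, performance, security)
req_weights = np.array([0.5, 0.3, 0.2])

# Correlation of each component (columns) with each requirement (rows), 0..1
correlation = np.array([
    [0.9, 0.3, 0.1],   # availability
    [0.4, 0.8, 0.2],   # performance
    [0.2, 0.1, 0.9],   # security
])

# Component weight = requirement weights pushed through the matrix, normalized
component_weight = req_weights @ correlation
component_weight = component_weight / component_weight.sum()

def value_ratio(costs: list[float]) -> np.ndarray:
    """Component weight divided by its share of total cost.
    A ratio below 1 flags a component that costs more than the
    customer value it delivers: a candidate for optimization."""
    c = np.asarray(costs, dtype=float)
    return component_weight / (c / c.sum())

internal = value_ratio([50_000, 30_000, 20_000])  # internal deployment costs
saas = value_ratio([20_000, 40_000, 10_000])      # SaaS deployment costs
```

Comparing the two ratio vectors component by component is what turns the matrix from a selection tool into a cost-optimization tool: it shows not just which model is cheaper overall, but where each model overspends relative to customer value.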
I think that such a methodology would complement a financial management product and help IT become more efficient.