If You Don’t Manage Everything, You Don’t Manage Anything

I’m always surprised to see that the Citroën 2CV (CV: cheval-vapeur, the French tax horsepower rating, hence the name Deux Chevaux) has such a strong following, even in the United States. Granted, this car was the epitome of efficiency: It used minimal gas (60 miles to the gallon), was eminently practical, and its interior could be cleaned with a garden hose. Because the car was minimalist in the extreme, the gas gauge on the early models was a simple dipstick, marked to show how many liters of gas were left in the tank. For someone like me, who constantly forgot to consult the dipstick before leaving home, it meant that I ran out of gas somewhere far from a station almost every month. A great means of transportation failed regularly for lack of instrumentation. (Later models had a gas gauge.)

This shows how failure to monitor a single element can lead to the failure of the complete system — and that if you don’t manage everything, you don’t manage anything, since the next important issue can develop in blissful ignorance.

The point is that we often approach application performance management from the same angle that Citroën used to create the 2CV: monitor only the most critical elements in the name of cost-cutting. This has proved time and again to be fraught with risk. Complex, multitier applications are composed of myriad components, hardware and software, any of which can fail.

In application performance management, I see a number of IT operations teams focus their tools on a few critical elements and ignore the rest. But even though many critical hardware and software components have become extremely reliable, that doesn’t mean they are impervious to failure: There is simply no way to guarantee the life of a specific electronic component.
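The trap described above can be illustrated with a minimal sketch (the component names and check results are purely hypothetical, not taken from any real tool): a dashboard that rolls up only the monitored checks will happily report "healthy" while an unwatched component has already failed.

```python
# Hypothetical sketch: why partial monitoring gives a false "all clear".
# Component names and check results below are illustrative assumptions.

def overall_status(checks: dict) -> str:
    """Roll up individual health checks into one system status."""
    return "healthy" if all(checks.values()) else "degraded"

# Monitoring only the "critical" tiers...
monitored = {"web_tier": True, "app_tier": True, "database": True}

# ...while an unmonitored component (say, a message queue) has failed.
actual = dict(monitored, message_queue=False)

print(overall_status(monitored))  # the dashboard says "healthy"
print(overall_status(actual))     # the system is in fact "degraded"
```

The gap between the two calls is exactly the 2CV dipstick problem: the instrument is fine for what it measures, but what it doesn’t measure is where the outage comes from.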


Is Infrastructure & Operations Vulnerable To Job Market Trends?

A couple of weeks ago, I read that one of the largest US car makers was trying to buy out several thousand machinists and welders. While we have grown accustomed to bad news in this economy, what I found significant was that these were skilled workers. Personally, I find it a lot easier to write code than to weld two pieces of steel together, and I have tried both.

For the past 20 years, the job market in industrialized countries has shown a demand increase at the high and low ends of the wage and skill scale, to the detriment of the middle. Although it’s something that we may have intuitively perceived in our day-to-day lives, a 2010 paper by David Autor of MIT confirms the trend:

“. . . the structure of job opportunities in the United States has sharply polarized over the past two decades, with expanding job opportunities in both high-skill, high-wage occupations and low-skill, low-wage occupations, coupled with contracting opportunities in middle-wage, middle-skill white-collar and blue-collar jobs.”

One of the reasons for this bipolarization of the job market is that most of the tasks in the middle market are based on well-known and well-documented procedures that can be easily automated by software (or simply offshored). This leaves, at the high end, jobs that require analytical and decision-making skills usually based on a solid education, and at the low end, “situational adaptability, visual and language recognition, and in-person interactions. . . . and little in the way of formal education.”

Can this happen to IT? As we fast-forward to an industrial IT, we tend to replicate what other industries did before us: remove the person in the middle through automation and thus polarize the skill and wage opportunities at both ends of the scale.


BSM Rediscovered

I have in the past lamented the evolution of BSM into more of an ITIL support solution than the pure IT management project that we embarked on seven years ago. In the early years of BSM, we were all convinced of the importance of application dependency discovery: It was the bridge between the user, who sees an application, and IT, which sees infrastructure. We were all convinced that discovery tools should be embedded in infrastructure management solutions to improve them. I remember conversations with product managers at all of the big four, and we all agreed at the time that the “repository” of dependencies, later to become the CMDB, was not a standalone solution. How little we knew!

What actually happened was that the discovery tools showed a lot of limitations, and the imperfect CMDB that resulted became the center of the ITIL v2 universe. The two essential components that we saw in BSM for improving the breed of system management tools were quietly forgotten: 1) real-time dependency discovery, because last month’s application dependencies are as good as yesterday’s newspaper when it comes to root cause analysis or change detection, and 2) the reworking of tools around these dependencies, because it added a level of visibility and intelligence that was sorely lacking in the then-current batch of monitoring and management solutions. But there is hope on the IT operations horizon.
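As a rough illustration of why up-to-date dependencies matter for impact analysis, here is a minimal sketch assuming a toy “component → dependents” map (all names are invented): a stale or missing edge in such a graph means a failure’s downstream impact is simply invisible to the tool walking it.

```python
# Hypothetical sketch of dependency-based impact analysis. The graph below
# is an illustrative assumption, not a real discovered topology.

from collections import deque

# Edges point from a component to the things that depend on it.
dependents = {
    "storage_array": ["database"],
    "database": ["order_service"],
    "order_service": ["web_storefront"],
}

def impacted(failed: str) -> set:
    """Return every component reachable downstream of a failed one."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impacted("storage_array")))
# With this toy graph: ['database', 'order_service', 'web_storefront']
```

If discovery ran a month ago and `order_service` has since moved to a different database, the walk above confidently reports the wrong business impact — which is the whole argument for real-time discovery.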

These past few days, I have been briefed by two new companies that are actually going back to the roots of BSM.

Neebula has introduced a real-time discovery solution that continuously updates itself and is embedded into an intelligent event and impact analysis monitor. It also discovers applications in the cloud.
