Posted by Jean-Pierre Garbani on December 21, 2010
We’re starting to get inquiries about complexity. The key questions are how to evaluate complexity in an IT organization and, consequently, how to evaluate its impact on the availability and performance of applications. Evaluating complexity would not be like evaluating the maturity of IT processes, which amounts to fixing what’s broken; it would be more like preventive maintenance: understanding what is going to break soon and taking action to prevent the failure.
The volume of applications and services certainly has something to do with complexity. Watts Humphrey observed that code size (in KLOC: thousands of lines of code) doubles every two years, likely driven by increases in hardware capacity and speed, and this is easily validated by the evolution of operating systems over the past years. It stands to reason that, as a consequence, the total number of errors in the code also doubles every two years.
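Humphrey’s observation implies simple exponential growth. As a minimal sketch (the two-year doubling period comes from the text; the starting size of 500 KLOC is an arbitrary assumption for illustration):

```python
def projected_kloc(initial_kloc: float, years: float,
                   doubling_period: float = 2.0) -> float:
    """Project code size assuming it doubles every `doubling_period` years."""
    return initial_kloc * 2 ** (years / doubling_period)

# A hypothetical 500 KLOC code base, four years out:
print(projected_kloc(500, 4))  # 2000.0
```

If the error count grows in proportion to code size, the same function projects the error budget, which is the worrying part of the doubling rule.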
But code is not the only cause of errors: change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes. Capacity issues would also depend on these parameters.
There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.
Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?
One approach that I’m contemplating right now is to measure complexity not directly but through its consequences, like evaluating your foot pressure on the accelerator by measuring the speed. For example, we would use metrics like support budget spending; the ratio of support people to servers and applications; code size; the frequency of change requests; the time to resolve a category 1 issue; the deployment rate of new services; or the time spent on unplanned tasks in I&O. The list needs to be drawn up and checked, but it seems that all of these capture the relative complexity of a given IT environment by measuring its effect. Since complexity is a relative notion, this would not be a measure of “absolute complexity” but a measure that is significant for a specific organization.
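One simple way to turn such consequence metrics into a single relative number is to normalize each metric against a baseline period and average the ratios. This is only a sketch of the idea; the metric names and values below are hypothetical, and the unweighted average is an assumption (a real model would weight metrics by how strongly each tracks complexity):

```python
def complexity_index(current: dict[str, float],
                     baseline: dict[str, float]) -> float:
    """Relative complexity index: the average ratio of each consequence
    metric to its baseline value. 1.0 means no change from the baseline;
    values above 1.0 suggest the environment behaves as if it has grown
    more complex."""
    ratios = [current[m] / baseline[m] for m in baseline]
    return sum(ratios) / len(ratios)

# Hypothetical consequence metrics for one organization:
baseline = {"support_spend_k": 800, "change_requests_per_month": 120,
            "hours_to_resolve_cat1": 4.0, "unplanned_task_hours": 300}
current = {"support_spend_k": 1000, "change_requests_per_month": 150,
           "hours_to_resolve_cat1": 5.0, "unplanned_task_hours": 375}

print(complexity_index(current, baseline))  # 1.25
```

Because everything is expressed relative to the organization’s own baseline, the index is exactly the kind of relative, organization-specific measure described above rather than a claim about absolute complexity.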
Your input and comments on this will be greatly appreciated.