To keep track of what’s happening in the tech market, I collect quarterly data on the revenues of more than 70 large IT vendors. Accordingly, I spend an unhealthy amount of time poring over their quarterly earnings releases, analyst presentations, and 10-Q and 10-K reports — making me something of a connoisseur of vendor earnings releases, at least from the perspective of revenues and their breakdown by product and geography.
From that perspective, Microsoft wins the prize for the most opaque earnings release. First, 2003 was the last year it reported its revenues by geography and its revenues from sales to original equipment manufacturers; since then, there has been no data or even guidance on its geographic revenues. Second, it does not break out sales to consumers from sales to business and government, although it does report growth rates for sales of Office and its other information worker products to consumers versus enterprises. Third, about once a year it reshuffles its product-line revenues, shifting product revenues into or out of different product lines. While it generally restates revenues for the prior eight quarters to bring them into line with its new business unit categories, it provides no guidance or data for earlier years, making comparisons with past years very challenging.
I considered ranking other vendors on the transparency of their earnings releases. But I decided it would be more useful to describe the kind of data that I as a technology analyst — and other vendor strategists analyzing the tech market — would like to get from vendor earnings releases.
In the early part of next quarter, I am entering a research phase on a topic I have alluded to many times: techniques for Process Architecture.
One of the key problems that BPM initiatives suffer from is that, even with all the attention, we end up with processes that still have significant issues — they are too inflexible and difficult to change. They become just another version of concrete poured in and around how people work — focusing on control rather than enabling and empowering.
A phrase that I picked up (from a business architect) put it fairly succinctly:
“People tend to work hard to improve what they have, rather than what they need.”
This was then further reinforced by a government-sector process architect in an email:
“The wall I keep hitting is how to think about breaking processes into bite-size chunks that can be automated.”
The problem is that we don’t have good techniques to design (derive) the right operational process architecture from the desired business vision (business capability). Of course, there is an assumption here that there is an effective business vision, but that’s a subject for another line of research.
I am talking about the operational chunks — the pieces of the jigsaw puzzle required to deliver a given outcome. Not how the puzzle pieces are modeled (BPMN, EPC, IDEF, or any other modeling technique), but how to chop up the scope of a business capability to end up with the right operational parts.
I’ve recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all of whom develop their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.
The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that had previously maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”
Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL 980 confirm that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.
File system options continue to expand as well. The established Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.
MyCustomer.com recently asked me what my thoughts were about CRM — why initial CRM projects failed, what has changed to make deployments successful, and what the future holds for CRM. Here is the third and last part of my answers, as well as a link to the published article.
Question: It has long been suggested that ‘CRM’ is becoming increasingly opaque, with some ‘CRM vendors’ sharing few common features. Lithium, for instance, is categorized by Gartner as a ‘Social CRM’ player yet has no sales or marketing functionality at all. Has CRM become too much of a ‘catch-all’ category in your opinion, and what are the dangers of this?
Answer: I think back to the situation that happened a decade ago when the new “e” (electronic) channels became available as customer service channels. There was now customer service, and eService. Fast-forward 10 years. Electronic channels are now just another way of servicing our customers. What matters more is for a company to provide a consistency of experience across the communication channels in order to reinforce and preserve the brand.
I see this happening with social CRM. Social is just another way of selling to, marketing to, and servicing your customers. The vendors in the CRM landscape will change, with a tremendous amount of consolidation among them. The communication channels will change, but the fundamental value proposition of a CRM system will remain intact.
Question: How do you envisage CRM will continue to evolve as a technology and category?
As 2010 winds down, many business process professionals are finalizing plans to take their BPM initiatives to the next level in 2011. With so many different BPM trends and predictions floating around out there, I’m sure you’re scratching your head wondering which trends to adopt in 2011 and which trends to push off for another year.
My colleague Gene Leganza recently published an excellent report titled "The Top 15 Technology Trends EAs Should Watch". I was pleased to see several BPM-specific trends show up in the report’s “Top 15” list. For the second year in a row, the report highlighted social BPM as one of the top trends to watch. In addition, process data management — the combination of MDM and BPM — was highlighted as another top BPM-related trend.
I recommend reading the entire report, since Gene does an excellent job slicing the survey data to show how we selected and ranked the top 15 trends.
So, as you're finalizing your 2011 BPM plans, here are the hottest trends and capabilities I recommend adding to your road map:
Organizations that use BI show increased levels of maturity (+5.7% from 2009). However, most respondents still rate themselves below average on Forrester's BI maturity scale: 2.75 (on a scale of 1 to 5) for overall maturity, with the following details:
. . . most aspects of BI such as processes, architectures, and measurements of BI efficiency and effectiveness lag behind.
And (drum roll, please) the most interesting, and, I am sure, controversial finding is that Forrester’s predicted trend that Agile BI and BI self-service will trump centralization and consolidation has been confirmed. Here's the proof:
In 2010, 59% of the respondents said that they do not have a centralized BI competency solutions center, versus . . .
Over the past few months I have had the opportunity to spend some quality time with a number of IT vendors, including HCL, Fujitsu, Oracle, and Dell. This has been some time coming, but over the next few weeks I am taking the opportunity to summarize the overall perceptions I have formed of these vendors when evaluating them from a CIO perspective - i.e., as a potential partner for your IT organization and your business. Today I'll tackle HCL, and I'll move on to the other vendors throughout January. The goal of these blog posts is to give an overall perception of each vendor - something we don't capture particularly well in a Wave or vendor analysis, where we focus on one particular capability of a large vendor. I am trying to capture the "culture" or "style" of the vendor, as this is hard to include in a Forrester Wave, but it IS something that makes a significant difference to the partnership over the longer term.
HCL. A company that is comfortable in its own skin.
That is how I would summarize HCL. It is a company that knows where it has come from and where it is now, and it has a pretty good idea that in five years' time it will be nothing like it was or is. It doesn't know what that future looks like, but it knows it has to put the capabilities in place to ensure the organization can effectively morph into that future form in order to achieve longer-term success. Employees First, Customers Second is the first step on this pathway, but it is only that. It will not shape the company that HCL is tomorrow, but it will probably provide the groundwork and internal culture to allow a smoother change.
We’re starting to get inquiries about complexity. Key questions are how to evaluate complexity in an IT organization and consequently how to evaluate its impact on availability and performance of applications. Evaluating complexity wouldn’t be like evaluating the maturity of IT processes, which is like fixing what’s broken, but more like preventive maintenance: understanding what’s going to break soon and taking action to prevent the failure.
The volume of applications and services certainly has something to do with complexity. Watts Humphrey observed that code size (in KLOC: thousands of lines of code) doubles every two years, largely due to increases in hardware capacity and speed, a pattern easily validated by the evolution of operating systems over the years. It stands to reason that, as a consequence, the total number of errors in the code also doubles every two years.
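The doubling claim compounds quickly, and a quick calculation makes the point concrete. The starting size and time horizons below are hypothetical, chosen only to illustrate the growth curve, not taken from Humphrey's data:

```python
# Illustrative projection of the observation that code size doubles roughly
# every two years. If defects per KLOC stay constant, total defects scale
# in lockstep with code size. All inputs here are hypothetical.

def project_kloc(start_kloc: float, years: int, doubling_period: float = 2.0) -> float:
    """Project code size assuming it doubles every `doubling_period` years."""
    return start_kloc * 2 ** (years / doubling_period)

start = 1000.0  # a hypothetical 1,000 KLOC portfolio today
for years in (2, 4, 10):
    size = project_kloc(start, years)
    print(f"after {years:2d} years: {size:,.0f} KLOC ({size / start:.0f}x the errors)")
```

Ten years of doubling every two years means a 32-fold increase, which is why even modest per-line error rates become an availability problem at scale.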
But code is not the only cause of error: Change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes. Capacity issues would also be dependent on these parameters.
There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.
Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?
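To make the "complexity index" question less abstract, here is a purely hypothetical sketch of what one might look like. The inputs (code volume, infrastructure diversity, change volume) come from the discussion above, but the weights, scales, and saturation points are invented for illustration; they are exactly the kind of subjective choices the question is asking about:

```python
import math

# A purely hypothetical "complexity index": the weights and thresholds below
# are arbitrary illustrations, not a validated model.

def complexity_index(kloc: float, component_types: int,
                     changes_per_month: int) -> float:
    """Combine code volume, infrastructure diversity, and change volume
    into a single 0-100 score using arbitrary weights."""
    # Log scale keeps code volume from dominating; ~1.0 at 100,000 KLOC.
    code = math.log10(max(kloc, 1)) / 5
    diversity = min(component_types / 50, 1.0)   # saturates at 50 types
    churn = min(changes_per_month / 500, 1.0)    # saturates at 500 changes
    return round(100 * (0.4 * code + 0.3 * diversity + 0.3 * churn), 1)

print(complexity_index(kloc=2000, component_types=25, changes_per_month=120))
```

Even this toy version exposes the subjectivity problem: change any weight or saturation point and the ranking of two IT organizations can flip, which is the strongest argument for an evaluation model that makes those judgments explicit.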
My analyst duties took me to a number of industry and tech-vendor events this fall; in fact, looking back at my calendar, I have been out of my home area in Boston for nine of the last 12 weeks. The upside of all that time in airplane seats is that I get to meet and interact with leaders across the technology industry, including supplier companies, large and small, and their customers and partners.
In the first 10 days of December I spent time with five important technology suppliers, each of which has very different views on the opportunity in the broad arena of IT-for-sustainability (i.e., how information technology products and services help corporations achieve their sustainability goals).
He highlights text analytics technology in the report because understanding unstructured data plays a critical part in daily operations. Enterprises have too much content to review and annotate manually. Text analytics products from vendors like Temis and SAS mine, interpret, and add structure to information to reveal hidden patterns and relationships. In my 2009 overview of text analytics, I cite the primary use cases for these tools: voice of the customer, competitive intelligence, operations improvements, and compliance and law enforcement.
But there are a few other sweet spots for text analytics tools in the enterprise:
Analytics and search: Analytics tools surface and visualize patterns; search tools return discrete results to match an expressed need. But these disciplines are blending. People want to drill into high-level analysis to find the specific thing customers are buzzing about. And many searchers don’t know how to articulate their need as a query and are looking for the big picture on a topic or trend. Forrester expects these solutions to come together, as search tools mainstream semantic features like entity extraction out of the box, and analytics vendors introduce new ways to investigate relationships and data output.
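The entity extraction mentioned above can be sketched in miniature. This toy Python example uses crude pattern matching rather than the statistical and linguistic models real products like Temis or SAS employ; it only illustrates the core idea of adding structure to unstructured text:

```python
import re

# Toy entity extractor: pulls email addresses and capitalized multi-word
# phrases (rough stand-ins for names) out of free text. Real text analytics
# engines use linguistic models; this regex sketch just shows the concept.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PROPER_NOUN = re.compile(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b")

def extract_entities(text: str) -> dict:
    """Return structured entities found in unstructured text."""
    return {
        "emails": EMAIL.findall(text),
        "names": PROPER_NOUN.findall(text),  # also catches sentence-initial words
    }

doc = "Contact Jane Smith at jane.smith@example.com about the Acme Widgets account."
print(extract_entities(doc))
```

Once entities like these are extracted, they become facets for drill-down in a search interface or nodes in a relationship graph, which is exactly where the analytics and search disciplines meet.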