A “bonus” blog today (and hence one that was quickly constructed); the subject area will need to be revisited at a later date. The reason for the bonus blog is that I am a little excited.
SysAid, a provider of IT help desk and customer service software solutions, has provided me with a subset of the service desk benchmarking information captured through its customers’ use of its software (on an opt-in basis, of course).
To me, this is the sort of stuff that the ITSM community (see my previous blog) is crying out for – information that helps organizations understand where they are and what they should aspire to. More information about the SysAid benchmarking is available at http://www.ilient.com/it-performance-benchmark.htm (the link provides more detail on the definitions of the benchmarks below).
Average Service Requests (SR) closed per Admin (Service Desk Agent)
Important note: If you follow the above link, the assumptions show that the SRs are “incidents.”
Quick comment – I am assuming that this figure is per day, but I am seeking clarification. As with all the slides in this blog, please treat it with care in the absence of sample sizes.
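To make the metric concrete, here is a minimal sketch of how a figure like this might be computed, assuming it is normalized per working day (the clarification I am seeking above). The function name and the sample numbers are mine for illustration, not SysAid’s.

```python
# Minimal sketch of the "Average SRs closed per Admin" benchmark.
# ASSUMPTION: the published figure is normalized per working day, which is
# exactly the clarification I am still seeking.
def avg_srs_closed_per_admin(total_srs_closed: int,
                             num_admins: int,
                             working_days: int = 1) -> float:
    """Return average service requests (incidents) closed per admin per day."""
    return total_srs_closed / (num_admins * working_days)

# Hypothetical numbers: 600 incidents closed by 4 agents over 20 working days
print(avg_srs_closed_per_admin(600, 4, 20))  # -> 7.5 SRs per admin per day
```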
I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems offer the performance of the best-of-breed RISC/UNIX systems of three years ago at a substantially better price, and their overall performance improvement trajectory has been steeper than that of competing technologies for the past decade.
This is probably not shocking news and is not the subject of this post, although I would encourage you to read the document when it is finally published. During the course of the research, I spent time trying to prove or disprove, with available benchmark results, my thesis that x86 system performance solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each benchmark on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They report results from high-end configurations, in many cases far beyond any realistic use case, and those results cannot be interpolated to smaller, more representative configurations.
They are often the product of teams of very smart experts tuning system configurations and application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that more than 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.
Yesterday I attended the first day of SuccessFactors’ California customer conference at the Palace Hotel in San Francisco. Efficiency, speed, and good orchestration were evident throughout the day. The CEO, Lars Dalgaard, is a high-energy person who exudes confidence in the growth of his company. He is a real showman, and rather than giving a high-level company overview, his 90-minute presentation focused on product demos with touchscreen projections that worked fairly well. He clearly knows the products, has market momentum, and is driving the company forward. Lars would say, “We are about ‘Execution!’” The SuccessFactors slogan is “Success = Strategy + Execution.”

The touted “new” offerings include recruiting (it’s been out for two years); a core HR data management app called Employee Central; calibration; goal execution; and the brand-new offerings gained through acquisitions: Inform for workforce planning and analytics, and CubeTree for social collaboration. Acquisitions are new for SuccessFactors, so it hasn’t had experience bringing together different company cultures and technologies, but my bet is that it will be successful.