As a lifelong Yankees fan (which makes me a pariah with many of my Red Sox Country-based Forrester coworkers in Cambridge, Mass.), I’ve been following with amusement the sports media frenzy around the New York Yankees' "not-so-public yet not-so-private" contract negotiations with their star shortstop, Derek Jeter. While I read these news snippets with the intent of escaping the exciting world of data management for just a brief moment, I couldn’t escape for long because both sides of the table bring up reams of data to defend their positions.
According to media reports and analysis, the Yankees' ownership is seemingly paying less attention to Jeter’s Hall of Fame-worthy career statistics, including a fantastic 2009 season, and his intrinsic value to the Yankees brand, and is instead focusing on Jeter’s arguably career-low 2010 on-field performance and advancing age (36 years old is practically Medicare-eligible in baseball terms).
Jeter’s side of the negotiations, on the other hand, points out that Jeter’s value to the Yankees is “immeasurable,” and that one off year shouldn’t be used to define his value to the team. As team captain, they argue, Jeter is a major leader in the clubhouse and an excellent role model for younger players. He’s certainly among the most popular players the Yankees employ, drawing boatloads of fans to attend games and watch the Yankees cable network, and generating significant licensing revenue. And of course they contend that Jeter is still an excellent player and that 2010 should be viewed as an anomaly, not the norm.
I’m not a baseball analyst (lucky for everyone), and I have no intention of joining the debate on whose point of view is correct or how much Jeter should earn, for how many years, etc. (That’s best discussed over a few beers, not on a blog, right?)
I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news, and it is not the subject of this post, although I would encourage you to read the document when it is finally published. While researching it, I spent time trying to prove or disprove my thesis that x86 system performance solidly overlapped that of RISC/UNIX, using available benchmark results. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each benchmark on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They are results from high-end configurations, in many cases far beyond any realistic use case, and the results cannot be interpolated to smaller, more typical configurations.
They are often the result of teams of very smart experts tuning the system configuration, application, and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that more than 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.