Yesterday, SAP announced its intention to acquire business-to-business (B2B) integration provider Crossgate (http://www.sap.com/index.epx#/news-reader/?articleID=17515). This was no great surprise, as SAP was already a part-owner and worked closely with the company on product development, marketing, and sales. Once it has integrated Crossgate into its business, SAP will be able to offer a much stronger ePurchasing solution, because supplier connectivity is currently a significant weakness. As I’ve written before (So Where Were The Best Run Businesses Then?), many SRM implementations rely on suppliers manually downloading purchase orders (POs) from supplier portals, or manually extracting them from emails, and rekeying the data into their own systems. Not only does this cost suppliers a lot of money, it creates delays and errors that discourage users from adopting SRM.
SAP doesn’t intend to use Crossgate only for transactional processes; it also wants to develop support for wider collaboration between its customers and their supply chain partners, both upstream and downstream. That’s a sound objective, but not an easy one for SAP to achieve: its core competence is in rigidly structured internal processes, and to date it hasn’t done a good job with unstructured processes, nor with ones that extend beyond the enterprise’s four walls. Buyers who think they can force suppliers to comply with their edicts, the way they can with employees, soon end up wondering why no one is using their ePurchasing solution.
What does the acquisition mean for sourcing professionals who are wondering where Crossgate or its competitors fit into their application strategy? My take:
I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news, and it is not the subject of this post, although I would encourage you to read the document when it is finally published. While researching it, I spent time trying to use available benchmark results to prove or disprove my thesis that x86 system performance solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one only on selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They report results from high-end configurations, in many cases far beyond any normal use case, and those results cannot be interpolated down to smaller, more realistic configurations.
They are often the product of teams of very smart experts tuning system configurations and application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that more than 1,000 variables are involved in the tuning effort. This makes the results much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.