Ever since NIST published its first definition of cloud computing in 2009, there has been a promise of community clouds, and now we finally have a second one, in the financial services market, thanks to NYSE Technologies. The IT arm of NYSE Euronext announced the beta of its cloud computing offering, the Capital Markets Community Platform, this week, and the effort, on the surface, is a good example for other vertical markets to follow.
For years, financial services firms such as investment banks and hedge funds have competed on trade execution speed and volume -- where milliseconds per trade can translate into billions of dollars in competitive advantage. In doing so, they have found that you can't beat the speed of light: if you want very, very fast connections to the stock market, you need to be as close as possible to the servers the market runs on. Until now, the way to do this was to find out where an exchange's data center was located and place your servers as close as possible, ideally on the same network backbone. If the exchange was in a colocation facility, you wanted the cage right next door. This method gave larger investment banks a distinct advantage, since you had to be able to afford a full cage and have priority access.
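The physics behind the colocation race is easy to sketch. A minimal back-of-the-envelope calculation (the fiber figures are textbook approximations, not numbers from this article) shows why every kilometer of distance to the exchange matters:

```python
# Back-of-the-envelope fiber latency math (illustrative constants,
# not figures from the article).
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3         # light in fiber travels at roughly 2/3 c

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time over fiber, ignoring switching delays."""
    speed_km_per_ms = C_VACUUM_KM_S * FIBER_FACTOR / 1000
    return 2 * distance_km / speed_km_per_ms

for km in (1, 50, 500):
    print(f"{km:>4} km from the exchange: >= {round_trip_ms(km):.3f} ms round trip")
```

At 50 km out you have already given up roughly half a millisecond per round trip before any switch or server ever touches the packet, which is exactly the edge the cage-next-door strategy is meant to eliminate.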
Just kidding, Cisco’s SMARTnet isn’t dead, but I&O managers have a new warranty for networking hardware: free hardware replacement, bug fixes, and tech support. Basically, enterprises can expect to get a basic break-and-fix solution free from most vendors on edge and distribution switches or switch/routers. Hallelujah!
Everyone owes a big thank-you to HP. Over the past 10 years, while holding less than 5% of the market, HP's ProCurve line forced its competitors' hands, reset the industry's warranty choices, and revolutionized what customers should expect from their networking vendors. Leveraging the lifetime warranty to separate itself from the other seven dwarfs and Gigantor -- while trying to offset "you get what you pay for" -- HP went to market offering next-business-day hardware replacement, phone and email support, and software bug fixes and updates. HP wanted customers to understand that only companies that delivered quality products could sustain this type of service model. After the 3Com/H3C acquisition, HP extended the warranty to some of those products too.
Within the past two years, most vendors have followed suit and offered their version of a lifetime warranty:
Some of you may have seen the article in the New York Times by John Markoff (endnote1) announcing a paper presented at last week's IEEE conference. This paper is an update to research conducted by a team at the International Computer Science Institute in Berkeley, California; the institute is associated with the University of California, San Diego and the University of California, Berkeley. A paper the team published in 2008, "Spamalytics: An Empirical Analysis of Spam Marketing Conversion," outlines interesting research in an area the team has coined "spamalytics."
The paper describes a methodology for understanding the architecture of a spam campaign and how a spam message converts into a financial transaction. The team looks at the "conversion rate" -- the probability that an unsolicited email will create a sale. The team uses a parasitic infiltration of an existing botnet infrastructure to analyze two spam campaigns: one designed to propagate a malware Trojan, the other marketing online pharmaceuticals. The team looked at nearly a half billion spam emails to identify:
the number of spam emails successfully delivered
the number of spam emails successfully delivered through popular anti-spam filters
the number of spam emails that elicit user visits to the advertised sites
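The three counts above form a funnel, and the conversion rate falls out of dividing adjacent stages. A small sketch of that arithmetic, using invented counts purely for illustration (the paper reports the team's actual measurements):

```python
# Hypothetical spam-campaign funnel (invented numbers for illustration;
# the Spamalytics paper reports the real ones).
funnel = {
    "sent": 350_000_000,
    "delivered_past_filters": 82_000_000,
    "site_visits": 10_500,
    "sales": 28,
}

def conversion_rates(funnel: dict) -> dict:
    """Rate of each stage relative to the previous one, plus end-to-end."""
    stages = list(funnel.items())
    rates = {}
    for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
        rates[f"{prev_name} -> {name}"] = count / prev_count
    # Overall probability that one sent email becomes a sale.
    rates["end_to_end"] = stages[-1][1] / stages[0][1]
    return rates

for stage, rate in conversion_rates(funnel).items():
    print(f"{stage}: {rate:.2e}")
```

Even with made-up numbers, the structure makes the paper's point: the end-to-end conversion rate is minuscule, so a campaign is only economical because sending is nearly free at botnet scale.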
Measuring the success of your customer service with a single metric is impossible. It's like flying a plane by looking only at your speed, without taking altitude into account. You need to measure a set of competing metrics that make up a Balanced Scorecard, including the cost of doing business and customer satisfaction. Service operations that have sales responsibilities should also track revenue generated. And in industries with strict policy requirements, like healthcare, insurance, and financial services, compliance with regulations is yet another set of metrics to track.
Choosing the right set of metrics to measure also depends on the stakeholders that use this information. For example:
Service managers need operational data that tracks activities, while executives want strategic KPIs that track outcomes of customer service programs.
Service managers need granular, real-time data on their operations, while executives need to see only a small number of KPIs on a periodic basis.
I always think of it as a two-step process to pinpoint the right metrics for all your stakeholders:
Understand the strategic objectives of your company; choose the high-level KPIs for your contact center that support your company’s objectives. These are the metrics you will report to your executives.
Choose the right operational activity metrics for your contact center that map to these KPIs and that the customer service manager uses on a daily basis to manage operations. Here's an example of this mapping:
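One hypothetical mapping of executive KPIs to daily operational metrics (the metric names below are illustrative placeholders, not a prescribed list -- choose ones that fit your own company's objectives):

```python
# Hypothetical KPI-to-operational-metric mapping (illustrative names only).
kpi_to_operational = {
    "customer satisfaction (CSAT)": [
        "first-contact resolution rate",
        "average speed of answer",
    ],
    "cost of doing business": [
        "average handle time",
        "cost per contact",
        "agent occupancy",
    ],
    "revenue generated": [
        "cross-sell/upsell conversion rate",
        "revenue per contact",
    ],
}

for kpi, metrics in kpi_to_operational.items():
    print(f"Executive KPI: {kpi}")
    for metric in metrics:
        print(f"  daily operational metric: {metric}")
```

The shape is what matters: a handful of outcome KPIs reported upward on a periodic basis, each backed by the granular activity metrics the service manager watches every day.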
In 2006, Forrester found that organizational structure, internal enterprise goal systems, and most urgent business requirements were key obstacles on many firms’ journey toward broad multichannel solutions with rich cross-channel capabilities. At that time, a few advanced firms tried to establish a multichannel organization, an organizational layer to coordinate multichannel requirements and solutions between the different business groups and the IT organization. Has this changed over the past five years?
Recent Forrester inquiries from enterprise infrastructure and operations (I&O) professionals show that there's still significant confusion between infrastructure-as-a-service (IaaS) private clouds and server virtualization environments. As a result, there are a lot of misperceptions about what it takes to get your private cloud investments right and drive adoption by your developers. The answers may surprise you; they may even be the opposite of what you're thinking.
From speaking with Forrester clients who have deployed successful private clouds, we've found that your cloud should be smaller than you think, priced cheaper than the ROI math would justify, and actively marketed internally -- no, private clouds are not a Field of Dreams. Our latest report, "Q&A: How to Get Private Cloud Right," details this unconventional thinking, and you may find that internal clouds are much easier than you think.
First and foremost, if you think the way you operate your server virtualization environment today is good enough to call a cloud, you are probably lying to yourself. Per the Forrester definition of cloud computing, your internal cloud must be:
Highly standardized -- meaning that the key operational procedures of your internal IaaS environment (provisioning, placement, patching, migration, parking, and destroying) should all be documented and conducted the same way every time.
Highly automated -- to make sure the standardized procedures above are done the same way every time, you need to take these tasks out of error-prone human hands and turn them over to automation software.
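The two requirements above can be sketched in a few lines of code: each lifecycle procedure becomes a single documented function that software, not a person, executes the same way every time. Everything here (template names, the VM record shape) is invented for illustration, not taken from any particular cloud product:

```python
# Minimal sketch of "standardized + automated" IaaS lifecycle procedures.
# All names are illustrative, not from a specific cloud product.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("iaas")

def provision(vm_name: str, template: str = "standard-linux-v3") -> dict:
    """Provision a VM from an approved template -- same way, every time."""
    vm = {"name": vm_name, "template": template, "state": "running"}
    log.info("provisioned %s from %s", vm_name, template)
    return vm

def park(vm: dict) -> dict:
    """Park (suspend) an idle VM to reclaim capacity."""
    vm["state"] = "parked"
    log.info("parked %s", vm["name"])
    return vm

def destroy(vm: dict) -> dict:
    """Destroy a VM at end of life."""
    vm["state"] = "destroyed"
    log.info("destroyed %s", vm["name"])
    return vm

# The whole lifecycle runs with no ad hoc human choices in the middle.
destroy(park(provision("app-web-01")))
```

The point of the sketch is the constraint, not the code: once every procedure is a function with a fixed template and a logged outcome, there is no room for the per-request improvisation that separates a virtualization environment from a cloud.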
Over the years we’ve learned how to address the key business intelligence (BI) challenges of the past 20 years, such as stability, robustness, and rich functionality. Agility and flexibility challenges now represent BI’s next big opportunity. BI pros now realize that earlier-generation BI technologies and architecture, while still useful for more stable BI applications, fall short in the ever-faster race of changing business requirements. Forrester recommends embracing Agile BI methodology, best practices, and technologies (which we’ve covered in previous research) to tackle agility and flexibility opportunities. Alternative database management system (DBMS) engines architected specifically for Agile BI will emerge as one of the compelling Agile BI technologies BI pros should closely evaluate and consider for specific use cases.
Why? Because fitting BI into a row-oriented RDBMS is often like putting a square peg into a round hole. To tune such an RDBMS for BI usage, specifically data warehousing, BI pros usually:
Denormalize data models to optimize reporting and analysis.
Build indexes to optimize queries.
Build aggregate tables to optimize summary queries.
Build OLAP cubes to further optimize analytic queries.
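Two of these tunings -- the index and the aggregate table -- are easy to show concretely. A minimal sketch using SQLite, with an invented schema and data purely for illustration:

```python
# Sketch of two classic row-store-for-BI tunings: an index for selective
# queries and a pre-built aggregate table for summary queries.
# Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('east', 'widget', 100.0),
        ('east', 'gadget',  40.0),
        ('west', 'widget',  75.0);

    -- Index to optimize queries that filter by region.
    CREATE INDEX idx_sales_region ON sales (region);

    -- Aggregate table so summary queries skip the detail rows entirely.
    CREATE TABLE sales_by_region AS
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region;
""")

for region, total in con.execute(
        "SELECT region, total FROM sales_by_region ORDER BY region"):
    print(region, total)   # east 140.0 / west 75.0
```

This is also the argument for the alternative DBMS engines: every index, aggregate table, and cube is a structure someone must design, build, and rebuild when requirements change, which is exactly the maintenance drag that works against agility.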
What is one of the most important decisions infrastructure & operations (I&O) professionals face today? It's not whether to leverage the cloud or build a private cloud, or even which cloud to use. The more important decision is which applications to place in the cloud, and sadly this decision isn't often made objectively. Application development & delivery professionals often decide on their own, bypassing IT. And when the decision is made in the open, with all parts of IT and the business invited to collaborate, emotion and bravado often rule the day. "SAP's a total pain and a bloated beast, let's move that to the cloud," one CIO said to his staff recently. His belief was that if his organization could run SAP in the cloud, it would prove that it could move anything to the cloud. Sadly, while a big bang certainly would garner a lot of attention, the likelihood that this transition would succeed is extremely low, and a big bang effort that becomes a big disaster could sour your organization on the cloud and destroy IT's credibility. Instead, organizations should start with low-risk applications that let you learn safely how to best leverage the cloud -- whether public or private.
In my last blog I asked the question, "What's it take to be a smart city?" One of the critical elements lies in smart governance. Smart governance takes leadership, coordination, and collaboration. (Take a look at my recent report, "Smart City Leaders Need Better Governance Tools.") Part of this leadership is finding innovative and cost-effective solutions to intractable problems -- and that often lies in engaging constituents for input on the problems and feedback on the solutions. As Charles and I were working on another project, we came across a great example of a US state looking outside the box to solve a real and frustrating problem faced by its citizens.
When Cisco began shipping UCS slightly over two years ago, competitor reaction ran the gamut from concerned to gleefully dismissive of Cisco's chances of success in the server market. The reasons given for this guaranteed lack of success ranged from the technical (the product won't really work) to the economic (Cisco can't live on server margins) to the cultural (Cisco doesn't know servers and can't succeed in a market where it is not the quasi-monopolistic dominant player). Some competitors ignored Cisco, and some attempted to preemptively introduce products that delivered similar functionality. In the two years following introduction, competitive reaction was much the same -- yes, they are selling, but we don't think they are a significant threat.
Any lingering doubt about whether Cisco can become a credible supplier has been laid to rest by Cisco's recent quarterly financial disclosures and IDC's revelation that Cisco is now the No. 3 worldwide blade vendor, with slightly over 10% of worldwide (and close to 20% of North American) blade server shipments. In its quarterly call, Cisco revealed Q1 revenues of $171 million, for a $684 million annual revenue run rate, and claimed a bookings run rate of $900 million annually. In addition, it placed its total customer count at 5,400. While the actual customer count is hard to verify, Cisco has reported steady and impressive growth in customers since initial shipment, and Forrester's anecdotal data confirms both significant interest in and installed UCS systems among Forrester's clients.