For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
This March in London, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, you will need to make organizational changes to get it right, and our I&O Council meeting will give you the opportunity to discuss them with other I&O leaders facing the same challenge.
As I’ve been researching my upcoming report on smart city governance, the topic of integrated customer call centers keeps cropping up. What is 3-1-1, and what does it mean for city governance?
In the US, the telephone number 3-1-1 was reserved by the FCC for non-emergency calls in 2003, and cities and counties across the country have since implemented comprehensive call centers to facilitate the delivery of information and services, as well as encourage feedback from citizens. Access has since extended beyond the phone to include government websites, mobile phones, and even social media tools such as Twitter and applications such as SeeClickFix or Hey Gov.
By way of background, 3-1-1 services are generally implemented at the local level – primarily by cities and counties – with typical calls including requests for:
With Cisco's shuttering of Cisco Mail, multitenant cloud email is now (as my colleague Chris Voce called it) a battle royale between Microsoft, Google, and IBM, where the winner will have products, scale, sales channels, and big ecosystems of support.
I am not surprised that Cisco bailed on cloud email. All the signs were there:
The company overpaid for PostPath in the midst of a buying spree. PostPath (which made some folks a lot of money when it sold for $215M) was just one of 17 acquisitions Cisco made in 2007 and 2008. Clearly Cisco was feeling confident that it could buy its way into new markets. (And it did with WebEx.)
Cisco Mail was always to be released "any day now." It's fine to preannounce a product so that buyers know it's coming. But Cisco Mail never quite got shipped. The one reference customer never returned my phone calls.
Cisco's collaboration platform doesn't require email. Messaging is one of the four big boxes of collaboration stuff. (The others are conferencing, workspaces, and social technology.) Messaging in particular can be carved out and offered separately. Cisco doesn't need email. It has WebEx and video conferencing. (The jury's still out on presence, chat, video hosting, and social technology.)
Infrastructure & operations executives have shown tremendous interest in using the cloud to provision email and collaboration services to their employees. In fact, in a recent Forrester survey, nearly half of IT executives reported that they are either interested in or planning a move to the cloud for email. Why? It can be more cost-effective, increase your flexibility, and help you avoid the historical business and technical challenges of deploying these tools yourself.
To date, we’ve talked about four core players in the market: Cisco, Google, IBM, and Microsoft. According to a recent blog post, Cisco has chosen to no longer invest in Cisco Mail. Cisco Mail was formerly known as WebEx Mail – and before that, the email platform was the property of PostPath, which Cisco acquired in 2008 with the intention of providing a more complete collaboration stack alongside its successful WebEx and voice services. I've gathered feedback and worked with my colleagues Ted Schadler, TJ Keitt, and Art Schoeller to synthesize what this means for Infrastructure & Operations pros and for their coordination with Content & Collaboration colleagues.
So what happened and what does it mean for I&O professionals? Here’s our take:
It’s been a few years since I was a disciple of and evangelist for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve director of strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence would move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager and would be connected by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would control them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would allow infrastructure and operations professionals to create an automated, dynamic platform with the agility businesses need to remain relevant and competitive.
As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.
This week at ISSCC, Intel made its first detailed public disclosures about “Poulson,” its next-generation Itanium CPU. While by no means complete, the details Intel did disclose paint a picture of a competent product that will keep the heat on in the high-end UNIX systems market. Highlights include:
Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected as a step down from the current 65 nm Itanium process. This is a plus for Itanium customers, since it allows for denser circuits and cheaper chips. With an industry-record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation.
Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which will mean that HP can move more quickly into production shipments when it's available.
You have to admit that knowledge management (KM) is hard — it’s hard to explain, hard to implement, hard to do right. It’s not just technology: making KM successful takes organizational realignment, process change, and technology combined in the right recipe. And when it is successful, it delivers real results — reduced handle times, increased agent productivity and first-closure rates, better agent consistency, increased customer satisfaction. Check out the case studies on any of the KM vendors' sites to see real statistics. Yet despite these success stories, and despite commercially viable KM solutions being on the market for over 10 years, I am unsure whether KM ever really crossed the chasm.
Why is it, then, that we are seeing renewed interest in KM in 2011? I believe that listening to (and acting on) the voice of agents and customers, coupled with loosening the strings on tightly controlled content, has breathed new life into KM. The most common trends include:
Using more flexible authoring workflows. In the past, knowledge was authored by editors who were not on the frontlines of customer service, who anticipated the questions they thought customers would ask, and who used language inconsistent with customer-speak. Authored content would go through a review cycle and finally be published days after it was initially written. Today, many companies are implementing “just-in-time” authoring, where agents fielding questions from customers, not backroom editors, create content that is immediately available in draft form to other agents. Content then evolves based on usage, and the most frequently used content is published to a customer site, making knowledge leaner and more relevant to real-life situations.
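The just-in-time workflow described above can be sketched in a few lines of code. This is purely an illustration of the draft-then-promote idea; the class names, fields, and promotion threshold are hypothetical, not any KM vendor's actual API:

```python
# Illustrative sketch of "just-in-time" knowledge authoring: agents create
# drafts that are immediately visible to other agents, and heavily used
# content gets promoted to the customer-facing site. All names and the
# promotion threshold are hypothetical, not any vendor's actual product.
from dataclasses import dataclass

PROMOTE_AFTER_USES = 10  # assumed threshold for publishing to customers

@dataclass
class Article:
    title: str
    body: str
    author: str            # the frontline agent who wrote it
    status: str = "draft"  # lifecycle: draft -> published
    uses: int = 0

class KnowledgeBase:
    def __init__(self):
        self.articles: list[Article] = []

    def author(self, title: str, body: str, agent: str) -> Article:
        """An agent captures an answer mid-call; it is a draft immediately."""
        art = Article(title, body, agent)
        self.articles.append(art)
        return art

    def record_use(self, art: Article) -> None:
        """Each reuse by another agent is a vote for the content."""
        art.uses += 1
        if art.status == "draft" and art.uses >= PROMOTE_AFTER_USES:
            art.status = "published"  # now visible on the customer site

kb = KnowledgeBase()
draft = kb.author("Reset router", "Hold the reset button for 10s.", "agent42")
for _ in range(10):
    kb.record_use(draft)
print(draft.status)  # → published
```

The point of the design is that the review cycle is replaced by usage signals: content earns its way onto the customer site rather than waiting on a backroom editor.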
How do you schedule your service desk staff to ensure adequate coverage and achieve service-level targets? Does your service desk solution cover this?
The effective staffing of service desk analysts can be complicated. Leveraging historical volume levels across all of the communication channels is one way to plan ahead. Insight into planned projects from other groups — e.g., application upgrades or other planned releases — is just as important.
Service desk teams should automate the workforce management process as much as possible in order to meet customers’ expectations. Some service desk solutions already include workforce management among their functionalities. If this is a challenge for you today, make sure you include this key requirement in your functionality assessment list, and use the ITSM Support Tools Product Comparison tool for your assessment.
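To make the staffing math concrete, here is a minimal sketch of the standard Erlang C calculation often used for this kind of planning. The formula is textbook queueing theory; the call volumes, handle times, and function names below are purely illustrative and not taken from any service desk product:

```python
# Minimal Erlang C staffing sketch: how many analysts are needed to hit a
# service-level target (e.g., "80% of calls answered within 20 seconds")?
# All input numbers are illustrative.
import math

def erlang_c(agents: int, traffic: float) -> float:
    """Probability that a caller must wait (Erlang C); traffic in Erlangs."""
    if agents <= traffic:
        return 1.0  # overloaded: everyone waits
    s = sum(traffic**k / math.factorial(k) for k in range(agents))
    top = traffic**agents / math.factorial(agents) * agents / (agents - traffic)
    return top / (s + top)

def agents_needed(calls_per_hour: float, avg_handle_secs: float,
                  target_pct: float, target_wait_secs: float) -> int:
    """Smallest head count meeting the service-level target."""
    traffic = calls_per_hour * avg_handle_secs / 3600.0  # offered load, Erlangs
    n = max(1, math.ceil(traffic))
    while True:
        pw = erlang_c(n, traffic)
        # Fraction of calls answered within the target wait time
        sl = 1 - pw * math.exp(-(n - traffic) * target_wait_secs / avg_handle_secs)
        if sl >= target_pct:
            return n
        n += 1

# 120 calls/hour, 300 s average handle time, target 80% within 20 seconds
print(agents_needed(120, 300, 0.80, 20))  # → 14
```

Note that the answer (14 analysts for 10 Erlangs of load) is well above the raw workload, which is exactly why eyeballing historical volume alone tends to understaff the desk.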
In the past week I was briefed by one vendor that has incorporated workforce management into its solution: helpLine 5.1 Workforce Management allows for optimized planning of the service desk team.
The tech industry has generally enjoyed a good reputation with the public and with politicians -- unlike those "bad guys" in banking, or health insurance, or oil and gas. However, analysis that I have done in a just-published report -- Caution: IT Investment May Be Hurting US Job Growth -- suggests that this good reputation could be dented by evidence that business investment in technology could be coming at the expense of hiring.
Some background: In preparing Forrester’s tech market forecasts, I spend a lot of time looking at economic indicators. Employment is not an indicator that I usually track, because I have not been able to find a causal connection between it and tech market growth. However, given all the press attention paid to an unemployment rate in excess of 9% and monthly employment increases measured in the tens of thousands instead of hundreds of thousands, it has been hard to ignore the fact that US job growth has been remarkably feeble in this economic recovery.
Like many connected with IBM as an employee, a customer, or an analyst, I watched IBM's Watson beat two smart humans in three games of Jeopardy. However, I was able to do so under more privileged conditions than sitting on my couch. Along with my colleague John Rymer, I attended an IBM event in San Francisco, in which two of the IBM scientists who had developed Watson provided background on Watson prior to, during commercial breaks in, and after the broadcast of the third and final Jeopardy game. We learned a lot about the time, effort, and approaches that went into making Watson competitive in Jeopardy (including, in answer to John's question, that its code base was a combination of Java and C++). This background information made clear how impressive Watson is as a milestone in the development of artificial intelligence. But it also made clear how much work still needs to be done to take the Watson technology and deploy it against the IBM-identified business problems in healthcare, customer service and call centers, or security.
The IBM scientists showed a scattergram of the percentage of Jeopardy questions that winning human contestants got right vs. the percentage of questions they chose to answer; these winners generally got 80% or more of their answers right while answering 60% to 70% of the questions. They then showed line charts of how Watson performed on the same variables over time: Watson started well below this winners' zone, then moved higher month by month until, by the time of the contest, it was winning over two-thirds of its test matches against past Jeopardy winners. What struck me was how long the training process took before Watson became competitive -- not to mention the amount of computing and human resources IBM put behind the project.
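The tradeoff those charts capture is the classic precision-versus-coverage curve: a question-answering system only buzzes in when its confidence clears a threshold, so raising the threshold improves accuracy but shrinks the share of questions answered. A toy sketch of that mechanic (the confidence/correctness pairs are invented for illustration, not Watson data):

```python
# Toy illustration of the precision-vs.-percent-answered tradeoff behind
# the Watson charts: answer only when confidence clears a threshold.
# The (confidence, correct) pairs are invented sample data.
results = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, False),
    (0.75, True), (0.60, True), (0.55, False), (0.40, False),
    (0.30, True), (0.20, False),
]

def precision_and_coverage(threshold: float):
    """Precision on answered questions, and fraction of questions answered."""
    answered = [correct for conf, correct in results if conf >= threshold]
    coverage = len(answered) / len(results)
    precision = sum(answered) / len(answered) if answered else 0.0
    return precision, coverage

for t in (0.0, 0.5, 0.9):
    p, c = precision_and_coverage(t)
    print(f"threshold {t:.1f}: precision {p:.0%}, answered {c:.0%}")
```

On this invented data, answering everything yields 60% precision, while only answering at high confidence yields 100% precision on a fifth of the questions. Watson's months of training were, in effect, about pushing that whole curve up into the zone where human Jeopardy champions sit.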