CardSpace Is Dead. Long Live Back-Channel Access.

Eve Maler

Microsoft announced during last week's RSA conference that it would not be shipping Windows CardSpace 2.0. A lot of design imperatives weighed on that one deliverable: security, privacy, usability, bridging the enterprise and consumer identity worlds – and being the standard-bearer of the "identity metasystem" and the "laws of identity" to boot.  Something had to give. What are the implications for security and risk professionals?

The CardSpace model had nice phishing-resistance properties that cloud-based identity selectors will, alas, find hard to replicate. But without wide adoption on the open Web, those properties were never going to make a dent anyway. Over time, we'll have to look to other native-app solutions for that.
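To make the title's contrast concrete, here is a minimal sketch of what back-channel access looks like in an OAuth-style flow: the client application exchanges a one-time authorization code for a token by talking to the authorization server directly, server to server, so user credentials never transit the browser. The endpoint, client ID, and secret below are hypothetical placeholders, not taken from any product or from this post.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical OAuth-style back-channel token request. The user-facing
# (front-channel) redirect has already produced a one-time authorization
# code; everything below happens server to server, out of the browser.
TOKEN_ENDPOINT = "https://idp.example.com/oauth/token"  # assumed endpoint

resp = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "authorization_code",
        "code": "one-time-code-from-front-channel",
        "redirect_uri": "https://app.example.com/callback",
    },
    auth=("my-client-id", "my-client-secret"),  # client authenticates itself
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # used for subsequent API calls
```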

Read more

Cisco Sends A Recall On Its Cloud Email Strategy

Christopher Voce

Infrastructure & operations executives have shown tremendous interest in using the cloud to provision email and collaboration services for their employees; in fact, in a recent Forrester survey, nearly half of IT executives reported that they are either interested in or planning a move to the cloud for email. Why? It can be more cost-effective, increase your flexibility, and help control the historical business and technical challenges of deploying these tools yourself.

To date, we’ve talked about four core players in the market: Cisco, Google, IBM, and Microsoft. According to a recent blog post, Cisco has chosen to no longer invest in Cisco Mail. Cisco Mail was formerly known as WebEx Mail – and before that, the email platform was the property of PostPath, which Cisco acquired in 2008 with the intention of providing a more complete collaboration stack alongside its successful WebEx and voice offerings. I've gathered feedback and worked with my colleagues Ted Schadler, TJ Keitt, and Art Schoeller to synthesize what this means for Infrastructure & Operations pros coordinating with their Content & Collaboration colleagues.

So what happened, and what does it mean for I&O professionals? Here’s our take:

Read more

Citrix Acquires EMS-Cortex

John Rakowski

Another year, and Citrix’s strategy of acquiring interesting companies continues: it has announced the purchase of EMS-Cortex. This acquisition caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage the provisioning and delegated administration of hosted business applications such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications in a cloud environment. In theory, this means that customers and vendors will be able to “spin up” core business services quickly in a multitenant environment.

It is an interesting acquisition: vendors are starting to recognize that for their customers to achieve “cloudonomics,” they must ease the route to cloud adoption. While the acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.

Read more

Juniper’s QFabric: The Dark Horse In The Datacenter Fabric Race?

Andre Kindness

It’s been a few years since I was a disciple and evangelist for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve director of strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence would move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager and would be connected by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would control them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would give infrastructure and operations professionals an automated and dynamic platform to address the agility businesses need to remain relevant and competitive.

As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.

Read more

Intel Discloses Details on “Poulson,” Next-Generation Itanium

Richard Fichera

This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While not in any sense complete, the details they did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:

  • Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm node that many observers expected as a step down from the current 65 nm Itanium process. This is a plus for Itanium customers, since it allows for denser circuits and cheaper chips. With an industry-record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
  • Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation.
  • Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which will mean that HP can move more quickly into production shipments when it's available.
Read more

Social Breathes New Life Into Knowledge Management For Customer Service

Kate Leggett

You have to admit that knowledge management (KM) is hard — it’s hard to explain, hard to implement, hard to do right. It’s not just technology: making KM successful takes organizational realignment, process change, and technology combined in the right recipe. And when it is successful, it delivers real results — reduced handle times, increased agent productivity and first-contact closure rates, better agent consistency, increased customer satisfaction. Check out the case studies on any of the KM vendors' sites to see real statistics. Yet despite these success stories, and despite commercially viable KM solutions being on the market for over 10 years, I am unsure whether KM ever really crossed the chasm.

Why, then, are we seeing renewed interest in KM in 2011? I believe it’s because listening to (and acting on) the voice of agents and customers, coupled with loosening the strings on tightly controlled content, has breathed new life into KM. The most common trends include:

  • Using more flexible authoring workflows. In the past, knowledge was authored by editors who were not on the front lines of customer service, who anticipated the questions they thought customers would ask, and who used language inconsistent with customer-speak. Authored content would go through a review cycle and finally be published days after it was initially written. Today, many companies are implementing “just-in-time” authoring, where agents fielding questions from customers, not backroom editors, create content that is immediately available in draft form to other agents. Content then evolves based on usage, and the most frequently used content is published to a customer site, making knowledge leaner and more relevant to real-life situations; the sketch below illustrates this flow.
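As a purely hypothetical illustration of that just-in-time loop (the states, field names, and promotion threshold below are assumptions, not taken from any vendor's product), the lifecycle might look something like this:

```python
from dataclasses import dataclass

PUBLISH_THRESHOLD = 25  # assumed reuse count that triggers publication

@dataclass
class Article:
    title: str
    body: str
    state: str = "draft"   # "draft" = agent-visible; "published" = customer-visible
    reuse_count: int = 0

    def record_reuse(self) -> None:
        """Called whenever another agent attaches this article to a case."""
        self.reuse_count += 1
        if self.state == "draft" and self.reuse_count >= PUBLISH_THRESHOLD:
            self.state = "published"  # usage, not an editor, drives publication

# An agent fielding a live question drafts the answer on the spot...
faq = Article("Reset a locked account",
              "Verify identity, then use the self-service portal.")
# ...and once enough colleagues have reused it, it goes public automatically.
for _ in range(PUBLISH_THRESHOLD):
    faq.record_reuse()
print(faq.state)  # -> "published"
```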
Read more

Staffing Your Service Desk Analysts

Eveline Oehrlich


Question:

How do you schedule your service desk staff to ensure adequate coverage and achieve service-level targets? Does your service desk solution cover this?

Answer:

The effective staffing of service desk analysts can be complicated. Leveraging historical volume levels across all of your communication channels is one way to plan ahead. Insight into planned projects from other groups — e.g., application upgrades or other scheduled releases — is just as important. A common way to translate historical volumes into head count is an Erlang C calculation, sketched below.
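This answer doesn't prescribe a particular method, so treat the following as a minimal, illustrative sketch of the classic Erlang C queueing formula that workforce management tools commonly use in some form; all inputs (contact volume, handle time, the 80/30 service-level target) are made-up example values.

```python
from math import exp, factorial

def erlang_c(agents: int, traffic: float) -> float:
    """Probability that an incoming contact has to wait (Erlang C)."""
    top = (traffic ** agents / factorial(agents)) * (agents / (agents - traffic))
    bottom = sum(traffic ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def agents_needed(contacts_per_hour: float, aht_sec: float,
                  target_sl: float = 0.80, answer_within_sec: float = 30) -> int:
    """Smallest head count that meets the service-level target."""
    traffic = contacts_per_hour * aht_sec / 3600.0  # offered load in Erlangs
    n = int(traffic) + 1                            # must exceed the offered load
    while True:
        p_wait = erlang_c(n, traffic)
        sl = 1 - p_wait * exp(-(n - traffic) * answer_within_sec / aht_sec)
        if sl >= target_sl:
            return n
        n += 1

# e.g. 120 contacts/hour at a 5-minute average handle time, 80/30 target
print(agents_needed(120, 300))  # -> base head count, before shrinkage allowances
```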

Service desk teams should automate the workforce management process as much as possible in order to meet customer expectations. Some service desk solutions already include workforce management functionality. If this is a challenge for you today, make sure you include this key requirement in your functionality assessment list. Use the ITSM Support Tools Product Comparison tool for your assessment.

In the past week, I was briefed by one vendor that has incorporated workforce management into its solution: helpLine 5.1 Workforce Management allows for optimized planning of the service desk team.


IT Investment May Be Hurting US Job Growth

Andrew Bartels

The tech industry has generally enjoyed a good reputation with the public and with politicians -- unlike those "bad guys" in banking, health insurance, or oil and gas. However, analysis in my just-published report -- Caution: IT Investment May Be Hurting US Job Growth -- suggests that this good reputation could be dented by evidence that business investment in technology may be coming at the expense of hiring.

Some background: In preparing Forrester’s tech market forecasts, I spend a lot of time looking at economic indicators. Employment is not an indicator I usually track, because I have found no causal connection between it and tech market growth. However, given all the press attention paid to an unemployment rate in excess of 9% and monthly employment increases measured in the tens of thousands instead of hundreds of thousands, it has been hard to ignore the fact that US job growth has been remarkably feeble in this economic recovery.

Read more

IBM's Watson And Its Implications For Smart Computing

Andrew Bartels

Like many connected with IBM as an employee, a customer, or an analyst, I watched IBM's Watson beat two smart humans in three games of Jeopardy.  However, I was able to do so under more privileged conditions than sitting on my couch.  Along with my colleague John Rymer, I attended an IBM event in San Francisco, in which two of the IBM scientists who had developed Watson provided background on Watson prior to, during commercial breaks in, and after the broadcast of the third and final Jeopardy game.  We learned a lot about the time, effort, and approaches that went into making Watson competitive in Jeopardy (including, in answer to John's question, that its code base was a combination of Java and C++).  This background information made clear how impressive Watson is as a milestone in the development of artificial intelligence.  But it also made clear how much work still needs to be done to take the Watson technology and deploy it against the IBM-identified business problems in healthcare, customer service and call centers, or security.

The IBM scientists showed a scattergram of the percentage of Jeopardy questions that winning human contestants got right vs. the percentage of questions they attempted; winners generally got 80% or more of their answers right while answering 60% to 70% of the questions. They then showed line charts of how Watson did on the same variables over time: Watson started well below this zone, then moved higher month by month until, by the time of the contest, it was winning over two-thirds of its test matches against past Jeopardy winners. But what I noted was how long the training process took before Watson became competitive -- not to mention the amount of computing and human resources IBM put behind the project.
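That right-rate-vs.-attempt-rate tradeoff is essentially a confidence-thresholding curve, and a toy sketch makes it concrete. The candidate answers and confidence scores below are fabricated purely for illustration and say nothing about Watson's actual internals:

```python
# Toy precision-vs-coverage tradeoff: a system that only "buzzes in" when
# its confidence clears a threshold answers fewer questions but gets more
# of them right. All data here is invented for illustration only.
candidates = [  # (confidence score, was the answer actually correct?)
    (0.95, True), (0.90, True), (0.85, True), (0.80, True), (0.75, False),
    (0.70, True), (0.60, False), (0.55, True), (0.40, False), (0.30, False),
]

def precision_and_coverage(threshold):
    attempted = [ok for conf, ok in candidates if conf >= threshold]
    coverage = len(attempted) / len(candidates)  # share of questions answered
    precision = sum(attempted) / len(attempted) if attempted else 0.0
    return precision, coverage

for t in (0.0, 0.5, 0.8):
    p, c = precision_and_coverage(t)
    print(f"threshold {t:.1f}: precision {p:.0%}, coverage {c:.0%}")
# Raising the threshold lifts precision (60% -> 75% -> 100%) as coverage falls.
```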

Read more

Don’t Underestimate The Value Of Information, Documentation, And Expertise!

Andre Kindness

With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, whom they should engage, and when they should start embracing IPv6. As the old adage says, “It takes a village to raise a child,” and Forrester is only one component; therefore, I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only or largest considerations in a change to a new vendor? I would say no.

Support documentation and access to knowledge are especially critical in network design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and evolving data center networks are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:

  • Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Read more