I’m always surprised to see that the Citroen 2CV (CV: Cheval Vapeur, hence the name Deux Chevaux) has such a strong following, even in the United States. Granted, this car was the epitome of efficiency: It used minimum gas (60 miles to the gallon), was eminently practical, and its interior could be cleaned with a garden hose. Because the car was minimalist to the extreme, the gas gauge on the early models was a dipstick of some material, with marks to show how many liters of gas were left in the tank. For someone like me, who constantly forgot to consult the dipstick before leaving home, it meant that I would be out of gas somewhere far from a station almost every month. A great means of transportation failed regularly for lack of instrumentation. (Later models had a gas gauge.)
This shows how failure to monitor one element can lead to the failure of the complete system — and that if you don’t manage everything, you don’t manage anything, since the next important issue can develop in blissful ignorance.
The point is that we often approach application performance management from the same angle that Citroen used to create the 2CV: provide monitoring for only the most critical elements in the name of cost-cutting. This has proved time and again to be fraught with risk. Complex, multitier applications are composed of a myriad of hardware and software components, any of which can fail.
In application performance management, I see a number of IT operations teams focus their tools on some critical elements and ignore others. But even though many critical hardware and software components have become extremely reliable, that doesn’t mean they are impervious to failure: There is simply no way to guarantee the life of a specific electronic component.
For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
In London this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right, and our I&O Council meeting will give you the opportunity to discuss them with other I&O leaders facing the same challenge.
Infrastructure & operations executives have shown tremendous interest in opportunities to take advantage of the cloud to provision email and collaboration services for their employees. In fact, in a recent Forrester survey, nearly half of IT executives reported that they are either interested in or planning a move to the cloud for email. Why? It can be more cost-effective, increase your flexibility, and help you avoid the historical business and technical challenges of deploying these tools yourself.
To date, we’ve talked about four core players in the market: Cisco, Google, IBM, and Microsoft. According to a recent blog post, Cisco has chosen to no longer invest in Cisco Mail. Cisco Mail was formerly known as WebEx Mail – and before that, the email platform was the property of PostPath, which Cisco acquired in 2008 with the intention of providing a more complete collaboration stack alongside its successful WebEx and voice services. I've gathered feedback and worked with my colleagues Ted Schadler, TJ Keitt, and Art Schoeller to synthesize what this means for Infrastructure & Operations pros coordinating with their Content & Collaboration colleagues.
So what happened and what does it mean for I&O professionals? Here’s our take:
Another year begins, and Citrix’s strategy of acquiring interesting companies continues with the announced purchase of EMS-Cortex. This acquisition caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage the provisioning and delegated administration of hosted business applications in a cloud environment, such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications. In theory, this means that customers and vendors will be able to “spin up” core business services quickly in a multitenant environment.
It is an interesting acquisition, as vendors are starting to recognize that, for their customers to achieve “cloudonomics,” they must ease the route to cloud adoption. While this acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.
It’s been a few years since I was a disciple and evangelized for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve Director of Strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence would move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager and would be connected by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would control them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would allow infrastructure and operations professionals to create an automated and dynamic platform to address the agility businesses need to remain relevant and competitive.
As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.
This past month or so, I’ve been working with a number of Forrester clients who are either coming up on end-of-life storage hardware or adding more capacity to their existing environments. In either case, the question always starts with “Who should we be using?” This situation comes up frequently, and I felt the need to point out some changes happening in organizations’ IT environments, and why this should be one of the last questions to ask.
Virtualization continues to move forward in most organizations. Although most environments are only 30% to 40% virtualized, there is an aggressive initiative to virtualize as much as possible. In Forrester surveys, virtualization was one of the top three initiatives for 2010, and I have no doubt it will be for 2011 as well. This means there is a great deal of responsibility (and budget) on the virtualization administrator to make this happen.
Teams are being assembled to think about and design for a private cloud. This is no longer an abstract initiative; it is actually happening. Rollouts may vary from one organization to another, but the reality is that business growth initiatives are forcing IT to evolve its overall environment to support them. And if it isn’t, there’s a problem.
Businesses are moving at lightning speed. Today, the competitive landscape in any industry is aggressive. Organizations are looking to up their game, creating new growth initiatives and leveraging technology platforms to do so. There are so many resources at their fingertips (public cloud services from AWS, etc.) that they can essentially bypass an IT department and, if savvy enough, use external resources for their needs. The bottom line is, if IT can’t do it fast enough, IT becomes less relevant to the business.
This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While not in any sense complete, the details they did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:
Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected to see as a step down from the current 65 nm Itanium process. This is a plus for Itanium consumers, since it allows for denser circuits and cheaper chips. With an industry record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation.
Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which means HP can move into production shipments more quickly when it becomes available.
How do you schedule your service desk staff to ensure excellent staffing and achieve service-level targets? Does your service desk solution cover this?
The effective staffing of service desk analysts can be complicated. Leveraging historical volume levels across all communication channels is one way to plan ahead. Having insight into planned projects from other groups — e.g., application upgrades or other planned releases — is important as well.
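To make the planning step concrete, here is a minimal sketch of the classic Erlang C calculation that workforce management tools commonly apply to historical volume data. The function names, the 80%-answered-within-20-seconds service-level target, and the example volumes are illustrative assumptions, not details from any vendor's product.

```python
import math

def erlang_c(agents: int, load: float) -> float:
    """Probability that an arriving contact must wait (Erlang C).
    load = arrival rate x average handle time, in erlangs."""
    # Erlang B computed iteratively for numerical stability,
    # then converted to Erlang C.
    b = 1.0
    for n in range(1, agents + 1):
        b = load * b / (n + load * b)
    return agents * b / (agents - load * (1 - b))

def agents_needed(calls_per_hour: float, aht_minutes: float,
                  target_sl: float = 0.80, answer_within_s: int = 20) -> int:
    """Smallest head count meeting a service level such as
    '80% of contacts answered within 20 seconds'."""
    aht_s = aht_minutes * 60
    load = calls_per_hour * aht_s / 3600   # offered load in erlangs
    n = math.ceil(load) + 1                # staff must exceed load to be stable
    while True:
        wait_prob = erlang_c(n, load)
        sl = 1 - wait_prob * math.exp(-(n - load) * answer_within_s / aht_s)
        if sl >= target_sl:
            return n
        n += 1

# Example: 100 calls/hour at a 5-minute average handle time
print(agents_needed(100, 5))
```

Running this per channel and per half-hour interval against historical volumes is, in essence, what automated workforce management does; the project-awareness point above then adjusts those forecasts for known upcoming spikes.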
Service desk teams should automate the workforce management process as much as possible in order to meet customers’ expectations. Some service desk solutions already include workforce management among their functionalities. If this is a challenge for you today, make sure that you include this key requirement in your functionality assessment list. Use the ITSM Support Tools Product Comparison tool for your assessment.
In the past week I was briefed by one vendor that has incorporated workforce management into its solution: helpLine 5.1 Workforce Management allows for optimized planning of the service desk team.
With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, who they should engage, and when they should start embracing IPv6. Like the old adage “It takes a village to raise a child,” Forrester is only one component; therefore, I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If the vendor doesn’t understand your business goals or have the knowledge to solve your business issues, are they a good partner? Are acquisition and warranty costs the only or largest considerations to making a change to a new vendor? I would say no.
Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing tens or hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center network are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:
Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Since the introduction of its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments rather than maximum per-core performance. Several manufacturers have also told us that their AMD-based systems are enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).
But the real news last week was the quiet subtext: The anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and AMD is still certifying them as compatible with the current sockets used for the 12-core 6000 CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.
Actual performance of these systems will obviously depend on the workloads being run. Our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, for large throughput-oriented environments with high process counts (a description that fits many web and middleware environments), these CPUs, each with up to a 50% per-core performance advantage over the current AMD CPUs, may deliver some impressive benchmarks and keep competition in the server space at a boil, which in the end is always helpful to customers.