Who Are Your Anchor Vendors?

Glenn O'Donnell

Every day we read about technology vendors making acquisitions and merging with their competitors. Some recent examples: Verizon acquired Terremark for $1.4B to take a leadership role in IaaS, NetApp acquired Akorri to move up the virtualization stack, and the highly publicized “storage shootout” between Dell and HP in late 2010 ended with HP’s winning bid of $2.4B for 3PAR. Since there is no evidence to suggest a decrease in the pace of these acquisitions, it’s important for infrastructure and operations (I&O) professionals to keep a keen eye on these proceedings.

Read more

If You Don’t Manage Everything, You Don’t Manage Anything

Jean-Pierre Garbani

I’m always surprised to see that the Citroen 2CV (CV: Cheval Vapeur, hence the name Deux Chevaux) has such a strong following, even in the United States. Granted, this car was the epitome of efficiency: It used minimal gas (60 miles to the gallon), was eminently practical, and its interior could be cleaned with a garden hose. Because the car was minimalist in the extreme, the gas gauge on early models was a simple dipstick, with marks to show how many liters of gas were left in the tank. For someone like me, who constantly forgot to consult the dipstick before leaving home, it meant that I would run out of gas somewhere far from a station almost every month. A great means of transportation failed regularly for lack of instrumentation. (Later models had a gas gauge.)

This shows how the failure to monitor one element can lead to the failure of the complete system, and that if you don’t manage everything, you don’t manage anything: the next important issue can develop in blissful ignorance.

The point is that we often approach application performance management from the same angle that Citroen used to create the 2CV: monitor only the most critical elements in the name of cost-cutting. This has proved time and again to be fraught with risk. Complex, multitier applications are composed of myriad hardware and software components, any of which can fail.

In application performance management, I see a number of IT operations teams focus their tools on some critical elements and ignore others. But even though many critical hardware and software components have become extremely reliable, that doesn’t mean they are impervious to failure: There is simply no way to guarantee the life of a specific electronic component.
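
As a minimal illustration of the point (a sketch in Python, with hypothetical component and check names, not any particular monitoring product), overall service health is the conjunction of every component check; anything left uninstrumented is a blind spot where the next failure can develop unnoticed:

    # Hypothetical sketch: a multitier application is only as healthy as its
    # least-monitored component; anything missing from `checks` is a blind spot.
    from typing import Callable, Dict, Set

    def service_health(checks: Dict[str, Callable[[], bool]]) -> Dict[str, bool]:
        """Run every registered component check and report per-component status."""
        return {name: check() for name, check in checks.items()}

    def overall_ok(status: Dict[str, bool], all_components: Set[str]) -> bool:
        """Healthy only if every component is monitored AND every check passed."""
        unmonitored = all_components - set(status)
        if unmonitored:
            # The 2CV dipstick problem: these can fail in blissful ignorance.
            return False
        return all(status.values())

    all_components = {"web", "app", "database", "storage", "network"}
    checks = {
        "web": lambda: True,
        "app": lambda: True,
        "database": lambda: True,
        # "storage" and "network" are not instrumented at all.
    }
    print(overall_ok(service_health(checks), all_components))  # False: blind spots remain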

Read more

Cisco Sends A Recall On Its Cloud Email Strategy

Christopher Voce

Infrastructure & operations executives have shown tremendous interest in taking advantage of the cloud to provision email and collaboration services to their employees. In fact, in a recent Forrester survey, nearly half of IT executives reported that they are either interested in or planning a move to the cloud for email. Why? It can be more cost-effective, increase your flexibility, and help control the historical business and technical challenges of deploying these tools yourself.

To date, we’ve talked about four core players in the market: Cisco, Google, IBM, and Microsoft. According to a recent blog post, Cisco has chosen to no longer invest in Cisco Mail. Cisco Mail was formerly known as WebEx Mail, and before that the email platform was the property of PostPath, which Cisco acquired in 2008 with the intention of providing a more complete collaboration stack alongside its successful WebEx and voice services. I’ve gathered feedback and worked with my colleagues Ted Schadler, TJ Keitt, and Art Schoeller to synthesize and discuss what this means for Infrastructure & Operations pros coordinating with their Content & Collaboration colleagues.

So what happened, and what does it mean for I&O professionals? Here’s our take:

Read more

Juniper’s QFabric: The Dark Horse In The Datacenter Fabric Race?

Andre Kindness

It’s been a few years since I was a disciple and evangelized for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve Director of Strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence was going to move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager and would be connected by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would control them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would allow infrastructure and operations professionals to create an automated and dynamic platform to address the agility businesses need to remain relevant and competitive.
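
To make the idea concrete, here is a small, hypothetical sketch in Python (not ProCurve code; the policy and attribute names are invented) of how a centrally defined policy could be evaluated at the edge when a user connects, keyed on the attributes the architecture describes, such as identity, device, time, and location:

    # Hypothetical sketch of policy evaluation at the network edge; the policies
    # and attributes below are illustrative, not HP ProCurve's implementation.
    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class Session:
        user: str
        role: str
        device: str
        location: str
        hour: int  # local time, 0-23

    @dataclass
    class Policy:
        name: str
        roles: Set[str]        # who the policy applies to
        allowed_hours: range   # when access is permitted
        services: List[str]    # services the edge delivers when the policy matches

    def services_for(session: Session, policies: List[Policy]) -> List[str]:
        """The edge consults centrally managed policies and returns the services
        this session should receive; anything not granted is denied by default."""
        granted: List[str] = []
        for p in policies:
            if session.role in p.roles and session.hour in p.allowed_hours:
                granted.extend(p.services)
        return granted

    policies = [
        Policy("engineering-day", {"engineer"}, range(7, 20), ["voip", "lab-vlan"]),
        Policy("guest-hours", {"guest"}, range(8, 18), ["internet-only"]),
    ]
    session = Session(user="jdoe", role="engineer", device="laptop", location="HQ", hour=10)
    print(services_for(session, policies))  # ['voip', 'lab-vlan']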

As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.

Read more

Intel Discloses Details on “Poulson,” Next-Generation Itanium

Richard Fichera

This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While not in any sense complete, the details they did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:

  • Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected to see as a step down from the current 65 nm Itanium process. This is a plus for Itanium consumers, since it allows for denser circuits and cheaper chips. With an industry-record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
  • Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation.
  • Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which will mean that HP can move more quickly into production shipments when it's available.
Read more

Staffing Your Service Desk Analysts

Eveline Oehrlich

Question:

How do you schedule your service desk staff to ensure adequate coverage and achieve service-level targets? Does your service desk solution cover this?

Answer:

The effective staffing of service desk analysts can be complicated. Leveraging historic volume levels across all of your communication channels is one way to plan ahead. Insight into planned projects from other groups, e.g., application upgrades or other planned releases, is important for planning ahead as well.
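
As a back-of-the-envelope sketch of the historic-volume approach (the volumes, handle times, and utilization target below are hypothetical, not drawn from any Forrester data or specific product), required analysts per hour can be estimated per channel from workload and a target utilization:

    # Hypothetical staffing sketch: estimate analysts per hour from historic
    # contact volumes and average handle times; numbers are illustrative only.
    import math

    def analysts_needed(contacts_per_hour: float,
                        avg_handle_minutes: float,
                        target_utilization: float = 0.8) -> int:
        """Simple workload model: required analysts = workload hours / utilization."""
        workload_hours = contacts_per_hour * avg_handle_minutes / 60.0
        return math.ceil(workload_hours / target_utilization)

    # Historic volumes and handle times per channel for one hourly slot (hypothetical).
    hourly_volume  = {"phone": 40, "email": 25, "chat": 15}
    handle_minutes = {"phone": 8,  "email": 6,  "chat": 5}

    per_channel = {c: analysts_needed(hourly_volume[c], handle_minutes[c])
                   for c in hourly_volume}
    print(per_channel)                # {'phone': 7, 'email': 4, 'chat': 2}
    print(sum(per_channel.values()))  # 13 analysts if channels are staffed separately

A real workforce management module would refine this with queuing models such as Erlang C, shift constraints, and shrinkage, but the inputs are the same historic volumes.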

Service desk teams should automate the workforce management process as much as possible in order to meet customers’ expectations. Some service desk solutions already include workforce management as part of their functionality. If this is a challenge for you today, make sure that you include this key requirement in your functionality assessment list. Use the ITSM Support Tools Product Comparison tool for your assessment.

In the past week, I was briefed by one vendor that has incorporated workforce management into its solution: helpLine 5.1 Workforce Management allows for optimized planning of the service desk team.

Don’t Underestimate The Value Of Information, Documentation, And Expertise!

Andre Kindness

With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, whom they should engage, and when they should start embracing IPv6. As the old adage says, “It takes a village to raise a child,” and Forrester is only one part of that village; therefore, I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only or largest considerations when changing to a new vendor? I would say no.

Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center network are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:

  • Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Read more

AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since the introduction of its Core 2 architecture, Intel has reversed much of the damage AMD did to it in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several AMD-based system products have also been cited to us by their manufacturers as enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with their attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext that the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should probably be able to ship systems during Q3, and that AMD is still certifying them as compatible with the current sockets used for the 12-core 6100 CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, for large throughput-oriented environments with high numbers of processes (a description that fits a large number of web and middleware environments), these CPUs, each with up to a 50% per-core performance advantage over the current AMD CPUs, may deliver some impressive benchmarks and keep the competition in the server space at a boil, which in the end is always helpful to customers.
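
Taking the post’s own figures at face value, a rough, hypothetical upper bound on aggregate throughput (assuming perfect scaling, which real workloads will not achieve) works out to about double:

    # Rough, hypothetical upper bound implied by the figures above:
    # 16 cores vs. 12 today, with up to a 50% per-core gain, perfect scaling assumed.
    current_cores, new_cores = 12, 16
    per_core_gain = 1.5  # "up to a 50% performance advantage per core"
    upper_bound = (new_cores / current_cores) * per_core_gain
    print(f"Aggregate throughput upper bound: {upper_bound:.1f}x")  # 2.0x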

Is Infrastructure & Operations Vulnerable To Job Market Trends?

Jean-Pierre Garbani

A couple of weeks ago, I read that one of the largest US car makers was trying to buy out several thousand machinists and welders. While we have grown accustomed to bad news in this economy, what I found significant was that these were skilled workers. Personally, I find it a lot easier to write code than to weld two pieces of steel together, and I have tried both.

For the past 20 years, the job market in industrialized countries has shown a demand increase at the high and low ends of the wage and skill scale, to the detriment of the middle. Although it’s something that we may have intuitively perceived in our day-to-day lives, a 2010 paper by David Autor of MIT confirms the trend:

“. . . the structure of job opportunities in the United States has sharply polarized over the past two decades, with expanding job opportunities in both high-skill, high-wage occupations and low-skill, low-wage occupations, coupled with contracting opportunities in middle-wage, middle-skill white-collar and blue-collar jobs.”

One of the reasons for this bipolarization of the job market is that most of the tasks in the middle market are based on well-known and well-documented procedures that can be easily automated by software (or simply offshored). This leaves, at the high end, jobs that require analytical and decision-making skills usually based on a solid education, and at the low end, “situational adaptability, visual and language recognition, and in-person interactions. . . . and little in the way of formal education.”

Can this happen to IT? As we fast-forward to an industrial IT, we tend to replicate what other industries did before us, that is, remove the person in the middle through automation and thus polarize the skill and wage opportunities at both ends of the scale.

Read more

BSM Rediscovered

Jean-Pierre Garbani

I have in the past lamented the evolution of BSM into more of an ITIL support solution than the pure IT management project that we embarked on seven years ago. In the early years of BSM, we were all convinced of the importance of application dependency discovery: It was the bridge between the user, who sees an application, and IT, which sees infrastructure. We were all convinced that discovery tools should be embedded in infrastructure management solutions to improve them. I remember conversations with product managers at all of the big four, and we all agreed at the time that the “repository” of dependencies, later to become the CMDB, was not a standalone solution. How little we knew!

What actually happened was that the discovery tools showed a lot of limitations, and the imperfect CMDB that resulted became the center of the ITIL v2 universe. The two essential components that we saw in BSM for improving the breed of system management tools were quietly forgotten. These two major failures are: 1) real-time dependency discovery, because last month’s application dependencies are as good as yesterday’s newspaper when it comes to root cause analysis or change detection, and 2) the reworking of tools around these dependencies, because it would have added a level of visibility and intelligence that was sorely lacking in the then-current batch of monitoring and management solutions. But there is hope on the IT operations horizon.
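
A minimal sketch (in Python, with hypothetical component names, not any vendor’s model) of why an up-to-date dependency graph is the bridge between what the user sees and what IT sees: given current dependencies, a failing infrastructure element can be traced directly to the business services it impacts.

    # Hypothetical sketch: impact analysis over an application dependency graph.
    # The graph must be current; stale dependencies point root cause at the wrong place.
    from collections import defaultdict, deque
    from typing import Dict, List, Set

    def impacted(depends_on: Dict[str, List[str]], failed: str) -> Set[str]:
        """Return every node that directly or transitively depends on the failed element."""
        dependents = defaultdict(set)          # invert edges: who depends on whom
        for node, deps in depends_on.items():
            for d in deps:
                dependents[d].add(node)
        seen, queue = set(), deque([failed])
        while queue:
            for up in dependents[queue.popleft()]:
                if up not in seen:
                    seen.add(up)
                    queue.append(up)
        return seen

    # What the user sees (an application) mapped to what IT sees (infrastructure).
    depends_on = {
        "order-entry-app": ["app-server-1", "db-cluster"],
        "app-server-1": ["vm-host-7"],
        "db-cluster": ["san-array-2"],
    }
    print(sorted(impacted(depends_on, "san-array-2")))  # ['db-cluster', 'order-entry-app']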

These past few days, I have been briefed by two new companies that are actually going back to the roots of BSM.

Neebula has introduced a real-time discovery solution that continuously updates itself and is embedded into an intelligent event and impact analysis monitor. It also discovers applications in the cloud.

Read more