SMARTnet Is Dead! Long Live The Lifetime Warranty!

Just kidding, Cisco’s SMARTnet isn’t dead, but I&O managers have a new warranty option for networking hardware: free hardware replacement, bug fixes, and tech support. In short, enterprises can now expect a basic break/fix solution free from most vendors on edge and distribution switches or switch/routers. Hallelujah!

Everyone owes a big thank-you to HP. Over the past 10 years, while holding less than 5% of the market, HP’s ProCurve line forced its competitors’ hands, reset the industry’s warranty choices, and revolutionized what customers should expect from their networking vendors. Leveraging the lifetime warranty to separate itself from the other seven dwarfs and Gigantor while trying to offset “you get what you pay for,” HP went to market offering next-business-day hardware replacement, phone and email support, and software bug fixes and updates. HP wanted customers to understand that only companies that deliver quality products could sustain this type of service model. After the acquisition, HP extended the warranty to some of the 3Com/H3C products as well.

Within the past two years, most vendors have followed suit and offered their version of a lifetime warranty:

Read more

What Do Spark Plugs And WLAN Solutions Have In Common?

It’s not the most daring and cutting-edge prediction to say 2011 will be Wi-Fi’s second coming. However, you might be caught off guard when I tell you not to worry about a vendor’s WLAN architecture; your business needs will flush out the right one. Despite the initial hype seven years ago that Wi-Fi was going to be the new edge, it’s been the second choice for most users to connect with at work — but that will change. A tidal wave of wireless devices will be crashing through the enterprise front door very soon. Just look at the carriers scrambling to build out their infrastructure — there’s no shortage of stories about AT&T and its build-out of Wi-Fi in metropolitan areas. And users have fused their work and personal phones and are looking for coverage beyond their carrier data plans.

The time to start was yesterday, and you have a ton of work to do. Your edge will be servicing:

  • Employees with corporate netbooks and their own smartphones and/or tablets who watch training videos on YouTube from companies like VMware.
  • Devices like torque tools, temperature sensors in exothermic chambers, ambient light sensors, and a myriad of other devices.
  • Contractors with their own laptops, netbooks, tablets, and/or smartphones who need access to specific company applications.
  • Guests like account executives entering customer information into their CRM programs.
  • All the things being developed at venture-capital-backed incubators.
Read more

Juniper’s QFabric: The Dark Horse In The Datacenter Fabric Race?

It’s been a few years since I was a disciple and evangelized for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve Director of Strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence was going to move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager and would be connected by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would control them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would allow infrastructure and operations professionals to create an automated and dynamic platform to address the agility needed by businesses to remain relevant and competitive.
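
To make that idea concrete, here is a minimal sketch (in Python, not anything HP shipped) of how a central policy manager might map the attributes the AEA cared about — identity, device, location, time — onto enforcement actions an edge switch could apply. Every class name, field, and rule below is an invented illustration of the concept, not ProCurve code.

```python
# Illustrative only (Python, not ProCurve software): a central policy manager
# mapping session attributes onto actions an edge switch could enforce.
from dataclasses import dataclass


@dataclass
class Session:
    identity: str   # authenticated user or device identity
    device: str     # e.g., "corporate-laptop", "byod-tablet", "guest-laptop"
    location: str   # e.g., "hq-floor-3"
    hour: int       # local hour of day, 0-23


@dataclass
class EdgePolicy:
    vlan: int             # VLAN the edge port is placed into
    rate_limit_mbps: int  # per-port rate limit
    acl: str              # name of the access list applied at the port


def resolve_policy(s: Session) -> EdgePolicy:
    """Turn identity, device, location, and time into an enforceable edge policy."""
    if s.device.startswith("corporate") and 7 <= s.hour <= 19:
        return EdgePolicy(vlan=10, rate_limit_mbps=1000, acl="corp-full-access")
    if s.device.startswith("byod"):
        return EdgePolicy(vlan=30, rate_limit_mbps=100, acl="internet-and-email-only")
    return EdgePolicy(vlan=99, rate_limit_mbps=10, acl="guest-portal-only")


print(resolve_policy(Session("jdoe", "byod-tablet", "hq-floor-3", 14)))
```

The point of the sketch is simply that the decision logic lives in one place while enforcement happens at the port the user touches, which is the essence of the AEA pitch.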

As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.

Read more

Don’t Underestimate The Value Of Information, Documentation, And Expertise!

With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, whom they should engage, and when they should start embracing IPv6. Like the old adage “It takes a village to raise a child,” Forrester is only one component; therefore, I started to compile a list of vendors and tactical documentation links that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only or largest considerations when changing to a new vendor? I would say no.

Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing tens to hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:

  • Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Read more

Networks Are About The Users, Not The Apps!

Virtualization and cloud talk just woke the sleeping giant, networking. For too long, we were isolated in our L2-L4 world, soundly sleeping as VMs were created and a distant cousin, the vSwitch, was born. Sure, we can do a little of this and a little of that in this virtual world, but the reality is that everything is manually driven and a one-off process. For example, vendors talk about moving policies from one port to another when a VM moves, but they don’t discuss policies moving around automatically on links from edge switches to the distribution switches. Even management tools are scrambling to solve issues within the data center. In this game of catch, I’m hearing people bandy the word “app” around. Everyone from server personnel to networking administrators is trying to relate to an app. Network management tools, traffic sensors, switches, and WAN optimization products are being developed to measure, monitor, or report on the performance of apps in some form or another.
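
For illustration, here is a rough sketch of the kind of automation this paragraph says is still missing: a handler that, when a VM migrates, pulls its policy off the old edge and distribution ports and reapplies it along the new path. The classes, method names, and port names are hypothetical; no vendor API is implied.

```python
# Hypothetical sketch of "policies that follow the VM" -- the automation this
# post argues is missing. Names are illustrative only; no vendor API is implied.

class SwitchPort:
    """One physical port on an edge or distribution switch."""

    def __init__(self, switch, port):
        self.switch = switch
        self.port = port
        self.policies = set()  # names of policies currently applied here

    def apply(self, policy):
        self.policies.add(policy)
        print(f"{self.switch}/{self.port}: applied {policy}")

    def remove(self, policy):
        self.policies.discard(policy)
        print(f"{self.switch}/{self.port}: removed {policy}")


def on_vm_migrated(vm, policy, old_path, new_path):
    """Re-home a VM's policy along the whole path, edge through distribution."""
    for port in old_path:
        port.remove(policy)
    for port in new_path:
        port.apply(policy)


# Example: a VM moves from a host behind edge-1 to a host behind edge-7.
old_path = [SwitchPort("edge-1", "g1/0/12"), SwitchPort("dist-1", "te1/1")]
new_path = [SwitchPort("edge-7", "g1/0/3"), SwitchPort("dist-2", "te2/1")]
on_vm_migrated("web-vm-42", "pci-segment-acl", old_path, new_path)
```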

Why is “app” the common language? Why are networks relating to “apps”? With everything coming down the pike, we are designing for yesterday instead of tomorrow. Infrastructure and operations professionals will have to deal with:

  • Web 2.0 tools. Traditional apps can alienate users when language and customs aren’t designed into the enterprise apps, yet no one app can deal with the sheer magnitude of languages. Web 2.0 technologies — such as social networking sites, blogs, wikis, video-sharing sites, hosted services, web applications, mashups, and folksonomies — connect people with each other globally to collaborate and share information, but in a way that is easily customized and localized. For example, mashups allow apps to be easily created in any language, with data sourced from a variety of locations.
Read more

What Is The Cost Of Being Blind, Insecure, And Unmanaged?

With the increased presence of business principles within the IT arena, I get a lot of inquiries from Infrastructure & Operations Professionals who are trying to figure out how to justify their investment in a particular product or solution in the security, monitoring, and management areas. Since most marketing personnel view this either as a waste of resources in a futile quest of achievement or as too intimidating to even begin to tackle, IT vendors have not provided their customers with more than marketing words: lower TCO, more efficient, higher value, more secure, or more reliable. It’s a bummer, since the request is a valid concern for any IT organization. Consider that other industries with complex products and services -- nuclear power plants, medical delivery systems, or air traffic control -- look at risk and reward all the time to justify their investments. They all use some form of probabilistic risk assessment (PRA) tools to figure out technological, financial, and programmatic risk by combining it with disaster costs: revenue losses, productivity losses, compliance and/or reporting penalties, penalties and loss of discounts, impact to customers and strategic partners, and impact to cash flow.

PRA teams use fault tree analysis (FTA) for top-down assessment and failure mode and effects analysis (FMEA) for bottom-up assessment.
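
As a back-of-the-envelope example of how those pieces combine, the sketch below multiplies an assumed annual failure probability by the disaster cost buckets listed above to get an annual loss expectancy, then weighs it against the cost of a mitigation. Every figure is invented for illustration; plug in your own probabilities and costs.

```python
# Back-of-the-envelope expected-loss math in the spirit of PRA.
# All figures below are invented for illustration.

disaster_costs = {
    "revenue_loss": 400_000,
    "productivity_loss": 150_000,
    "compliance_and_reporting_penalties": 75_000,
    "lost_discounts": 25_000,
    "customer_and_partner_impact": 100_000,
    "cash_flow_impact": 50_000,
}

annual_probability = 0.08    # assumed chance of the outage scenario in a year
single_loss = sum(disaster_costs.values())
annual_loss_expectancy = annual_probability * single_loss

mitigation_cost = 45_000     # e.g., yearly cost of a monitoring/management tool
residual_probability = 0.02  # assumed probability after the mitigation is in place
residual_ale = residual_probability * single_loss

print(f"Single-loss expectancy:    ${single_loss:,.0f}")
print(f"Annual loss expectancy:    ${annual_loss_expectancy:,.0f}")
print(f"ALE after mitigation:      ${residual_ale:,.0f}")
print(f"Net benefit of mitigation: ${annual_loss_expectancy - residual_ale - mitigation_cost:,.0f}")
```

Even a rough calculation like this turns “more secure, more reliable” into a number a CFO can argue with, which is the point of borrowing PRA from other industries.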

Read more

IPv6: Drive Innovation With Rewards, Not Fear

I’m a sucker for good, biting humor, and in the spirit of Stephen Colbert’s Medals of Fear that he gave to a few distinguished souls (the press, Mark Zuckerberg, Anderson Cooper) at the rally in Washington, D.C., I would like to hand a medal to the U.S. State Department for its 1999 publication of a country-by-country set of "Y2K" warnings — “End of Days” scenarios and solutions — for Americans doing business in 194 nations. I would give another medal to IPv6, the most drawn-out killer technology to date — and the one that has had the longest run at trying to scare everyone about the end of IPv4. At Forrester, we are starting to see the adoption freighter slowly turning: inquiries are rolling in; governments are accelerating their adoption with new mandates; vendors are including IPv6 in their solutions; and the Number Resource Organization is escalating its announcements about the depletion of IPv4 addresses (only 5% left!). To add to the drama, vendors are creating IPv4 address countdown clocks to generate buzz and differentiation. Yet these scare tactics haven’t worked, because technology pundits haven’t spoken about IPv6 in business terms. There is enormous business value in IPv6; those who embrace it will be the new leaders in their space.

Read more

Juniper: Reading The Writing On The Wall

Like the polar ice caps, the traditional edge of the network — supporting desktops, printers, APs, VoIP phones — is eroding and giving way to a virtual edge. Even with the thawing of IT spending, the growth and availability of physical edge ports aren’t keeping up with the devices connecting to the network; 802.11 and cellular will be the future of most connections for smartphones, notebooks, tablets, HVAC controls, point-of-sale systems, etc.

Read more

Part 2: Three FUD Statements I&O Managers Use Not To Implement Standards-Based Networking

Carrying on from my thoughts in Part 1: It’s time to start deploying purely standards-based infrastructure outside the data center; data center protocols are just starting to be created for a converged and virtualized world. With the number of tested and deployed standards protocols, there’s no excuse for networks to be locked into a certain vendor with proprietary protocols when standards-based network solutions provide access to compelling volume economics, the flexibility to adopt a much wider array of solutions, and relief from hiring specialized talent to run a science project. Although many organizations understand that standards-based networking provides them with the flexibility to choose from the best available solutions at a lower cost of ownership, they still feel trapped. Listed below are the three top shackles and the keys to unlock them:

Read more

Part 1: Standards And Proprietary Technology: A Time And Place For Both

I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion) (insert boo and hiss). Vendor A tries to evoke sympathy for his plight: he can’t offer the latest features because he doesn’t have the same powers as the villain and has chosen to follow the morally correct path, one filled with prolonged and undeserved suffering in support of standards-based functions. What poppycock! There is no such thing as good and evil in networking. If the vendors’ positions were reversed, vendor A would be doing the same thing as its competitors. Every vendor has some type of special sauce to differentiate itself. Anyway, it’s business, plain and simple; networking fundamentally needs proprietary and standards-based features. However, there’s a time and place for both.

With that in mind, I want to let you know that I’m a big proponent of standards-based networking. The use of open standards expands the choices that help you reduce risk, implement durable solutions, obtain flexibility, and benefit from quality. Ninety-plus percent of networking should leverage standard protocols, but to get to that point, features need to go through three stages:

Read more