Part 1: Standards And Proprietary Technology: A Time And Place For Both

I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion) (insert boo and hiss). Vendor A tries to evoke sympathy for his plight: he can’t offer the latest features because he doesn’t have the same powers as the villain and has chosen the morally correct path, one filled with prolonged and undeserved suffering in support of standards-based functions. What poppycock! There is no such thing as good and evil in networking. If the vendors’ positions were reversed, vendor A would be doing the same thing as its competitors. Every vendor has some type of special sauce to differentiate itself. Anyway, it’s business, plain and simple; networking fundamentally needs both proprietary and standards-based features. However, there’s a time and place for each.

With that in mind, I want to let you know that I’m a big proponent of standards-based networking. Open standards broaden your choices, helping you reduce risk, implement durable solutions, gain flexibility, and benefit from quality. Ninety-plus percent of networking should leverage standard protocols, but to get to that point, features need to go through three stages:

  • Stage 1: Innovation. In general, features and functionality are born out of necessity; vendors will tackle the endeavor if they know it will increase their bottom line. For example, Cisco created Cisco Discovery Protocol (CDP) in 1994 to help network managers automate network deployments and to differentiate its solutions from those of leading networking providers Cabletron and 3Com.
  • Stage 2: Crossing the Chasm. Ultimately, the market chooses the best technology to solve business issues. Some technology lives on, like TCP/IP; much of it disappears: IPX/SPX, NetBIOS, 100BaseVG.
  • Stage 3: Standards. Since the standards bodies’ (IEEE, IETF, ISO, etc.) members are volunteers from academia, vendors, and end users with only a limited set of resources, they have to use those resources wisely. With a business case in hand, the standards bodies will then work to standardize the technology, often with the help of an innovator. Xerox, for example, worked with DEC and Intel to promote Ethernet as a standard.

Fundamentally, data center transformation is shaking up the industry, and there is a ton of innovation occurring within data center networking. I&O managers tasked with network refreshes should keep these two things in mind:

  • Data center networking is at the beginning of transformation. It will take some time for the standards bodies to standardize features and for vendors to implement them (VEPA, TRILL, 802.1aq, 802.1 DCB). This means solutions are highly proprietary (IRF, vPC, VCS) and network refresh criteria should include vendor road maps to standards-based features. Refer to Networking Frequently Asked Questions for more details and suggestions.  
  • Networks outside the data center should leverage standards-based protocols. Don’t be squeezed or pressured into purchasing from the same vendor because it’s the only one to support x, y, and z. If you’ve had that vendor’s products for a while, it’s likely that someone, including that vendor, has implemented standards-based equivalents.

Stay tuned for a document I’ll be publishing on this in the next few weeks. It’s an FAQ that tackles innovation, standards, dual-sourcing your network, key vendors, and other common network concerns. In the meantime, I’m always happy to get your thoughts on the matter.


Innovation outside of the data center

Just to play devil’s advocate ... Areas that are rapidly innovating will be full of proprietary technologies until the market identifies the best or most popular and standardizes them. Outside of the data center, innovation is slower, but there are still several important technologies that don’t have a good standards-based implementation. Closely related examples are NAP/NAC, network virtualization, and QoS.

Looking at NAC/NAP as an isolated function, one might recommend a NAP/NAC solution without infrastructure dependencies that uses standards to implement edge control. But it is important to consider the impact that NAP/NAC/LLDP could have on the configuration of the edge device. For example, will network virtualization to the edge consist exclusively of VLAN assignment, or will more complicated network virtualization schemes become increasingly popular? And do the current NAP/LLDP/802.1X standards have the extensibility to include edge device information that could impact the configuration of the edge switch?

QoS is one of the configurations that may be affected by the capabilities of the edge device. As I see it, QoS could become more, or less, significant in the future. There are lots of new applications that could require a more advanced QoS implementation than what is typical in most enterprises with VoIP deployments today: new collaboration technologies, growth in video, desktop videoconferencing, VDI, PCoIP, and other apps could result in a significantly more complex QoS implementation. The counterargument is that enterprises should consider their data center to be a cloud and try to ensure that all types of clients, including telecommuters and mobile workers, have an equivalent experience, which could weaken the pro-QoS argument.
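To make the edge-control question concrete: today’s standards-based answer to per-user network virtualization at the edge is essentially RADIUS-driven VLAN assignment after an 802.1X authentication, using the tunnel attributes described in RFC 3580. A hedged sketch in FreeRADIUS users-file syntax (the user name, password, and VLAN ID are hypothetical examples, not a recommendation):

```
# Sketch: dynamic VLAN assignment returned by a RADIUS server after a
# successful 802.1X authentication (RFC 3580 tunnel attributes).
# "engineering-user" and VLAN 110 are illustrative placeholders.
engineering-user  Cleartext-Password := "example"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-ID = "110"
```

Note how little the standard actually carries: a single VLAN ID per session. Anything richer, such as per-user QoS or more elaborate virtualization schemes, is exactly where the extensibility question above bites and where proprietary extensions tend to appear.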
But assuming that more advanced QoS features become more widely implemented in the enterprise, advanced capabilities for identifying and classifying traffic from edge devices, and automatically applying QoS policies, could be a compelling differentiator for vendors that offer them. Extending traffic classification and treatment automatically to the WAN edge and to WAN optimization appliances can also be compelling, which is yet more potential for proprietary interconnection.
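For a sense of what the manual, non-automated baseline looks like today, here is a hedged sketch of edge classification and queuing in Cisco IOS-style MQC syntax. The class names, DSCP choices, and bandwidth percentages are illustrative assumptions, not a recommended design:

```
! Sketch: classify voice and desktop video at the access edge and apply
! queuing policy outbound. Values below are placeholders for illustration.
class-map match-all VOICE
 match dscp ef
class-map match-all VIDEO
 match dscp af41
policy-map EDGE-QOS
 class VOICE
  priority percent 10
 class VIDEO
  bandwidth percent 25
 class class-default
  fair-queue
interface GigabitEthernet0/1
 service-policy output EDGE-QOS
```

Multiply this by every access port, WAN edge, and optimization appliance in the enterprise and the appeal of a vendor that classifies and applies policy automatically, proprietary or not, becomes obvious.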

Switch fabric virtualization for high availability is another example: technologies such as VRRP-E, VSS, Virtual Chassis, and others are still highly proprietary. Some of these are locally significant to a few devices, which can reduce vendor lock-in, but many of these technologies are expanding to encompass fabric virtualization across larger segments of the campus. Vendors such as Raptor Networks promote architectures where virtual fabrics extend across entire campus and multi-site topologies.
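The standards-based slice of this space is first-hop redundancy via plain VRRP (now RFC 5798); everything beyond it, from VRRP-E’s extensions to full chassis virtualization like VSS or Virtual Chassis, is vendor-specific. A hedged sketch of the standard piece in IOS-style syntax, with illustrative addresses and group numbers:

```
! Sketch: standards-based first-hop redundancy with VRRP.
! Addresses, VLAN, and group number are placeholders.
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 vrrp 10 ip 10.1.10.1
 vrrp 10 priority 120
```

Because the virtual gateway address (10.1.10.1 here) is all that hosts see, a standards-based VRRP pair can mix vendors; the moment the design moves to a proprietary virtualized fabric, that dual-sourcing option disappears.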

I have been a long-time believer that the most significant disruption in campus infrastructure could be drastic simplification in management of infrastructure. There are numerous disparate management technologies that, when combined, could revolutionize the way that campus networks are deployed and managed, but today they are highly fragmented, complicated to implement, and expensive. Regardless of the specific technology, I think there is a lot of room for innovation in how networks are deployed and managed, and no great solution is being offered today. This innovation is occurring in data center infrastructure management, but the management paradigms being developed within the data center will need to extend and adapt to encompass the entire network, and in the process many valuable proprietary approaches could emerge.

The argument for homogeneous infrastructure is also compelling: user and traffic management, classification, and treatment need to extend across multiple different classes of devices, and the immaturity of technology and standards in this area leaves a lot of room for vendors that offer homogeneity to deliver compelling benefits. Enterprises and standards bodies need to focus more on standards development and adoption in these areas.