Red Hat Summit – Can you say OpenStack and Containers?

Richard Fichera

In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions, and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming and far too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center: containers and the inexorable march of OpenStack.

Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers across a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker, and Kubernetes is available to the community and competitors, it appears that Red Hat has taken at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
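As a rough illustration of the cluster-level container management that the Atomic Enterprise plus Kubernetes combination enables, a Kubernetes replication controller of this era let an operator declare a desired replica count and have the cluster keep that many containers running across nodes. The manifest below is a hypothetical minimal sketch (the names and image are illustrative, not taken from Red Hat’s materials):

```yaml
# Hypothetical example: ask Kubernetes to keep 3 replicas of a
# containerized web tier running somewhere on the cluster.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc            # illustrative name
spec:
  replicas: 3             # the cluster reschedules pods if a node fails
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx      # any Docker image works here
        ports:
        - containerPort: 80
```

If one node in the cluster dies, the controller restarts the lost replicas elsewhere, which is the failure isolation advantage the paragraph above refers to.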

Read more

Is ITIL Fit For Purpose For DevOps?

Amy DeMartine

I’ve been doing a lot of thinking about ITIL and whether or not it is fit for purpose for DevOps. The logic I keep hearing goes like this: you shouldn’t confuse the ITIL approach with the implementation; the ITIL approach is building blocks; these building blocks are easily applied to DevOps. I’m not convinced. First, ITIL is fundamentally time-bound. For example, ITIL v1 was primarily about applying mainframe disciplines to the emerging world of client/server, ITIL v2 was more about ensuring quality of output across complex operations environments, and ITIL v3 was more about consolidating established operations principles and shifting the focus to “how does IT contribute to business value?” Isn’t it a stretch to make best practices from previous waves of technology apply to DevOps, where infrastructure and operations professionals are not siloed but play an active part in delivering customer products and services alongside application developers? Second, ITIL zealots are convinced that these ITIL “best practices” are some kind of complex baking recipe: if all the steps are not followed to the letter, the end result will be a failure. This means that for many, the approach and the implementation of ITIL are tied together. This leads me to my question: Is ITIL fit for purpose for DevOps? To return to the analogy of building blocks, let’s use the ultimate building blocks: Legos. When I think about ITIL and service management, what most enterprises have implemented looks like this:

Read more

Houston We Have A Problem . . . In The Networking Community

Andre Kindness

I noticed an interesting phenomenon at Interop that sparked my theory on new network technologies: the maturity of a new network technology and its adoption correlate directly with the five stages of loss: 1) denial; 2) anger; 3) bargaining; 4) depression; and 5) acceptance. For example, Interop breakout sessions on cloud and bring your own device (BYOD) now mostly seemed to cover mainstream initiatives, compared with sessions on technologies such as software-defined networking (SDN) or network functions virtualization (NFV). In the mainstream initiative sessions, an aura of acceptance and even tinges of optimism reverberated through the room. Presenters spoke passionately and positively about their topics and reinforced the importance of:

  • Teamwork. Courtney Kissler, Vice President of E-Commerce & Store Technologies at Nordstrom, shared with the audience that the new world is made up of a team of business product managers and mobile app and networking professionals, to name just a few of the groups working together under the initiative. The mentality was that everyone is accountable and must work together as a team, helping each other roll out a great application that will benefit the business.
Read more

What is DevOps?

Amy DeMartine

Everywhere I turn, I hear about how some product or service is geared toward DevOps. It feels like the “cloud washing” we all just went through. “Cloud washing” continues to cause problems: even today it remains difficult to understand how products and services really affect our ability to create and manage clouds and applications in the cloud. This “DevOps washing” is causing the same problems, and it is becoming harder and harder to understand what DevOps really is and how it applies. I spent a breakfast presentation just talking about the definition of DevOps with a group of technology management folks for over an hour!


I’ve spent the past year being the Ops part of the Forrester DevOps story. We have been hard at work: we released a playbook called Modern Service Delivery (to match the Modern Application Delivery playbook coming from my Dev partner Kurt Bittner), and we are approaching the end of creating the foundation of the DevOps story, from planning to optimization. We define DevOps as:


“DevOps is a set of practices and cultural changes — supported by the right tools — that creates an automated software delivery pipeline, enabling organizations to win, serve, and retain customers.”


If you are serious about DevOps, you can cut through the noise of the “DevOps washing” and start with several practical tips to get you moving in the right direction:

Read more

Thoughts on Huawei 2015 – The Juggernaut Continues to Build

Richard Fichera

In late April I once again attended Huawei’s annual analyst meeting in Shenzhen, China. As with my last trip to this event, I approached it with a mix of dread and curiosity — dread because it is a long, tiring trip, and doing business in China when you are dependent on Google services is at best a delicate juggling act, and curiosity because Huawei is one of the most interesting and most poorly understood of the world’s large technology companies, especially here in North America.

I came away with reinforcement of my previous impression that Huawei is an unapologetically Chinese company. Not a global company that happens to be Chinese, as Lenovo presents itself, but a Chinese company that is intent upon, and is making progress toward, becoming a major global competitor in multiple arenas where it is not dominant now, while continuing to maximize its success in its strong domestic market. A year later, all the programs that were in motion at the end of 2014 are still in operation, and year-over-year results indicate that the overall momentum in the areas where Huawei is building its franchise, particularly mobile and enterprise IT, is, if anything, even better than promised.

Read more

Is Your Company A Place Where Employees Grow And Thrive, Or Wither And Leave?

David Johnson

As Forrester's Customer Experience Index (CX Index™) proves, the key determinant of a company's success is customer satisfaction. We can also prove that there is a strong correlation between employee satisfaction and customer perception and opinion, which is more pronounced for those employees who have a greater impact on your customers. To improve customer satisfaction, these employees have to feel that they can succeed. If they can’t succeed, they will burn out, and burned-out employees aren’t going to help your company win, serve, and retain customers. Forrester believes that you as an I&O leader can play the decisive role in customer satisfaction, if you choose to.

Read more

Take Investment Protection Off Your List Of Evaluation Criteria

Andre Kindness

Historically, I have not been a big fan of Interop, for a variety of reasons. However, Greg Ferro, George Stefanick, Ivan Pepelnjak, and Harvard Business Review IT Director Ken Griffin — to name a few — changed the breakout-session experience for me. During Greg’s data center network session, he said something priceless. He was giving some guidance around refresh cycles and told the audience not to worry about investment protection. It was so refreshing to hear. I wasn’t the only one nodding my head: Investment protection is hogwash.

Greg made the case that moving to a 3-year replacement cycle changes a customer’s buying and design criteria. Right now, customers have to guess what is going to happen over an 8- to 10-year cycle; this long-term guess creates the desire to protect that amount of spending with “investment protection” features. Weighing the flexibility, growth, and scalability of the network over that period, customers lean toward chassis switches, whose ports can cost 5 to 10 times as much as those of a pair of 1RU switches. By selecting chassis switches, Greg says, customers have doubled or tripled their project cost.

However, on a 3-year replacement cycle, customers can choose right-sized equipment (which is probably a 1RU switch), do less maintenance, and gain faster access to new features. In addition, the risk can be lower. For example, if a bad decision is made, the company is only stuck with that selection for 3 years. Or the company can choose to replace the network earlier and take a smaller hit on the capital expense line than if it had bought chassis switches.
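To make Greg’s arithmetic concrete, here is a minimal sketch in Python. The dollar figures are hypothetical placeholders, not from the article; only the ratios (chassis ports costing 5 to 10 times as much, and an 8- to 10-year versus 3-year horizon) come from the text above:

```python
# Hypothetical comparison of one long-lived chassis purchase versus
# repeated 3-year refreshes of 1RU switches. Dollar figures are
# illustrative placeholders; the 5x port-cost ratio is from the talk.
TOR_PAIR_COST = 10_000        # assumed cost of a pair of 1RU switches
CHASSIS_MULTIPLIER = 5        # chassis ports cost 5-10x as much
HORIZON_YEARS = 9             # middle of the 8- to 10-year guess
REFRESH_YEARS = 3

# One chassis purchase sized to survive the whole horizon.
chassis_cost = TOR_PAIR_COST * CHASSIS_MULTIPLIER

# Buy right-sized 1RU gear every 3 years instead.
refreshes = HORIZON_YEARS // REFRESH_YEARS        # 3 purchases
refresh_cost = TOR_PAIR_COST * refreshes

print(f"chassis up front: ${chassis_cost:,}")        # $50,000
print(f"three refreshes:  ${refresh_cost:,}")        # $30,000
print(f"ratio: {chassis_cost / refresh_cost:.2f}x")  # 1.67x
```

Even before counting maintenance contracts or the risk of a bad long-term guess, the chassis route costs more in this toy model — which is the heart of the “investment protection is hogwash” argument.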

Read more

Facebook and HP Show Different Visions for Web-scale

Richard Fichera

Recently we’ve had a chance to look again at two very conflicting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent OCP annual event in California.

From HP come its new Cloudline systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, after stripping away all the announcement verbiage, to compete with white-box vendors such as Quanta, Supermicro, and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data, and compute-intensive workloads, these systems will allow large installations to add capacity at costs 10% to 25% less than the equivalent capacity in HP’s standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP’s adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.

Read more

Intel Announces Xeon SOC – Seriously Raising the Bar for AMD and ARM Competition

Richard Fichera

Intel has made no secret of its development of the Xeon D, an SOC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power and lower-performance Atom line, and where emerging competition from ARM is more viable.

The new Xeon D-1500 is clear evidence that Intel “gets it” as far as platforms for hyperscale computing and other throughput-per-Watt- and density-sensitive workloads, both in the enterprise and in the cloud, are concerned. The D-1500 breaks new ground in several areas:

It is the first Xeon SOC, combining 4 or 8 Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.

(Source: Intel)

It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink should also deliver further gains in performance and performance per Watt across the entire line of entry-level through midrange server parts this year.

Why is this significant?

With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with power ratings of 20 W to 45 W, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins both for general-purpose servers and for embedded designs, notably storage systems.

Read more

Rack-Scale Architectures get Real with Intel RSA Introduction

Richard Fichera

What Is It?

We have been watching many variants on efficient packaging of servers for highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective, and all highly proprietary. With the advent of Facebook’s Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments of very standardized servers. But the IP for intelligently shared and managed power and cooling at the rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.

While Intel did not officially announce its own product nomenclature, Ericsson announced its “HDS 8000” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.

RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including:

  • Asset discovery

  • Switch setup and management

  • Power and cooling management across the servers within the rack

  • Server node management

Read more