The Relationship Between Dev-Ops And Continuous Delivery: A Conversation With Jez Humble Of ThoughtWorks

If you've been reading the research I've written over the past year, you know that I'm a fan of an application life-cycle management strategy that focuses on increasing development flow and supporting high-performance teams. You don't need to religiously implement all 22 CMMI process areas or deliver dozens of intermediate development artifacts, as some heavyweight processes advocate. Rather, there are a few important processes worth concentrating your time on. We wrote about change-aware continuous integration and just-in-time demand management in last year's Agile Development Management Tools Forrester Wave™. They are two of my favorite areas of focus and great places to invest, but once you have them working well, other areas will need your attention.

In my opinion, the next place to focus on flow is everything that happens post-build and preproduction. Most folks think of this as release management or configuration management, but I think there's a better term, one that emphasizes how quickly software changes move through both processes: continuous delivery. When you establish a process of continuous delivery, you'll find that your capacity to release changes increases, your null release cycle shrinks, and a larger proportion of the productivity gains from your Agile development efforts flows through into production.

When you're talking about continuous delivery, you can count on Jez Humble, Principal Consultant at ThoughtWorks, having an opinion. After all, he literally wrote the book. I recently had a chance to speak with Jez, in preparation for his talk at Forrester's Application Development & Delivery Forum (September 22-23 in Boston), about the relationship between continuous delivery and dev-ops, a topic Forrester has written about extensively, and sometimes with some controversy.

Q: The concepts behind continuous delivery seem to be getting a lot of press lately. Why?

I think a number of things have come together over the past couple of years. First, a number of startups have demonstrated that deploying multiple times a day and running stable, resilient services can actually be complementary rather than opposing goals. Second, there's increasing business pressure to deliver new functionality faster to an ever-increasing number of devices. Third, there have been huge advances in the tools available to automate the delivery process, in areas such as data center automation, cloud and virtualization, test automation, and database management. Finally, we have demonstrated that it is possible to apply the techniques described in Continuous Delivery to large organizations, even those that are constrained by regulation. In fact, the book is the outcome of many accumulated person-years of helping enterprises get better at reliably releasing high-quality software.

Q: Is continuous delivery related to the dev-ops movement? How so?

Absolutely. In any organization where there is a separate operations department, and especially where there is an independent QA or testing function, much of the pain in getting software delivered is caused by poor communication between these groups, exacerbated by an underlying cultural divide. Apps is measured on throughput, and ops is measured on stability. Testing gets it in the neck from both sides and, like release management, is often a political pawn in the fight between apps and ops. The point of dev-ops is that developers need to learn how to create high-quality, production-ready software, and ops needs to learn that Agile techniques are actually powerful tools for effective, low-risk change management. Ultimately, we're all trying to achieve the same thing - creating business value through software - but we need to get better at working together and focusing on that goal rather than trying to optimize our own domains. Unfortunately, many organizations aren't set up in a way that rewards that kind of thinking.

Q: How much time and effort does it take to implement continuous delivery before you see a decent payback?

In general, I am very wary of committing to any piece of work that doesn't aim to create a measurable change within a few months. Of course, in order to achieve that kind of goal, you have to be careful to limit the scope of what you address and, as Archimedes said, to find the right fulcrum. That usually means piloting it with a team that is doing something important, so that people care about what happens, but that has enough slack time to focus on things that might (mistakenly) be considered peripheral, such as implementing continuous integration. Continuous delivery encompasses a huge range of practices, from acceptance test automation to data center and database change management, but there are two places where it's very common to see painful bottlenecks in the delivery process. One is the provisioning of production-like testing environments for performing integration and acceptance testing. The other is the lengthy code integration phase caused by teams working on feature branches rather than practicing continuous integration, which means that everyone merges into trunk or mainline at least once a day and that the team prioritizes keeping mainline releasable.
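The gating structure behind a deployment pipeline can be sketched in a few lines. The stage names and checks below are hypothetical stand-ins for real build, test, and deploy steps; the point is only the mechanism: a release candidate must pass each stage before it reaches the next, and any failure stops it from going further.

```python
# A minimal sketch of a deployment pipeline, assuming hypothetical stage
# names and checks in place of real build/test/deploy automation.

def run_pipeline(candidate, stages):
    """Run a release candidate through the pipeline stages in order.

    Returns (passed, results), where results maps each executed stage
    name to whether it passed. Stops at the first failing stage.
    """
    results = {}
    for name, check in stages:
        ok = check(candidate)
        results[name] = ok
        if not ok:
            # Candidate is rejected; the team fixes forward and commits again.
            return False, results
    return True, results

# Hypothetical checks standing in for real automated gates.
stages = [
    ("commit",     lambda c: c["unit_tests_pass"]),
    ("acceptance", lambda c: c["acceptance_tests_pass"]),
    ("capacity",   lambda c: c["meets_capacity_targets"]),
]

good = {"unit_tests_pass": True, "acceptance_tests_pass": True,
        "meets_capacity_targets": True}
bad = {"unit_tests_pass": True, "acceptance_tests_pass": False,
       "meets_capacity_targets": True}

print(run_pipeline(good, stages)[0])  # True: releasable candidate
print(run_pipeline(bad, stages)[0])   # False: stopped at acceptance
```

The design point is that feedback is ordered cheapest-first: fast commit-stage checks run before the slower acceptance and capacity stages, so most bad candidates are rejected within minutes of a trunk commit.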

Two higher-level points to bear in mind: first, you have to create a measurable goal that has buy-in from your team - perhaps moving from releasing every three months to releasing every month. Second, implementing continuous delivery should ultimately be part of your existing continuous improvement process. Nobody is saying to drop everything you're doing and start up some kind of organizational "continuous delivery implementation project" or to create a new "dev-ops group" responsible for continuous delivery. That would totally miss the point.

Q: How much cultural change is needed in order to assure success?

That depends on where you're starting from. The most important thing is for people to care about how they do their work and to want to make things better (however good they may think they are already). Without a willingness to experiment (within limits), you are dead in the water. But given people who are both intellectually curious and disciplined about their methodology, I think it becomes quite easy to assure success, depending on how you define it. That might sound sneaky, but it isn't really - any kind of change that sticks proceeds in small, incremental steps driven by the scientific method: come up with a hypothesis, perform an experiment, look at the results, and work out what to do next - a loop sometimes known in management circles as the Deming cycle. This applies to cultural change as much as to any other kind of change. The key thing is that teams should meet regularly to come up with and evaluate hypotheses and experiments in the pursuit of getting better at what they do - a technique known as the retrospective - and that they should be given the power to perform those experiments. Success in this context means you learned something new and then acted on what you learned.

Finally, I want to say that it's important not to wait for change to be mandated from above, and that cultural change doesn't need to be Big Bang. Even something as simple as developers sitting down to eat with the DBAs can help make things better.