Tired of Playing Ping-Pong with Dev, QA and Ops?

By Jez Humble


Editor's Note: While ThoughtWorks Studios is affiliated with the IT consultancy ThoughtWorks, its aim is the development and marketing of automation tools that enable more Agile delivery through DevOps.

IT organizations are under ever-increasing pressure to deliver software faster and more reliably. On the one hand, businesses are being squeezed by faster-moving competition. On the other, IT often spends upwards of 70 percent of its budget on operations, much of it on maintaining mission-critical systems on heterogeneous and legacy platforms.

As businesses attempt to become more responsive to change, project teams that have successfully adopted an agile development approach are requesting releases at an ever-increasing frequency that IT operations simply cannot support without compromising quality and stability. Because of this, IT practitioners have begun to coalesce around a new approach to this problem, one that involves organizational change as well as new practices. This approach is known as DevOps.

At its heart, DevOps focuses on enabling rapid, reliable releases to users through stronger collaboration between everyone involved in the solution delivery lifecycle. One of the important results of this collaboration is the application of agile practices to the work of operations, in particular the aggressive automation of the build, deploy, test, and release processes for hardware and system environments.

In practice, DevOps means that owners, developers, testers and operations personnel collaborate on the evolution of systems throughout the service lifecycle. New releases are deployed to production-like environments (or even to production, depending on business needs) continuously throughout the development process. This requires operations people to work with developers from early on to automate provisioning, deployment and monitoring of environments and systems under development. Meanwhile testers work with developers -- throughout the delivery process -- to create comprehensive suites of automated regression tests that validate that every change to the system meets established business needs and can be deployed with minimal risk of defects or system failure.
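
As a small illustration of what one of those automated regression checks might look like, the sketch below uses Python's standard unittest module; the service URL, port and expected response are hypothetical placeholders rather than details from any particular system.

    # A minimal automated regression check, runnable with Python's standard
    # unittest module. The URL, port and expected payload are hypothetical.
    import json
    import unittest
    from urllib.request import urlopen

    class HealthCheckRegressionTest(unittest.TestCase):
        BASE_URL = "http://localhost:8080"  # hypothetical test environment

        def test_service_reports_healthy(self):
            # Every change must leave the service able to answer its health check.
            with urlopen(self.BASE_URL + "/health") as response:
                self.assertEqual(response.status, 200)
                self.assertEqual(json.load(response).get("status"), "ok")

    if __name__ == "__main__":
        unittest.main()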

While this approach has proven effective at improving the reliability and stability of both releases and operational environments, it can pose several challenges to organizations wishing to adopt it. First, it requires an organizational culture in which developers, testers and operations teams collaborate throughout all phases of the service lifecycle. Second, it requires the adoption of agile techniques by operations teams, such as aggressive automation, comprehensive environment configuration management and test-driven systems evolution. Finally, it involves the adoption of new tools such as cloud computing stacks, test automation frameworks and systems monitoring tools.


How DevOps enables effective risk management

Many of these changes can be controversial within an organization. For a start, they appear to fly in the face of established control concepts such as segregation of duties and least privilege. These controls have been adopted from the accounting world as a way to meet regulations and standards such as Sarbanes-Oxley and PCI, and are intended to reduce errors and fraud.

In the traditional view, these controls are applied manually and can obstruct the IT organization’s ability to respond to the business’s needs, owing to onerous review and approval processes and functional silos that do not collaborate on solutions. However, the DevOps approach to delivery can apply automated processes with controls that manage risk just as effectively, and thus meet regulatory goals, while still enabling frequent releases of new functionality to IT systems that deliver value to the business.

Automating the build, deploy, test, and release process means that changes to your systems can be checked in to your version control system and taken through to production in a completely automated fashion using a pattern called the deployment pipeline. The automated scripts for configuration and build of an environment become part of the code base, going through the same checks and balances applied to application source code (a technique known as “infrastructure as code”).
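
As a rough sketch of the idea, the following Python script models a deployment pipeline as a sequence of gated stages; the stage names and shell commands are illustrative assumptions, not a prescription for any particular toolchain.

    # A simplified deployment-pipeline sketch: each stage must succeed before the
    # next one runs, so a failure stops a change from progressing toward production.
    # The stage commands and environment names are illustrative assumptions.
    import subprocess
    import sys

    PIPELINE_STAGES = [
        ("commit", ["./build.sh"]),                        # compile and unit-test
        ("acceptance", ["./run_acceptance_tests.sh"]),     # automated regression suite
        ("deploy-staging", ["./deploy.sh", "staging"]),    # production-like environment
        ("deploy-production", ["./deploy.sh", "production"]),
    ]

    def run_pipeline():
        for name, command in PIPELINE_STAGES:
            print("Running stage:", name)
            if subprocess.run(command).returncode != 0:
                print("Stage '%s' failed; halting the pipeline." % name)
                sys.exit(1)
        print("Change promoted through every stage.")

    if __name__ == "__main__":
        run_pipeline()

Because the build and deployment scripts a pipeline like this calls are themselves kept in version control, they pass through the same review and approval gates as application code.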

Using this pattern, the configuration and build of environments is under the same controls that are applied to application code, such as approval to do the work, approval on the requirements, approval on test results and approval to deploy the system to production.

As “owners” of the technical side of the systems, the operations team works with the development team to improve those systems continuously, ensuring that every release can be deployed with minimal defects and no manual processes.


DevOps enables continuous delivery

In a recent Forrester report, “Five Ways to Streamline Release Management,” Jeffrey Hammond reveals that survey respondents “rated their release environments with an average score of 5.19 [out of 10]”. The root cause of this dissatisfaction is often that development, test and operations teams are separated and incentivized on metrics that put them into conflict.

It is common for developers to be measured on how quickly they complete features, but not on the quality of the code they deliver. Features can be “dev complete” yet buggy and undeployable. Testers are often measured on how many bugs they find, but not on the completeness and effectiveness of their tests. A release can pass every test that was run and still contain significant defects.

Operations teams are measured on the stability of the production environment, but not on their ability to support the rest of the IT service pipeline. They are usually so busy firefighting issues resulting from changes that they are unable to accept new releases and build new environments -- let alone work with developers to make sure the software is deployable and maintainable.

Thus, multiple barriers are created that prevent IT from releasing new functionality rapidly and reliably in response to business needs.

Many organizations have already discovered the benefits of having developers and testers work together, and demonstrated that removing the “checks and balances” that a separate testing organization supposedly provides actually leads to higher quality systems. The same mentality needs to extend out to operations. Instead of separating the functions involved in delivering software and requiring them to optimize locally, measure the cycle time from concept to cash and require the whole organization to optimize for this metric.


How do I implement DevOps?

Start thinking about your portfolio of strategic IT services as a set of products. These products have owners, customers and a team that manages, develops and operates them throughout their lifecycle. It’s instructive to examine Eric Ries’ work on lean startups to consider how products evolve over time.

Ries argues that teams should focus on rapidly creating a minimum viable product and then pivot over time, continuously delivering new functionality based on feedback from real customers. In order to deliver these products, teams need to be multi-disciplinary -- including developers, testers, operations people and managers -- with their composition changing over time to meet changing needs.

Along with other teams, the operations group also needs to change the way it works.

In a DevOps world, the operations team provides infrastructure as a service to product teams, such as the ability to spin up production-like environments on demand for testing and release purposes, and to manage them programmatically. Operations is still responsible for sourcing hardware, monitoring performance and managing capacity and continuity for the infrastructure it provides -- although not necessarily for the systems that run on it, which belong to product teams. By applying DevOps practices, it becomes much easier for the ops team to stay ahead of requests and improve the services they provide. It was Amazon’s focus on this imperative that led to the creation of a system so compelling that Amazon was able to offer it externally in the form of Amazon Web Services.
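
To give a flavour of what managing environments programmatically can mean, here is a hedged Python sketch that requests a production-like environment from a hypothetical internal provisioning API; the endpoint, payload and response shape are assumptions, not a real service.

    # Requesting a production-like environment through code rather than a ticket.
    # The provisioning endpoint and payload below are hypothetical; a real ops
    # team would expose its own API or tooling for this.
    import json
    from urllib.request import Request, urlopen

    def provision_environment(name, template="production-like"):
        payload = json.dumps({"name": name, "template": template}).encode("utf-8")
        request = Request(
            "https://ops.example.internal/api/environments",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urlopen(request) as response:
            return json.load(response)  # e.g. {"name": ..., "status": "provisioning"}

    if __name__ == "__main__":
        print(provision_environment("release-1.4-acceptance"))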

Applying a DevOps approach does not need to be a big-bang transformation. Start small by having operations people attend project inceptions, retrospectives and showcases. Get developers to rotate through operations departments and experience the pain of trying to keep systems running. Put screens with operational dashboards up in development rooms. Map and measure the value stream from requirements to production to discover the bottlenecks in your delivery process. Write a few automated scripts for environment builds and deployment in low-risk areas (development and testing environments) to get your feet wet.
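
One way to get your feet wet is a small, idempotent environment-build script like the Python sketch below; the package list and configuration path are illustrative assumptions, and it is intended for a disposable development or test machine only.

    # A small, idempotent environment-build script for a low-risk (dev/test)
    # machine: safe to run repeatedly. Package names and paths are illustrative.
    import shutil
    import subprocess
    from pathlib import Path

    REQUIRED_PACKAGES = ["nginx", "git"]        # illustrative package list
    APP_CONFIG = Path("/etc/myapp/app.conf")    # hypothetical configuration file

    def ensure_packages():
        for package in REQUIRED_PACKAGES:
            # Install only if the command is not already available on the PATH.
            if shutil.which(package) is None:
                subprocess.run(["apt-get", "install", "-y", package], check=True)

    def ensure_config():
        if not APP_CONFIG.exists():
            APP_CONFIG.parent.mkdir(parents=True, exist_ok=True)
            APP_CONFIG.write_text("environment = test\nlog_level = info\n")

    if __name__ == "__main__":
        ensure_packages()
        ensure_config()

Keeping scripts like this in version control alongside the application is the first small step toward infrastructure as code.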

Most importantly, make sure that operations teams have the tools and slack they need to stop spending all their time fire fighting, and focus on strategic work such as using root cause analysis to drive continuous improvement.

Jez Humble would like to thank Joanne Molesky and Jim Highsmith for feedback on the development of this article.

Jez Humble is a principal for ThoughtWorks Studios, the products division of Agile consultancy ThoughtWorks. He is responsible for helping enterprise organizations deliver quality software faster and more reliably through automation of the delivery process and better collaboration between development, testing and operations. He serves as product manager for Go, the company’s Agile release management product, and is the author of the highly acclaimed book Continuous Delivery. You can reach Jez with questions at jez@thoughtworks.com.


Further reading

Bottcher, Evan, “Projects are Evil and Must Be Destroyed”

Haight, Cameron, “DevOps: Born in the Cloud and Coming to the Enterprise”, Gartner Research 2010

Hammond, Jeffrey, “Five Ways To Streamline Release Management”, Forrester Research 2011

Humble, Jez, and David Farley, Continuous Delivery, Addison-Wesley 2010

Humble, Jez, “Continuous Delivery: The Value Proposition”

Poppendieck, Mary, and Tom Poppendieck, Leading Lean Software Development, Addison-Wesley 2009

Ries, Eric, “The Lean Startup”

The Institute of Internal Auditors