A few months ago, I blogged about testing quality@speed in the same way that F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It evaluates how well nine vendors support Agile development and continuous delivery teams when it comes to continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend “the race” and see the leaders.
The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi; 2) test case design and automation tools like Grid-Tools Agile Designer; and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged-apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:
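API-level checks exercise a service's contract directly rather than driving its GUI, which is part of why they scale better for continuous testing. As a minimal sketch of the idea (using only Python's standard library, with an invented `/status` endpoint standing in for a real application), an API test can start the service, call it, and assert on the response:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A stub service standing in for the application under test.
# The /status endpoint and its payload are invented for illustration.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def run_api_test():
    # Port 0 lets the OS pick a free port.
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    try:
        # The test talks to the API contract directly; no GUI is involved.
        with urlopen(f"http://127.0.0.1:{port}/status") as resp:
            assert resp.status == 200
            payload = json.loads(resp.read())
            assert payload["status"] == "ok"
        return "pass"
    finally:
        server.shutdown()

print(run_api_test())
```

Because nothing here depends on rendered screens, a check like this runs in milliseconds and survives UI redesigns, which is exactly the property "beyond GUI" testing is after.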
Formula One has gotten us all used to amazing speed. In as little as three seconds, F1 pit teams replace all four wheels on a car and even load in dozens of liters of fuel. Pit stops are no longer an impediment to success in F1 — but they can be differentiating to the point where teams that are good at them win and those that aren’t lose.
It turns out that pit stops not only affect speed; they also maintain and improve quality. In fact, prestigious teams like Ferrari, Mercedes-Benz, and Red Bull use pit stops to (usually!) prevent bad things from happening to their cars. In other words, pit stops are now a core component of any F1 racing strategy; they enhance speed with quality. But F1 teams also continuously test the condition of their cars and the external conditions that might influence the race.
My question: Why can’t we do the same with software delivery? Can fast testing pit stops help? Today, in the age of the customer, delivery teams face a challenge like none before: a business need for unprecedented speed with quality — quality@speed. Release cycle times are plummeting from years to months, weeks, or even seconds — as companies like Amazon, Netflix, and Google prove.
In the modern applications era, regardless of whether new software applications are being developed and delivered for mobile, tablets, or the web, the truly successful app-dev leaders will be those who focus on delivering constant value and incremental improvement to their business. That is a totally different perspective from “I need to keep control of my team’s productivity to make sure that we stick to our estimated costs, scope, and project dates.” Of course, the interest in cost is never going away, but app-dev leaders today have a great chance to enhance their conversations with executives and business stakeholders and add real value to them.
However, as my recently published research, Agile Metrics That Matter, shows, only some of the most advanced Agile teams use new progress, quality, efficiency, and value/benefits metrics (the last to a lesser degree), and some software development industry luminaries have worked and are still working on new methods to measure value in software development. But it’s still early days!
I’d like to summarize here some good, established practices for defining metrics that count, together with some of the new findings of the research:
What a strange summer this has been! From Boston to London to Paris to Turin, the weather has offered weekly and even daily reversals, with continuous change from sun to rain, from hot and damp to cool and crisp. I missed having a nice spring season. Even today, we went from 35º–38º Celsius (95º–100º Fahrenheit) to 22º Celsius (71º Fahrenheit) with a perfect storm! Such continuous and sudden change in climate is quite unusual in some of these countries, certainly where the Azores Anticyclone usually dominates from mid-to-late June to mid-to-late August, offering a stable summer. How many times have you had to change plans because you discovered the weather was about to change?!
You might be thinking, "What does this have to do with this AD&D blog?" It’s about change! I am wondering if, in our daily lives, getting used to unexpected conditions and having to handle continuous change favors a mindset where change is just something we have to deal with and not fight. A new mindset very much needed given the change we see ahead in how we develop, test, and deploy software!
My focus in this blog is testing, although the first change we need to get used to is that we can’t talk any longer about testing in an isolated fashion! Testing is getting more and more interconnected in a continuous feedback loop with development and deployment. (See my colleague Kurt Bittner's report on continuous delivery; I could not agree more with what Kurt says there!)
I just finished my new report on the Agile testing tools landscape. I’ll point Forrester readers to it as soon as it publishes. But there are a few things that have struck me since I took over the software quality and testing research coverage at Forrester, and I would like to share them with you in this preview of the findings of the testing tools landscape doc.
My research focus area was initially on software development life cycles (SDLCs), with a main focus on Agile and Lean. In fact, my main contribution in the past 12 months has been to the Forrester Agile and Lean playbook, where all my testing research has also focused. Among other reasons, I took the testing research area because testing was becoming more and more a discipline for software developers. So it made sense for me to extend my software development research focus with testing. But I was not sure how deeply testing was really going to integrate with development. My concern was that I’d have to spend too much time on traditional testing standards, processes, and practices and too little on new and more advanced development and testing practices. After 12 months, I am happy to say that it was the right bet! My recently published research shows the shift testing is making, and so does the testing tool landscape document, and here is why:
As I write this, I am in seat 1A of United flight 1607 from Philly to Houston. Playing on the screen in front of me is CNBC. I make no secret of my disdain for much of the so-called "news media," so I won't launch into my usual rant here (there are some superb journalists out there, but Murrow and Cronkite must be rolling in their graves!). I am bristling over the coverage right now that is focused on the 787's latest woes. As usual, the talking heads are clueless and painting a doomsday scenario for Boeing! It's a bunch of finance people who don't understand the engineering realities. They're smart bean counters, but not engineers. I am an old engineer, so let me shed light on what the Wall Street mouths don't know. There is an important lesson here for I&O leaders!
There is no doubt that Agile growth in the market is significant, and the growing daily number of inquiries I’ve been getting on Agile from end-user organizations in 2012 gives me the impression that many are moving from tactical to strategic adoption. Why’s that? Many reasons, and you can read about them in our focused research on Agile transformation on the Forrester website. But I’d like to summarize the top five reasons from my recent research, “Determine The Business And IT Impact Of Agile Development”:
Quality was the top reason — quite astonishing, but both the survey we ran across 205 Agile “professional adopters” and the interviews with some 21 organizations confirmed it. My read is that this is about functional quality.
Change was second to quality. We live in an era where innovation thrives and organizations are continuously developing new apps and projects. But your business does not necessarily know what it needs or wants upfront. The business really appreciates the due-course changes that Agile development allows, as they enable it to experiment and try out various options so it can become more confident about what is really right for the organization. Cutting-edge systems of engagement (mobile, web-facing, social media, etc.) require lots of change in due course.
There is growing evidence of a harmonic convergence of Infrastructure and Operations (I&O) with Security and it is hardly an accident. We often view them as separate worlds, but it’s obvious that they have more in common than they have differences. I live in the I&O team here at Forrester, but I get pulled into many discussions that would be classified as “security” topics. Examples include compliance analysis of configuration data and process discipline to prevent mistakes. Similarly, our Security analysts get pulled into process discussions and other topics that encroach into Operations territory. This is as it should be.
Some examples of where common DNA between I&O and Security can benefit you and your organization are:
Gain economic benefit by cross-pollinating skills, tools, and organizational entities
Improve service quality AND security with the same actions and strategies
Learn where the two SHOULD remain separate
Combine operational NOC and security SOC monitoring into a unified command center
Develop a plan and the economic and political justifications for intelligent combinations
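To make the unified command center idea above concrete at the data level, here is a minimal Python sketch of what combining NOC and SOC monitoring amounts to: merging each team's event feed into one time-ordered stream. The feeds, field names, and sample events are all invented for illustration, not taken from any real monitoring product.

```python
import heapq
from datetime import datetime

# Hypothetical event feeds, each already sorted by time within its team.
noc_events = [
    {"time": datetime(2013, 3, 1, 9, 0), "source": "NOC",
     "msg": "router CPU at 95%"},
    {"time": datetime(2013, 3, 1, 9, 7), "source": "NOC",
     "msg": "disk latency spike on db01"},
]
soc_events = [
    {"time": datetime(2013, 3, 1, 9, 3), "source": "SOC",
     "msg": "repeated failed logins on db01"},
]

def unified_timeline(*feeds):
    """Merge per-team event feeds into one time-ordered stream,
    the way a combined command center would view them."""
    return list(heapq.merge(*feeds, key=lambda e: e["time"]))

for event in unified_timeline(noc_events, soc_events):
    print(event["time"].time(), event["source"], event["msg"])
```

The payoff of the merge is context: an operator sees the failed logins on db01 between two operational anomalies on the same host, a correlation neither team would spot from its own console alone.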
My colleague and friend Mike Gualtieri wrote a really interesting blog the other day titled "Agile Software Is A Cop-Out; Here's What's Next." While I am not going to discuss the great conclusions and "next practices" of software (SW) development Mike suggests in that blog, I do want to focus on the assumption he makes about using working SW as a measurement of Agile.
I am currently researching that area and investigating how organizations actually measure the value of Agile SW development (business and IT value). And I am finding that, while organizations aim to deliver working SW, they also define value metrics to measure progress and much more:
Cycle time (e.g., from concept to production);
Business value (from number of times a feature is used by clients to impact on sales revenue, etc.);
Productivity metrics (such as burndown velocity, number of features deployed versus estimated); and last but not least
Quality metrics (such as defects per sprint/release, etc.).
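As an illustration of how a team might compute a few of these metrics side by side, here is a minimal Python sketch; the sprint records, field names, and numbers are hypothetical, not benchmarks from the research:

```python
from datetime import date

# Hypothetical sprint records -- illustrative values only.
sprints = [
    {"features_done": 8, "features_planned": 10, "defects": 4,
     "started": date(2013, 1, 7), "shipped": date(2013, 1, 21)},
    {"features_done": 9, "features_planned": 9, "defects": 2,
     "started": date(2013, 1, 21), "shipped": date(2013, 2, 4)},
]

def cycle_time_days(sprint):
    """Elapsed days from start of work to production."""
    return (sprint["shipped"] - sprint["started"]).days

def delivery_ratio(sprint):
    """Productivity: features deployed versus estimated."""
    return sprint["features_done"] / sprint["features_planned"]

def defects_per_feature(sprint):
    """Quality: defects per delivered feature."""
    return sprint["defects"] / sprint["features_done"]

for i, s in enumerate(sprints, start=1):
    print(f"Sprint {i}: cycle={cycle_time_days(s)}d, "
          f"delivered={delivery_ratio(s):.0%}, "
          f"defects/feature={defects_per_feature(s):.2f}")
```

The point is less the arithmetic than the habit: tracking these numbers per sprint turns "working software" from a binary claim into a trend a team and its business stakeholders can discuss.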
Like many movements before it, IT is rapidly evolving toward an industrial model. A process or profession becomes industrialized when it matures from an art form into a widespread, repeatable function with predictable results, accelerated by technology to achieve far higher levels of productivity. Results must be deterministic (trustworthy) and execution must be fast and nimble, two related but different qualities. Customer satisfaction need not be addressed directly because reliability and speed result in lower costs and higher satisfaction.
IT should learn from agriculture and manufacturing, which have perfected industrialization. In agriculture, productivity is orders of magnitude better than it once was. Genetic engineering made crops resistant to pests and environmental extremes such as droughts while simultaneously improving consistency. The industrialized evolution of farming means we can feed an expanding population with fewer farmers, and it has benefits in nearly every facet of agricultural production.
Manufacturing process improvements like the assembly line and just-in-time production, combined with automation and statistical quality control, ensure that we can make products faster and more consistently, at lower cost. Most of the products we use could not exist without an industrialized model.