In the latter half of last year, I started researching mobile application testing tools. So far, my research has focused on functional testing, primarily mobile app front-end testing. As I began the research, it became clear that the automation capabilities testers needed to validate app UIs were there, but application development and delivery teams felt that device labs were too expensive to be practical. During the research for the Vendor Landscape: Front-End Mobile Testing Tools report, we expected device labs to be a differentiator among products, only to discover that most of the major mobile testing solutions provide them in one way or another. Vendors differ in the flexibility, configurability, and management of their device lab offerings, but if you’re delivering customer-facing mobile apps, you can do much of your testing on physical devices (our recommended method).
In earlier reports, we recommended that, because of the cost of on-device testing, development organizations focus their testing efforts on the most important aspects of their apps and let users find issues in the less popular areas. With most of the major mobile testing vendors now offering device labs, plus Amazon’s and Google’s entry into the device cloud space, competition will drive down costs and make on-device testing the more common option for mobile app testing. Microsoft’s acquisition of Xamarin also gives Microsoft a robust and capable device lab, stocked with a variety of Android and iOS devices, adding further competition to this space.
In 2014, Michael Facemire and Rowan Curran published a report entitled A Benchmark To Drive Mobile Test Quality. It covered how organizations had to adjust mobile app testing in a world with an overabundance of mobile devices and applications in constant enhancement mode (very frequent updates). As the new guy, I was asked to update the report, so I reviewed the existing material and researched what had changed in the market since the original was published. My update, Improving Mobile App Quality Testing, was published today.
Later this month, I will be completing reports on HTTP/2 as well as an update to Building High-Performance Mobile Experiences, a report by Jeffrey Hammond and Michael Facemire.
A few months ago, I blogged about testing quality@speed the way F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It evaluates how well nine vendors support Agile development and continuous delivery teams with continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend “the race” to see the leaders.
The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi, 2) test case design and automation tools like Grid-Tools Agile Designer, and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:
Formula One has gotten us all used to amazing speed. In as little as three seconds, F1 pit teams replace all four wheels on a car and even load in dozens of liters of fuel. Pit stops are no longer an impediment to success in F1; they are a differentiator, to the point where teams that execute them well win and those that don’t lose.
It turns out that pit stops not only affect speed; they also maintain and improve quality. In fact, prestigious teams like Ferrari, Mercedes-Benz, and Red Bull use pit stops to (usually!) prevent bad things from happening to their cars. In other words, pit stops are now a strategic component of F1 racing; they enhance speed with quality. But F1 teams also continuously test the condition of their cars and the external conditions that might influence the race.
My question: Why can’t we do the same with software delivery? Can fast testing pit stops help? Today, in the age of the customer, delivery teams face a challenge unlike any before: a business need for unprecedented speed with quality — quality@speed. Release cycle times are plummeting from years to months, weeks, or even seconds, as companies like Amazon, Netflix, and Google prove.