Early this year, on January 15, I published our first research on testing for the Agile and Lean playbook. Connected to that research, my colleague Margo Visitacion and I also published a self-assessment testing toolkit. The toolkit helps app-dev and testing leaders understand how mature their current testing practices and organization are for Agile and Lean development.
The Agile Testing Self-Assessment Toolkit
So what are the necessary elements to assess Agile testing maturity? Striking a balance between simplicity and comprehensiveness, we focused on the following:
Testing team behavior. Some of the questions we ask here look at collaboration around testing among all roles in the Scrum teams. We also ask about unit testing: Is it a mandatory task for developers? Are all of the repetitive tests that are run over and over at each regression cycle automated?
Organization. In our earlier Agile testing research, we noticed a change in the way testing gets organized when Agile is being adopted. So here we look at the role test managers are playing: Are they focusing more on being coaches and change agents to accelerate adoption of the new Agile testing practices? Or are managers still operating in a command-and-control regime? Is the number of manual testers decreasing? Are testing centers of excellence (TCOEs) shifting to become testing practice centers of excellence (TPCOEs)?
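To make the unit-testing questions above concrete, here is a minimal sketch of the kind of repeatable, automated check a developer would run on every build. PriceFormatterTest and formatPrice are hypothetical names for illustration, not part of the toolkit:

```java
import java.util.Locale;

// A repeatable unit check: the same assertions run at every regression
// cycle, replacing a manual spot-check of the formatted output.
public class PriceFormatterTest {
    // Hypothetical production method under test.
    static String formatPrice(double price) {
        return String.format(Locale.US, "$%.2f", price);
    }

    public static void main(String[] args) {
        // Automated assertions fail loudly if behavior regresses.
        if (!formatPrice(19.5).equals("$19.50")) throw new AssertionError();
        if (!formatPrice(0).equals("$0.00")) throw new AssertionError();
        System.out.println("all checks passed");
    }
}
```

Because the checks are code, they cost nothing to re-run at each sprint's regression pass, which is what the self-assessment question is probing for.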
DevOps is a movement for developers and operations professionals that encourages more collaboration and release automation. Why? To keep up with the faster application delivery pace of Agile. In fact, with Agile, as development teams deliver faster and in shorter cycles, IT operations finds itself unprepared to keep up with the new pace. For operations teams, managing a continuous stream of software delivery with traditional manual-based processes is Mission Impossible. Vendors have responded to DevOps requirements with more automation in their release management, delivery, and deployment tools. However, there is a key process that sits between development and operations that seems to have been given little attention: testing.
In fact, some key testing activities, like integration testing and end-to-end performance testing, are caught right in the middle of the handover between development and operations. In the Agile and Lean playbook, I’ve dedicated my latest research precisely to Agile testing, because testing, so often ignored at the outset, has become the bête noire of many Agile transformations.
More and more data is stored online by both consumers and businesses. The convenience of using services such as Dropbox, Box, Google Drive, Microsoft Live Skydrive, and SugarSync is indisputable. But, is it safe? All of the services certainly require a user password to access folders, and some of the services even encrypt the stored files. Dropbox reassures customers, "Other Dropbox users can't see your private files in Dropbox unless you deliberately invite them or put them in your Public folder."
The security measures employed by these file-synching and sharing services are all well and good, but they can be instantly, innocently neutered by a distracted programmer. Goodbye, privacy. All your personal files, customer lists, business plans, and top-secret product designs become available for all the world to see. How can this happen even though these services use sophisticated authentication and encryption technologies? The answer: a careless bug introduced in the code.
Below is some Java code I wrote for a fictitious file-sharing service called CloudCabinet to demonstrate how this can happen. Imagine a distracted programmer texting her girlfriend on her iPhone while cutting and pasting Java code. Even non-Java programmers should be able to find the error in the code below.
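The original CloudCabinet listing isn't reproduced here, but a minimal sketch of the kind of one-character mistake described might look like the following. The canAccess method and its parameters are illustrative assumptions, not the author's actual code:

```java
// Hypothetical reconstruction of a one-character copy-paste bug that
// silently makes every file public.
public class CloudCabinet {
    // Intended behavior: grant access only to public files
    // or to the file's owner.
    static boolean canAccess(boolean isPublic, String owner, String user) {
        // BUG: '=' assigns instead of comparing with '==', so the
        // condition is always true and every file is treated as public.
        if (isPublic = true) {
            return true;
        }
        return owner.equals(user);
    }

    public static void main(String[] args) {
        // A stranger can now read a private file:
        System.out.println(canAccess(false, "alice", "mallory")); // prints "true"
    }
}
```

All the authentication and encryption in the world cannot save you from that single missing equals sign.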
The essential shape of the enterprise marketing landscape hasn’t changed much over the years. In last week’s Revisiting The Enterprise Marketing Software Landscape, I dissect technologies into the four basic categories of marketing management, brand management, relationship marketing, and interactive marketing. Consumers are rapidly changing behaviors, and marketing as a practice is evolving dramatically, but the technologies that marketers buy continue to come in essentially the same containers.
Notice, however, all of the decision management systems employed across the marketing landscape. From interaction management to online testing to recommendations to contact optimization, marketers are using automated systems to make an increasing number of customer-facing decisions. Viewed from the perspective of those decisions, the landscape of marketing technologies is shifting under our feet.
So is it time for a new take – say, customer decision management (CDM) – on marketing technology?
Security threats develop and evolve with startling rapidity, with the attackers always seeking to stay one step ahead of the S&R professional. The agility of our aggressors is understandable; they do not have the same service-focused restrictions that most organizations have, and they seek to find and exploit individual weaknesses in the vast sea of interconnecting technology that is our computing infrastructure.
If we are to stand a chance of breaking even in this game, we have to learn our lessons and ensure that we don’t repeat the same mistakes over and over. Unfortunately, it is alarmingly common to see well-known vulnerabilities and weaknesses being baked right into new applications and systems – just as if the past five years had never happened!
A recent report released by Alex Hopkins of Context Information Security shines a light on the vulnerabilities they discovered while testing almost 600 pre-release web applications during 2011. The headlines for me were:
On average, the number of issues discovered per application is on the rise.
Two-thirds of web applications were affected by cross-site scripting (XSS).
Nearly one in five web applications were vulnerable to SQL injection.
It makes depressing reading, but I’m interested in why this situation is occurring:
Are S&R professionals simply not educating and guiding application developers?
Are application developers ignoring the training and education? Are we teaching them the wrong things or do we struggle to explain the threats from XSS and SQL injection?
Are our internal testing regimes failing, allowing flawed code to reach release candidate stage?
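Part of the answer may be that the vulnerable pattern is so easy to write. Here is a minimal sketch of why the SQL injection finding above keeps recurring; buildQuery is a hypothetical helper, not from the Context report:

```java
// Why SQL injection keeps reaching release candidates: concatenating
// user input into a query lets the attacker rewrite the query itself.
public class InjectionDemo {
    // Vulnerable: builds the query by string concatenation.
    static String buildQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        String attack = "' OR '1'='1";
        // The WHERE clause now matches every row in the table:
        System.out.println(buildQuery(attack));
        // SELECT * FROM users WHERE name = '' OR '1'='1'
    }
}
```

The standard remedy, passing user input as bound parameters via java.sql.PreparedStatement rather than concatenating it, is exactly the kind of lesson that training and internal testing regimes should be catching before release.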
Seriously. I recently spoke with a client who swears that software quality improved once they got rid of the QA team. Instead of making QA responsible for quality, they put the responsibility squarely on the backs of the developers producing the software. This seems to go against conventional wisdom about quality software and developers: Don't trust developers. Or, borrowing from Ronald Reagan, trust but verify.
This client is no slouch, either. The applications provide real-time market data for financial markets, and the client does more than 40 software releases per year. If the market data produced by an application were unavailable or inaccurate, then the financial market it serves would crumble. Availability and accuracy of information are absolute. This app can't go down, and it can't be wrong.
Why Does This Work?
The client said that this works because the developers know that they have 100% responsibility for the application. If it doesn't work, the developers can't say that "QA didn't catch the problem." There is no QA team to blame. The buck stops with the application development team. They better get it right, or heads will roll.
As British author Samuel Johnson famously put it, "The prospect of being hanged focuses the mind wonderfully."
Recently, I’ve been getting more inquiries about risk-based testing. Along with agile test methods and test estimation, test teams turning their attention to risk-based testing is another positive step toward integrating quality throughout the SDLC. Yes, I still see QA engineers having to put on their evangelist hats to educate their developer brothers and sisters that quality is more than just testing (don’t get me wrong, consistent unit and integration testing is a beautiful thing). However, any time business and technology partners think about impact and dependencies in their approach to a solid, workable application, they elevate quality to the next level.
Keep asking those questions about risk-based testing – and make sure that you’re covering all of the angles, including: