I am just back from CA World 2015 in Las Vegas, where everything was cool: from the weather, with unexpected but welcome temperatures in the low 50s; to the event theme, with a strong focus on Agile, DevOps, APIs, and security; to the Fall Out Boy and Sheryl Crow concerts.
As digital pervades all industries and software becomes the brand, CA Technologies, which has traditionally focused on the IT operations or "Ops" world, is making huge efforts to conquer the hearts and minds of developers in large-scale development shops, the "Dev" world. No doubt CA has been building a stronger DevOps portfolio in the last few years. Its goal is to partner in a larger industry ecosystem and be better positioned to serve the many organizations that are struggling to scale Agile and consistently build better applications faster. To make a stronger play on the Agile and Dev side of DevOps, CA made two brilliant acquisitions in 2015, which CEO Mike Gregoire highlighted in the opening session of CA World: Rally Software, a leader in Agile project management at scale, and Grid-Tools, a leader in Agile test data management and test optimization and automation.
With its revamped Dev strategy, CA aims to enter the Olympus of those large software and enterprise companies that have moved thousands of internal developers, testers, operations pros, and even managers to Agile and DevOps. This transformation will position CA to better serve current and future clients' new need to develop more software at speed. While CA started this transition much later than competitors like IBM, Microsoft, HP, and other large software players (and even traditional end user enterprises), we believe it's still in time to catch up!
A few months ago, I blogged about testing quality@speed in the same way that F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It evaluates how nine vendors support Agile development and continuous delivery teams when it comes to continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend "the race" and see the leaders.
The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi, 2) test case design and automation tools like Grid-Tools Agile Designer, and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:
Software delivery leaders are under tremendous pressure to deliver better software faster. As digital's ripple effect spreads, the drive to improve software delivery processes and practices is becoming pervasive across industries and enterprises.
Agile development has been around for well over 10 years; DevOps is picking up where Agile left off [I see it differently: the DevOps push is giving more momentum to "Agile all the way through" journeys and bringing more agility to the last mile of the delivery process too]; and many organizations are still in the middle of an Agile transformation journey. So it's a good time for another industry health check on Agile, which is why we are running our biennial Forrester Agile adoption survey for 2015.
Who should take the survey? Anyone who is currently on an Agile adoption journey, from beginners to advanced practitioners, and from small-scale to large-scale Agile adoption, who can share their company, division, or team experience. More specifically:
Software product vendors, system integrators, and consultants
End user companies in any vertical sector: automotive, engineering, energy, finance, government, retail, media, etc.
Located in any geography (we are adding this new demographic data point this year)
What’s taken artificial intelligence (AI) so long? We invented AI capabilities like first-order logical reasoning, natural-language processing, speech/voice/vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it thinks we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain — we simply don’t know how it works.
What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.
What is machine learning? Simply put: sophisticated software algorithms that learn to do something on their own through repeated training on big data. In fact, big data is what's making the difference in machine learning, along with great improvements in many of the above AI disciplines (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and more consumable today). As a result, AI is undergoing a renaissance, developing new "cognitive" capabilities to help in our daily lives.
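To make that definition concrete, here is a minimal sketch of machine learning in this sense: a toy perceptron that learns a classification rule from labeled examples through repeated training passes, instead of having the rule programmed explicitly. The feature names and data are entirely made up for illustration.

```python
# A toy perceptron: it "learns" a decision rule from labeled examples
# by repeatedly adjusting weights, rather than being explicitly programmed.
# The investor-profile data below is hypothetical.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from (feature_vector, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy training set: (risk_tolerance, years_to_invest) -> 1 = "stock fund"
samples = [(0.9, 20), (0.8, 15), (0.2, 3), (0.1, 2)]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
print(predict(w, b, (0.85, 18)))  # high tolerance, long horizon -> 1
```

With real machine learning, the same idea scales to millions of training examples and far richer models, which is exactly where big data makes the difference.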
Formula One has gotten us all used to amazing speed. In as little as three seconds, F1 pit crews replace all four wheels on a car (and, in the years when refueling was allowed, even loaded in dozens of liters of fuel). Pit stops are no longer an impediment to success in F1, but they can be differentiating to the point where teams that are good at them win and those that aren't lose.
It turns out that pit stops not only affect speed; they also maintain and improve quality. In fact, prestigious teams like Ferrari, Mercedes, and Red Bull use pit stops to (usually!) prevent bad things from happening to their cars. In other words, pit stops are now a strategic component of any F1 race strategy; they enhance speed with quality. But F1 teams also continuously test the condition of their cars and the external conditions that might influence the race.
My question: Why can’t we do the same with software delivery? Can fast testing pit stops help? Today, in the age of the customer, delivery teams face a challenge like none before: a business need for unprecedented speed with quality — quality@speed. Release cycle times are plummeting from years to months, weeks, or even seconds — as companies like Amazon, Netflix, and Google prove.
The modern business world echoes with the sound of time-tested business models being shattered by digital upstarts, while the rate of disruption is accelerating. Organizations that will win in this world must hone their ability to deliver high-value experiences, based on high quality software with very short refresh cycles. Customers are driving this shift; every experience raises their expectations and their choices are no longer limited. Like trust, loyalty takes years to build and only a moment to lose. The threat is existential: Organizations need to drive innovation and disrupt their competitors or they will cease to exist.
I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI): I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you were called a knowledge engineer, and you used symbolic programming (LISP), first-order logic programming (Prolog), or predicate calculus (MRS) to develop "intelligent" programs. Lots of research went into knowledge representation and into tools that supported knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are ill-defined, nonlinear, and complex. Deciding which financial product to buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
Fast-forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who's never left the field, says, "AI has never really gone away, but has undergone some major fundamental changes." Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is largely about context: hardware and software scale constraints are gone, and tons of data and knowledge are digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
Our biennial Forrester Agile survey suggests that Agile development (or simply "Agile") continues to see consistent, strong adoption. However, the same survey data shows that only a small percentage of firms are outsourcing Agile application development, due to a lack of experience with the sourcing approaches and governance models needed to make it work. Successfully outsourcing Agile development, either fully or partially, involves redefining roles and responsibilities, change management processes, metrics and SLAs, service descriptions, and other contractual elements. Merely reusing traditional outsourcing language and practices risks jeopardizing the benefits of Agile. There is no single way of doing this right.
When computers were invented 60 years ago, nobody would have thought that gazillions of 0s and 1s would soon rule the world. After all, that's all there is in any computer memory, be it a laptop, a mobile phone, or a supercomputer like Watson; if you could open memory up and visualize the smallest elementary units, you would "see" only an endless sequence of 0s and 1s.
Interestingly, that has not changed: computers still process 0s and 1s. What has changed is that we live in an age of digital disruption, an age where software applications increasingly run and rule our businesses. To be successful, those applications need to be engaging and entertaining so that consumers enjoy and are delighted by them; they have to be mobile and accessible anywhere, at any time; and they have to leverage tons of information, whether it comes from a database, a tweet, or Facebook.
I hear people talking about Agile 2.0 a lot. But when I look at what's happening in the application development and delivery space, I see that many organizations are just now starting to experience Agile's true benefits, and they're not yet leveraging those benefits completely or consistently. So let's stop talking about Agile 2.0 for a moment and instead digest and operationalize what we've learned so far. There's plenty to improve upon without inventing new practices and acronyms to add to the Agile transformation backlog!
What I see is that AD&D leaders want to understand how they can optimize their existing use of Agile practices like Scrum, XP, and Kanban; improve more advanced practices like TDD, continuous testing, continuous integration, and continuous delivery; and combine all of this with what they've learned over the years (including from waterfall). Their next key goal is to scale the whole thing up across their organizations to have a bigger, more consistent impact on the business. We fielded the 2013 version of our Global Agile Software Application Development Online Survey to find out how, and I present and analyze this data in my latest report. The survey addressed common questions that clients frequently ask me in inquiries and advisory sessions, such as:
How can we test in a fast-paced environment while maintaining or improving quality?
How can we improve our Agile sourcing patterns to work effectively with partners?