As firms face growing competition for customers, they naturally seek to compare themselves with their peers and competitors, but there is a trap: Leaders no longer measure themselves against competitors. Instead, they compare their current performance with where they need to be in order to lead, because that is what the business expects.
In the past, it was common to benchmark organizational performance against “industry averages,” and being “above average” was considered good. Today, “above average” is no longer good enough; fickle customers demand exceptional experiences. Delivering those experiences requires exceptional performance; anything less means that another company may steal your customers.
Benchmarking is for followers, not leaders. Organizations want to be “unicorns,” like the Etsys, Netflixes, Googles, and Salesforces of the world. They don’t want to be losing “horses.”
Most benchmarking approaches target the IT of the past, not BT. Benchmark methodologies and data were created and heavily used when software delivery capability was considered a cost, not a differentiator. In business technology, software is a key differentiator, and BT leaders want to be the best and continuously improve.
Modern application delivery leaders realize that their primary goal is to deliver value to the business and its customers faster. Most of the modern, successful change frameworks that inspire developers and development shops, like Agile (in its various instantiations), Lean, and Lean Startup, put metrics and measurement at the center of improvement and feedback loops. Controlling and governing projects against vaguely estimated effort, precisely defined budgets, and unrealistic deadlines is no longer on the agenda of leading BT organizations.
The new objective of BT organizations is to draw a more direct line between the work that app dev teams do and the business outcomes it delivers. In this context, application development and delivery (AD&D) leaders need a new set of metrics to monitor and improve the value they deliver, based on feedback from business partners and customers.
Preproduction metrics. Leading organizations capture preproduction data on activities and milestones through productivity metrics, but they place a growing emphasis on the predictability of the continuous delivery pipeline, quality, and value.
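To make this concrete, here is a minimal sketch (in Python, with invented data and field names) of how a team might compute one such delivery metric, average lead time from work start to production deploy:

```python
from datetime import datetime

# Hypothetical work-item records; in practice these timestamps would come
# from an Agile planning or continuous delivery tool.
work_items = [
    {"started": "2015-10-01", "deployed": "2015-10-05"},
    {"started": "2015-10-02", "deployed": "2015-10-10"},
    {"started": "2015-10-06", "deployed": "2015-10-08"},
]

def average_lead_time_days(items):
    """Average number of days from starting a work item to deploying it."""
    fmt = "%Y-%m-%d"
    deltas = [
        (datetime.strptime(i["deployed"], fmt)
         - datetime.strptime(i["started"], fmt)).days
        for i in items
    ]
    return sum(deltas) / len(deltas)

print(average_lead_time_days(work_items))  # (4 + 8 + 2) / 3 days
```

Tracking a trend line for a metric like this, rather than a one-off benchmark against an industry average, is what lets a team see whether its delivery pipeline is actually improving.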
I am just back from CA World 2015 in Las Vegas, where everything was cool: from the weather, with unexpected but welcome temperatures in the low 50s; to the event theme, with a strong focus on Agile, DevOps, APIs, and security; to the Fall Out Boy and Sheryl Crow concerts.
As digital pervades all industries and software becomes the brand, CA Technologies, which has traditionally been stronger in the IT operations or “Ops” world, is making huge efforts to conquer the hearts and minds of developers in large-scale development shops, or the “Dev” world. No doubt CA has built a stronger DevOps portfolio in the last few years. Its goal is to partner in a larger industry ecosystem and be better positioned to serve the many organizations that are struggling to scale Agile and consistently build better applications faster. To make a stronger play on the Agile and Dev side of DevOps, CA made two brilliant acquisitions in 2015, which CEO Mike Gregoire highlighted in the opening session of CA World: Rally Software, a leader in Agile project management at scale, and Grid-Tools, a leader in Agile test data management and test optimization and automation.
With its revamped Dev strategy, CA aims to join the ranks of those large software and enterprise companies that have moved thousands of internal developers, testers, operations pros, and even managers to Agile and DevOps. This transformation will position CA to better serve current and future clients’ new need to develop more software at speed. While CA started this transition much later than competitors like IBM, Microsoft, HP, and other large software players (and even traditional end user enterprises), we recognize it’s not too late!
A few months ago, I blogged about testing quality@speed in the same way that F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It evaluates how nine vendors support Agile development and continuous delivery teams when it comes to continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend “the race” to see the leaders.
The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi, 2) test case design and automation tools like Grid-Tools Agile Designer, and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:
Software delivery leaders are under tremendous pressure to deliver software faster and better. As digital has a ripple effect across industries, the drive to improve software delivery processes and practices is becoming pervasive in enterprises of every kind.
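To illustrate what “beyond GUI” testing looks like in practice, here is a minimal Python sketch of an API-level contract check. The payload, field names, and validation rules are invented for illustration; a real suite would fetch the response over HTTP rather than use a canned string:

```python
import json

# Hypothetical JSON payload an API under test might return.
response_body = '{"status": "ok", "items": [{"id": 1, "price": 9.99}]}'

def validate_order_response(body: str) -> bool:
    """Contract check: status must be 'ok' and every item needs an id and a price."""
    data = json.loads(body)
    if data.get("status") != "ok":
        return False
    return all("id" in item and "price" in item
               for item in data.get("items", []))

print(validate_order_response(response_body))  # True
```

Because checks like this run below the user interface, they are faster and far less brittle than GUI scripts, which is why API-level approaches weigh so heavily in continuous testing.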
Agile development has been around for well over 10 years, DevOps is picking up where Agile left off [I see it differently: the DevOps push is just giving more momentum to "Agile all the way through" journeys and bringing more agility to the last mile of the delivery process too], and many organizations are still in the middle of an Agile transformation journey. So it's a good time to do another industry health check on Agile and run our bi-annual Forrester Agile adoption survey for 2015.
Who should take the survey? Anyone who is currently on an Agile adoption journey, from beginners to advanced practitioners, from small-scale to large-scale Agile adoption, who can share their company, division, or team experience. More specifically:
Software product vendors, system integrators, and consultants
End user companies in any vertical sector: automotive, engineering, energy, finance, government, retail, media, etc.
Located in any geography (we are adding this new demographic data point this year)
What’s taken artificial intelligence (AI) so long? We invented AI capabilities like first-order logical reasoning, natural-language processing, speech/voice/vision recognition, neural networks, machine-learning algorithms, and expert systems more than 30 years ago, but aside from a few marginal applications in business systems, AI hasn’t made much of a difference. The business doesn’t understand how or why it could make a difference; it thinks we can program anything, which is almost true. But there’s one thing we fail at programming: our own brain — we simply don’t know how it works.
What’s changed now? While some AI research still tries to simulate our brain or certain regions of it — and is frankly unlikely to deliver concrete results anytime soon — most of it now leverages a less human, but more effective, approach revolving around machine learning and smart integration with other AI capabilities.
What is machine learning? Simply put, it's sophisticated software algorithms that learn to do something on their own through repeated training on big data. In fact, big data is what's making the difference in machine learning, along with great improvements in many of the AI disciplines above (see the AI market overview that I coauthored with Mike Gualtieri and Michele Goetz on why AI is better and more consumable today). As a result, AI is undergoing a renaissance, developing new “cognitive” capabilities to help in our daily lives.
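As a minimal illustration of the idea (an algorithm acquiring behavior from labeled examples rather than from explicit rules), here is a toy perceptron in Python that learns the logical AND function; real machine learning operates on vastly larger datasets and models:

```python
# A minimal sketch of "learning from data": a perceptron adjusts its
# weights from labeled examples instead of being explicitly programmed.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # nudge weights toward the correct label
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function purely from examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in X]
print(predictions)  # [0, 0, 0, 1]
```

Nothing in the code states what AND means; the behavior emerges entirely from the training examples, which is the essential shift away from the hand-coded rules of classic AI.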
Formula One has gotten us all used to amazing speed. In as little as three seconds, F1 pit teams replace all four wheels on a car and even load in dozens of liters of fuel. Pit stops are no longer an impediment to success in F1 — but they can be differentiating to the point where teams that are good at it win and those that aren’t lose.
It turns out that pit stops not only affect speed; they also maintain and improve quality. In fact, prestigious teams like Ferrari, Mercedes-Benz, and Red Bull use pit stops to (usually!) prevent bad things from happening to their cars. In other words, pit stops are now a strategic component of any F1 racing strategy; they enhance speed with quality. But F1 teams also continuously test the condition of their cars and external conditions that might influence the race.
My question: Why can’t we do the same with software delivery? Can fast testing pit stops help? Today, in the age of the customer, delivery teams face a challenge like none before: a business need for unprecedented speed with quality — quality@speed. Release cycle times are plummeting from years to months, weeks, or even seconds — as companies like Amazon, Netflix, and Google prove.
The modern business world echoes with the sound of time-tested business models being shattered by digital upstarts, while the rate of disruption is accelerating. Organizations that will win in this world must hone their ability to deliver high-value experiences, based on high quality software with very short refresh cycles. Customers are driving this shift; every experience raises their expectations and their choices are no longer limited. Like trust, loyalty takes years to build and only a moment to lose. The threat is existential: Organizations need to drive innovation and disrupt their competitors or they will cease to exist.
I am just back from the first ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a software knowledge engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research was done on knowledge representation and on tools to support knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are undefined, nonlinear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
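To give a flavor of that era's approach, here is a toy rule-based recommender in Python; the rules, thresholds, and product names are all invented for illustration. Knowledge engineers encoded heuristics like these in LISP or Prolog rather than learning them from data:

```python
# A toy expert-system-style recommender: hand-coded if-then heuristics,
# much as a knowledge engineer of the 1980s would have encoded them.
# All rules, thresholds, and product names are invented.

def recommend_product(risk_tolerance, amount, horizon_years):
    """Pick a financial product from simple hand-written rules."""
    if risk_tolerance == "low" or horizon_years < 3:
        return "money-market fund"
    if risk_tolerance == "medium":
        if amount < 50_000:
            return "balanced bond/equity fund"
        return "diversified index portfolio"
    # high risk tolerance with a long horizon
    return "growth equity fund"

print(recommend_product("medium", 20_000, 10))  # balanced bond/equity fund
```

The limits of this style are obvious: every rule must be elicited from an expert and maintained by hand, which is exactly the bottleneck that today's data-driven machine learning sidesteps.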
Fast-forward 25 years, and AI is back with a new name: cognitive computing. An old friend of mine, who's never left the field, says, “AI has never really gone away, but it has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and a few very niche business areas. The change, however, is mostly about context: hardware and software scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.
Our bi-yearly Forrester Agile survey suggests that Agile development (or simply "Agile") continues to see consistent, strong adoption. However, the same survey data shows that only a small percentage of firms are outsourcing Agile application development due to a lack of experience with the development sourcing approaches and governance models needed to make it work. Successfully outsourcing Agile development, either fully or partially, involves redefining roles and responsibilities, change management processes, metrics and SLAs, service descriptions, and other contractual elements. Merely using traditional outsourcing language and practices risks jeopardizing the benefits of Agile. There is no single way of doing this right.