I met a client a while back who told me “I’m just getting back into working with Forrester again. Years ago, like 15 years ago, I used to have these great, animated conversations with an analyst there. John McCarthy. Do you know him?” This man remembered 15-year-old conversations. And having been in my share of meetings with John over the years, I know why. He is one of my best friends, a great mentor, and an amazing analyst. His mind works so quickly, and he is knowledgeable about so many topics, that it’s almost impossible to keep up with him intellectually.
Why am I blogging about this story? Because this year John McCarthy celebrated his 25th anniversary with Forrester. He is in fact the first employee that George Colony hired. In an environment where people like to talk about rock star analysts, John McCarthy is John, Paul, George, and Ringo rolled into one (with good measures of Elvis and Johnny Cash too). He’s been on TV many times, has been the key spokesperson at his share of press conferences (notice the above photo of the Indian media interviewing him during a recent trip to that country), has been featured on the front page of the Wall Street Journal with one of those rare sketched portraits, and has even been quoted on The West Wing. I’m not kidding – years ago John wrote a major piece of research titled “3.3 Million Jobs Go Offshore,” and a few months later it was referenced in the TV show. That’s influence.
John’s been an important part of the research and insight we’ve provided to Sourcing & Vendor Management clients over the years so I wanted to celebrate his anniversary with all of you in the community. There are a few ways we’re doing this and we encourage you to contribute too.
Cash-starved. Fast-paced. Understaffed. Late nights. T-shirts. Jeans.
These descriptors are just as relevant to emerging tech startups as they are to the typical enterprise IT infrastructure and operations (I&O) department. And to improve customer focus and develop new skills, I&O professionals should apply a “startup” mentality.
A few weeks ago, I had the opportunity to spend time with Locately, a four-person Boston-based startup putting a unique spin on customer insights and analytics: location. By having consumers opt in to Locately’s mobile application, media companies and brands can understand how their customers spend their time and where they go. Layered with other contextual information – such as purchases, time, and property identifiers (e.g., store names, train stops) – marketers and strategists can drive revenues and awareness, for example, by optimizing their marketing and advertising tactics or retail store placement.
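To make the layering idea concrete, here is a minimal, purely illustrative sketch (the place names and coordinates are hypothetical, not Locately data) that tags an opted-in consumer’s location ping with the nearest known place:

```python
from math import hypot

# Hypothetical catalog of known places (store names, train stops)
PLACES = {
    "Downtown Crossing stop": (42.3555, -71.0605),
    "Back Bay coffee shop":   (42.3503, -71.0810),
}

def nearest_place(lat, lon):
    """Tag a location ping with the closest known place.

    Uses a flat-earth distance approximation, which is adequate
    for the short distances in this toy example.
    """
    return min(PLACES, key=lambda name: hypot(PLACES[name][0] - lat,
                                              PLACES[name][1] - lon))

# An opted-in consumer's ping, enriched with place context
ping = {"lat": 42.3550, "lon": -71.0600}
print(nearest_place(ping["lat"], ping["lon"]))  # Downtown Crossing stop
```

A real pipeline would of course use proper geodesic distance and a spatial index, but the principle – joining raw coordinates against a catalog of named properties – is the same.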
The purpose of my visit to Locately was not to write this blog post, at least not initially. It was to give the team of five Research Associates that I manage exposure to a different type of technology organization than they usually have access to – the emerging tech startup. Throughout our discussion with Locately, it struck me that I&O organizations share a number of similarities with startups. In particular, here are two entrepreneurial characteristics that I&O professionals should embody in their own organizations:
Two months ago, we announced our upcoming Forrester Forrsights Software Survey, Q4 2010. Now the data is back from more than 2,400 respondents in North America and Europe and provides us with deep and sometimes surprising insights into the software market dynamics of today and the next 24 months.
We’d like to give you a sneak preview of interesting results around some of the most important trends in the software market: cloud computing, integrated information technology, business intelligence, mobile strategy, and overall software budgets and buying preferences.
Companies Start To Invest More Into Innovation In 2011
After the recent recession, companies are starting to invest more in 2011, with 12% and 22% of companies planning to increase their software budgets by more than 10% or between 5% and 10%, respectively. At the same time, companies will invest a significant part of the additional budget in new solutions. While 50% of total software budgets are still going into software operations and maintenance (Figure 1), this number has dropped significantly from 55% in 2010; spending on new software licenses will accordingly increase from 23% to 26%, and custom-development budgets from 23% to 24%, in 2011.
Cloud Computing Is Getting Serious
In this year’s survey, we took a much deeper look into companies’ strategies and plans around cloud computing, going beyond simple adoption numbers. We tested to what extent cloud computing is making its way from complementary services into business-critical processes, replacing core applications and moving sensitive data into public clouds.
In the past few days, almost every conversation I have had with a CISO has somehow stumbled onto the topic of the data breach at the US Department of Defense (DoD) and subsequent release of that information through WikiLeaks. Many CISOs have told us that their executives are asking for reassurances that this type of large-scale data disclosure is not possible in their organization. Some executives have even asked the security team to provide presentations educating management on their existing security controls against similar attacks. Responding to these questions is tricky: “It’s like treading on thin ice,” commented one CISO. If you tell them everything is under control, you may create a false sense of security. If you tell them that such an incident is very likely to happen within their organization, it may be a career-limiting move.
I would recommend giving the executives a dose of reality. I do many security assessments for our clients and often find that organizations rely too heavily on the technology and infrastructure protections they have in place. Today’s reality is very different. We often operate in a global context, with large and complex IT environments that make it hard to monitor and track data, and we share a tremendous amount of sensitive information with business partners and third parties. The US government faced all of these realities as well, and they probably all contributed to the circumstances that led to the disclosure of the data.
As many of you try to extract the lessons learned from this episode, here is my take on it: it is not the failure of a single security control but a series of preventative and detective lapses.
Failure of preventative controls: Governance, Oversight and Access Control
For the second year in a row, Forrester Research has targeted master data management (MDM) as one of the highest-impact technologies that enterprise architects must keep an eye on. Forrester Vice President and Principal Analyst Gene Leganza published “The Top 15 Technology Trends EA Should Watch: 2011 To 2013” research in October, and Gene smartly positions MDM along with next-gen business intelligence, advanced text and social analytics, and information-as-a-service integration architectures as key enablers to deliver what Forrester is calling “process-centric data and intelligence”.
Data governance is not – and should never have been – about the data. High-quality and trustworthy data sitting in some repository somewhere does not in fact increase revenue, reduce risk, improve operational efficiencies, or strategically differentiate any organization from its competitors. It’s only when this trusted data can be delivered and consumed within the most critical business processes and decisions that run your business that these business outcomes can become reality. So what is data governance all about? It’s all about business process, of course.
Just posted an OpEd piece on IT's role in supporting the Splinternet. The Splinternet is a lot like the Internet except that it's fragmented by devices and passwords (and media formats and screen sizes and location). Customers don't get a single experience across mobile, social, and Web channels today. But they need to. Marketing is scrambling to give customers the mobile apps and social engagement they desire, scrambling to overcome the Splinternet. But marketing can't do it alone.
The most digitally advanced firms and organizations on the planet already realize that they need a whole-company response – one that includes all of IT as well as customer service, sales, and product development, supported by finance, legal, and ops – and are investing to deal with the Splinternet. (ESPN, NPR, Amazon, Google, and Bank of America come to mind.)
I won't repeat the article here, but I will point out that IT has a choice to make. It starts with a logic argument:
Customers expect a single experience across the Web, mobile, and social channels.
IT is the only part of the organization that can stitch together all of the systems across all of the channels to deliver that single experience.
Therefore, IT needs to step up and confront the challenges and opportunities presented by the Splinternet.
Therefore, IT must work even more closely with marketing, sales, customer service, and product development.
I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, it is the only successful offering (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger symmetric multiprocessing (SMP) system.
Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.
Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.
Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, a couple of dozen of them publicly referenced. All in all, that is probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4 – 8 systems, which is well short of their theoretical limits.
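One simple way to read scalability claims like these is to convert measured speedups into parallel efficiency, where 1.0 means perfect linear scaling. The numbers below are hypothetical, not ScaleMP benchmark figures:

```python
# Hypothetical benchmark speedups for an aggregated SMP,
# keyed by node count (illustrative only, not ScaleMP data)
speedups = {1: 1.0, 2: 1.9, 4: 3.6, 8: 6.4}

for nodes, speedup in sorted(speedups.items()):
    efficiency = speedup / nodes  # 1.0 would be perfect linear scaling
    print(f"{nodes} nodes: speedup {speedup:.1f}, efficiency {efficiency:.0%}")
```

Efficiency that degrades gracefully as nodes are added (rather than falling off a cliff) is what "good scalability" looks like in practice, and it is worth asking any vendor for this curve out to their claimed limits.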
My quick take is that the software works, and bears investigation if you have an application that:
Either is certified to run with ScaleMP (not many are) or is one whose code you control,
Has memory reference patterns that you understand, and
It’s rumored that the Ford Model T’s track dimension (the distance between the wheels of the same axle) could be traced from the Conestoga wagon to the Roman chariot by the ruts they created. Roman roads forced European coachbuilders to adapt their wagons to the Roman chariot track, a measurement they carried over when building wagons in America in the 19th and early 20th centuries. It’s said that Ford had no choice but to adapt his cars to the rural environment created by these wagons. This cycle was finally broken by paving the roads and freeing the car from the chariot legacy.
IT has also carried over a long legacy of habits and processes that contrast with the advanced technology that it uses. While many IT organizations are happy to manage 20 servers per administrator, some Internet service providers are managing 1 or 2 million servers and achieving ratios of 1 administrator per 2,000 servers. The problem is not how to use the cloud to gain 80% savings in data center costs; the problem is how to multiply IT organizations’ productivity by a factor of 100. In other words, don’t try the Model T approach of adapting the car to the old roads; think about building new roads so you can take full advantage of the new technology.
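The factor of 100 is simple arithmetic on the two administrator ratios cited above:

```python
enterprise = 20   # servers per administrator in a typical I&O shop
web_scale = 2000  # servers per administrator at some Internet service providers

# The productivity gap between the two operating models
print(f"Productivity factor: {web_scale // enterprise}x")  # Productivity factor: 100x
```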
Gains in productivity come from technology improvements and economy of scale. The economy of scale is what the cloud is all about: cookie cutter servers using virtualization as a computing platform, for example. The technology advancement that paves the road to economy of scale is automation. Automation is what will abstract diversity and mask the management differences between proprietary and commodity platforms and eventually make the economy of scale possible.
Having just finished the dynamic case management Forrester Wave™ — it will probably appear in mid-January — I was struck by the variation in approaches between the vendors, especially how they represent the organization and the variety of wrinkles associated with work assignment. This was not so much about any individual case management vendor; it became apparent only when you looked across the products. And that got me thinking and discussing with colleagues, customers, and vendors about the challenges of realistically supporting the organization as it looks toward BPM generally. Of course, there are many different issues, but the one I want to focus on here is organizational structures, roles, skills, and responsibilities.
The central issue I want to highlight is one that many folks just do not see coming in their BPMS and dynamic case management implementations. Very often, there is only a loose concept of “role” within an organization. When the word “role” is used, it is usually equated to an existing job title (part of the organization structure), rather than responsibility (at least initially). It is further complicated by the fact that within a given job title, there are usually wide variations in the skills and expertise levels of those who work in that area. And while this is not a problem where people manually coordinate their work, when it comes to automating work routing (to the most appropriate person to deal with a given work item or case), there are often major complications.
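To make the routing complication concrete, here is a minimal, hypothetical sketch (not taken from any particular BPMS) that assigns a case to the worker whose skill levels best cover what the case requires, rather than treating everyone with the same job title as interchangeable:

```python
# Hypothetical workers: identical job title, very different skill levels (0-5)
workers = {
    "Alice": {"title": "Claims Adjuster", "skills": {"fraud": 5, "auto": 2}},
    "Bob":   {"title": "Claims Adjuster", "skills": {"fraud": 1, "auto": 4}},
}

def route_case(required_skills):
    """Pick the worker whose skills best cover the case's needs.

    Title-based routing would treat Alice and Bob as interchangeable;
    skill-based routing does not.
    """
    def coverage(worker):
        skills = workers[worker]["skills"]
        # Credit each required skill only up to the level the case needs
        return sum(min(skills.get(s, 0), level)
                   for s, level in required_skills.items())
    return max(workers, key=coverage)

print(route_case({"fraud": 4}))  # Alice
print(route_case({"auto": 3}))   # Bob
```

Even this toy version shows why a "role" modeled as a job title is not enough: the same title routes to different people once the case's actual skill requirements are considered.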
What is the definition of an "application"? We are "applications development and delivery professionals" - surely we have this question nailed, don't we? The question keeps coming up in different contexts, and since there are many potential opinions, a blog is the perfect place to spur debate. Here are some (simplistic) questions to generate debate:
Is a Web page an application?
If not, how many Web pages does it take until I consider it an application - 10, 100, 1,000?
Does size matter? (Please behave yourselves with this one.)
Is the size of the code base a pertinent factor?
What about SharePoint sites, Access databases, and spreadsheets? Are they applications?
Where do COTS and packaged apps fit?
Does the technology I use affect the definition?
If I use a scripting language for a quick-and-dirty task, is that an application?
Does SOA erode the definition of an application?
Do we cease thinking about applications as entities and think about them more as containers that hold collections of SOA services?
How does open source affect the definition?
How does my role affect my perception of an application?
Do developers and users use similar definitions?
I have my opinions - in fact, I just finished a draft piece of research on the topic that will be published in January - but what are your opinions?