My colleagues and I have just completed yet another engagement with a large client — one of dozens recently — who was facing a "to be or not to be" decision: whether to move its BI platform and applications to the cloud. It's a very typical question our clients are asking these days, mainly for two reasons:
In many cases, their current on-premises BI solutions are too inflexible to support the business now, much less in the future.
The relative success of cloud-based CRM solutions (SFDC and others) may indicate that the cloud offers a better alternative.
These clients put these two statements together and make the reasonable assumption that cloud BI will solve many of the current BI challenges that cloud-based CRM solved. Reasonable? Yes. Correct? Not so fast — the only correct answer is “It depends.”
Let’s take a couple of steps back. First, let’s define applications or packaged solutions vs. platforms (because BI requires both).
Applications or packaged solutions
Subscribe to a solution, like CRM
Provide standard business functions to all customers (which makes them different from "hosting"; see below)
Are difficult to tailor to specific needs
Are usually used synonymously (but incorrectly; see below) with software-as-a-service (SaaS)
Platforms for building solutions
Subscribe to tools and resources to build solutions like CRM
Provide standard technical functions to developers
Contain limited, if any, business application functionality
Are usually labeled as either platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS).
It seems everyone's obsessed with Facebook's IPO right now. And while CMOs are beginning to understand the possibilities of Facebook and other social technologies to connect and engage with customers, many CIOs remain unclear on the value of Facebook.
A question many business executives ask is this: “What’s the value of having someone like your page?”
On its own, maybe not much. But the true potential lies in the ability to collect insights about the people who like brands, products, or services — be they your own or someone else's.
For example, the chart below shows the percentage of consumers by age group who have "liked" Pepsi or Coca-Cola. These data suggest Coca-Cola is significantly more popular with 17- to 28-year-olds than Pepsi, while Pepsi appears more popular with the 36-to-70 crowd. I pulled these data points directly from the Facebook likes of each of the brand pages using a free consumer tool from MicroStrategy called Wisdom. Using this tool, I can even tell that Coca-Cola fans are likely to also enjoy the odd Oreo cookie and bag of Pringles.
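For readers who want to reproduce this kind of breakdown from their own fan data, here is a minimal sketch. It assumes you have already exported fan records to a CSV with hypothetical "brand" and "age" columns; it does not use the actual schema of Wisdom or the Facebook Graph API, and it simply computes each brand's fan share by age bracket.

# Sketch: share of each brand's fans by age bracket, from a hypothetical CSV
# export (columns "brand" and "age"). Illustrative only; not the schema of
# MicroStrategy Wisdom or the Facebook Graph API.
import pandas as pd

fans = pd.read_csv("brand_fans.csv")   # hypothetical export: one row per fan

bins = [17, 28, 35, 70]                # age brackets discussed above
labels = ["17-28", "29-35", "36-70"]
fans["age_group"] = pd.cut(fans["age"], bins=bins, labels=labels, include_lowest=True)

# Count fans per brand and bracket, then convert each brand's row to percentages.
counts = fans.groupby(["brand", "age_group"], observed=False).size().unstack("age_group", fill_value=0)
share = counts.div(counts.sum(axis=1), axis=0) * 100
print(share.round(1))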
As Edward Tufte, one of the industry's renowned data visualization experts, once said, "The world is complex, dynamic, multidimensional; the paper is static, flat. How are we to represent the rich visual world of experience and measurement on mere flatland?" There's indeed just too much information out there to be effectively analyzed by all categories of knowledge workers. More often than not, traditional tabular row-and-column reports do not paint the whole picture or — even worse — can lead an analyst to a wrong conclusion. There are multiple reasons to use data visualization; the three main ones are that one:
Cannot see a pattern without data visualization. Simply seeing numbers on a grid often does not tell the whole story; in the worst case, it can even lead one to a wrong conclusion. This is best demonstrated by Anscombe's quartet, where four seemingly similar groups of x and y coordinates reveal very different patterns when represented in a graph (see the sketch after this list).
Cannot fit all of the necessary data points onto a single screen. Even with the smallest reasonably readable font, single line spacing, and no grid, one cannot realistically fit more than a few thousand data points using numerical information only. When using advanced data visualization techniques, one can fit tens of thousands of data points onto a single screen — a difference of an order of magnitude. In The Visual Display of Quantitative Information, Edward Tufte gives an example of more than 21,000 data points effectively displayed on a US map that fits onto a single screen.
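To make the Anscombe's quartet point from the first reason concrete, here is a small, self-contained Python sketch (numpy and matplotlib only). It prints the nearly identical summary statistics of the four well-known data sets and then plots them; the four very different patterns only become obvious in the charts.

# Anscombe's quartet: four x/y data sets with nearly identical summary
# statistics (means, variances, correlations) but very different shapes.
import numpy as np
import matplotlib.pyplot as plt

x_common = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x_common, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x_common, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x_common, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True, sharey=True)
for ax, (name, (x, y)) in zip(axes.flat, quartet.items()):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # On a grid of numbers, the four sets look essentially identical...
    print(f"Set {name}: mean_y={y.mean():.2f}  var_y={y.var(ddof=1):.2f}  "
          f"corr={np.corrcoef(x, y)[0, 1]:.3f}")
    # ...but a simple scatter plot tells four different stories.
    ax.scatter(x, y)
    ax.set_title(f"Set {name}")
plt.tight_layout()
plt.show()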
Demands by users of business intelligence (BI) applications to "just get it done" are turning typical BI relationships, such as business/IT alignment and the roles that traditional and next-generation BI technologies play, upside down. As business users demand more control over BI applications, IT is losing its once-exclusive control over BI platforms, tools, and applications. It's no longer business as usual: For example, organizations are supplementing previously unshakable pillars of BI, such as tightly controlled relational databases, with alternative platforms. Forrester recommends that business and IT professionals responsible for BI understand and start embracing some of the latest BI trends — or risk falling behind.
Traditional BI approaches often fall short for the following two reasons (among many others):
BI hasn't fully empowered information workers, who still largely depend on IT.
BI platforms, tools, and applications aren't agile enough.
Emerging ARM server vendor Calxeda has been hinting for some time that they had a significant partnership announcement in the works, and while we didn't necessarily not believe them, we hear a lot of claims from startups telling us to "stay tuned" for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling just one major systems partner – and it just happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.
At its core (an unintended but not bad pun), Project Moonshot from HP's Hyperscale business unit and Calxeda's server technology are about improving the efficiency of web and cloud workloads, and they promise improvements in excess of 90% in power efficiency, along with similar improvements in physical density, compared with current x86 solutions. As I noted in my first post on ARM servers and in other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And because workloads (such as memcached, Hadoop, and static web serving) will be selected for their fit to this new platform, the ones that actually run on it may well come close to the figures quoted by HP and Calxeda.
Marketing mix modeling solutions have been around for quite some time, providing marketers in several key categories with complex statistical models that aim to find the correlation between past marketing activities and business outcomes, like sales or market share.
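At its simplest, the statistical core of such a model is a regression of the business outcome on past spend by channel. The sketch below is a deliberately toy version: ordinary least squares on made-up weekly data with invented channel coefficients. It is not any vendor's methodology; real marketing mix models add adstock (carry-over), diminishing-returns curves, seasonality, pricing, and distribution effects.

# A deliberately simplified marketing-mix model: regress weekly sales on
# weekly spend per channel with ordinary least squares. The spend data and
# response coefficients are made up for illustration; real solutions model
# carry-over, saturation, seasonality, price, and distribution as well.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104

# Hypothetical weekly spend by channel, in $000s.
tv      = rng.uniform(50, 200, weeks)
digital = rng.uniform(20, 120, weeks)
print_  = rng.uniform(0, 60, weeks)

# Simulated observed sales driven by that spend plus noise.
sales = 500 + 2.0 * tv + 3.5 * digital + 0.8 * print_ + rng.normal(0, 40, weeks)

# Fit sales = b0 + b1*tv + b2*digital + b3*print and report the estimates,
# which approximate incremental sales per $1K of spend in each channel.
X = np.column_stack([np.ones(weeks), tv, digital, print_])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
for name, b in zip(["baseline", "tv", "digital", "print"], coef):
    print(f"{name:>8}: {b:7.2f}")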
This space, however, has recently seen significant changes due to a few specific dynamics:
The proliferation of digital and social media, which play an increasingly important role in the marketing mix.
Marketers' increased demand for tools that are not only able to deliver insights on past campaigns but also able to give forward-looking recommendations on how to improve marketing return on investment (ROI) in the future.
The rising role that sophisticated software plays in integrating the ever-growing number of data streams and in enabling complex analysis to be navigated and customized via powerful graphic user interfaces.
To help navigate this complex and highly relevant space for senior marketers, our research team has published the first Forrester Wave™ for vendors in the marketing mix modeling space. We screened more than 30 vendors, shortlisted six that we consider to be the key players in this very fragmented market, and ranked them according to more than 40 different criteria. The evaluation uncovered a market in which:
MarketShare, Marketing Management Analytics, and ThinkVine lead the pack.
SymphonyIRI is a Leader but lacks collaborative functionality.
Marketing Analytics and Ninah are competitive Strong Performers.
Yesterday, HP agreed to buy UK software firm Autonomy Corp. for $10 billion to move into the enterprise information management (EIM) software business. HP wants to add IP to its portfolio, build next-generation information platforms, and create a vehicle for services. It is following IBM's strategy of acquiring software to sell alongside its hardware and services. With Autonomy under its wing, HP plans to help enterprises with a big, complicated problem: how to manage unstructured information for competitive advantage. Here's the wrinkle: Autonomy hasn't solved that problem. In fact, it's not a pure technology problem, because content is so different from data. It's a people and process problem, too.
Here is the Autonomy overview that HP gave investors yesterday:
Of course, this diagram doesn't look like the heterogeneous environment of a typical multinational enterprise. Autonomy has acquired many companies to fill in the boxes here, but the reality is that companies have products from a smorgasbord of content management vendors and no incentive to stick with any one of them.
There has been a great deal of talk over the past few years about what acronym will replace WCM (web content management). Web experience management? Web site management? Web engagement management? Web experience optimization? The list goes on and on.
Certainly, the evolution of the WCM term makes sense on paper, since traditional content management functionality now makes up only a portion of the products that WCM vendors offer. WCM vendors are also in the content delivery/engagement business, and they are even dipping their toes into web intelligence. However, Forrester clients still overwhelmingly ask about "WCM," and that term isn't going away any time soon.
But even without changing the acronym, it is time to start thinking about WCM beyond just managing content or siloed websites or experiences. Instead, we need to think of how WCM will interact and integrate with other solutions – like search, recommendations, eCommerce, and analytics – in the customer experience management (CXM) ecosystem in order to enable businesses to manage experiences across customer touchpoints.
How are we handling this convergence at Forrester? Several of us who cover various CXM products – like Brian Walker (commerce), Bill Band (CRM), Joe Stanhope (web analytics), and myself (WCM) – teamed up to outline what our vision of CXM looks like, including process-based tools, delivery platforms, and customer intelligence. We've created two versions of the report: one written for Content & Collaboration professionals and one for eBusiness & Channel Strategy professionals.
I need your help. I am conducting research into business intelligence (BI) software prices: averages, differences between license and subscription deals, differences between small and large vendor offerings, etc. In order to help our clients look beyond just the software prices and consider the fully loaded total cost of ownership, I also want to throw in service and hardware costs (I already have data on annual maintenance and initial training costs). I've been in this market long enough to understand that the only correct answer is "It depends" — on the levels of data complexity, data cleanliness, use cases, and many other factors. But if I could pin you down to a ballpark formula for budgeting and estimation purposes, what would that be? Here are my initial thoughts — based on experience, other relevant research, etc.
Initial hardware as a percentage of software cost = 33% to 50%
Ongoing hardware maintenance = 20% of the initial hardware cost
Initial design, build, and implementation services. Our rule of thumb has always been 300% to 700% of the software cost, but that obviously varies by deal size. So here's what I came up with:
Less than $100,000 in software = 100% in services
$100,000 to $500,000 in software = 300% in services
$500,000 to $2 million in software = 200% in services
$2 million to $10 million in software = 50% in services
More than $10 million in software = 25% in services
Then 20% of the initial software cost for ongoing maintenance, enhancements, and support (the sketch after this list rolls all of these assumptions into a single estimate).
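To make the arithmetic easy to play with, here is a minimal Python sketch that encodes the brackets and percentages proposed above into a rough first-year and ongoing budget estimate. The numbers are exactly the rule-of-thumb figures listed; treat them as conversation starters, not validated benchmarks.

# Rough BI total-cost-of-ownership estimator based on the rule-of-thumb
# percentages proposed above; these are discussion starters, not benchmarks.

def estimate_bi_tco(software_cost, hw_pct=0.40):
    """Estimate first-year and ongoing BI costs for a given software cost.

    software_cost -- initial software license cost in dollars
    hw_pct        -- initial hardware as a fraction of software cost
                     (proposed range: 0.33 to 0.50)
    """
    # Services multiple by deal size, from the brackets above.
    if software_cost < 100_000:
        services_pct = 1.00
    elif software_cost < 500_000:
        services_pct = 3.00
    elif software_cost < 2_000_000:
        services_pct = 2.00
    elif software_cost < 10_000_000:
        services_pct = 0.50
    else:
        services_pct = 0.25

    hardware = software_cost * hw_pct
    services = software_cost * services_pct
    return {
        "software": software_cost,
        "initial hardware": hardware,
        "initial services": services,
        "first-year total": software_cost + hardware + services,
        "ongoing hardware maintenance per year": hardware * 0.20,
        "ongoing software maintenance and support per year": software_cost * 0.20,
    }

for item, cost in estimate_bi_tco(750_000).items():
    print(f"{item:<50}${cost:>12,.0f}")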
Thoughts? Again, I am not looking for “it depends” answers, but rather for some numbers and ranges based on your experience.
Forrester is in the middle of a major research effort on various Big Data-related topics. As part of this research, we’ll be kicking off a client survey shortly. I’d like to solicit everyone’s input on the survey questions and answer options. Here’s the first draft. What am I missing?
Scope. What is the scope of your Big Data initiative?
Status. What is the status of your Big Data initiative?
Industry. Are the questions you are trying to address with your Big Data initiative general or industry-specific?
Domains. What enterprise areas does your Big Data initiative address?
Why Big Data? What are the main business requirements or inadequacies of earlier-generation BI/DW/ETL technologies, applications, and architecture that are causing you to consider or implement Big Data?
Velocity of change and scope/requirements unpredictability
Analysis-driven requirements (Big Data) vs. requirements-driven analysis (traditional BI/DW)
Cost. Big Data solutions are less expensive than traditional ETL/DW/BI solutions