Many vendors claim to have tools and technologies that enable BI end user self service. Do they? When you evaluate whether your BI vendor can support end user self service, consider the following types of “self service” and the related BI tool requirements:
#1. Self service for average, casual users.
What do these users need to do?
Run and lightly customize canned reports and dashboards
Run ad hoc queries
Add calculated measures
Fulfill their BI requirements with little or no training (typically one needs a search-like, not a point-and-click, UI for this)
What capabilities do they need for this?
Report and dashboard templates
Customizable prompts, sorts, filters, and ranks
Report, query, dashboard building wizards
Semantic layer (not all BI vendors have a rich semantic layer)
Prompting for columns (not all BI vendors let you do that)
Drill anywhere (only BI vendors with ROLAP and multisourcing / data federation provide this capability)
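To make the semantic-layer and wizard capabilities above concrete, here is a minimal sketch of how a semantic layer lets casual users assemble ad hoc queries from business terms instead of SQL. The measure names, table aliases, and `build_query` helper are all hypothetical illustrations, not any vendor's actual API:

```python
# Hypothetical semantic layer: business-friendly names mapped to the
# physical columns and expressions a casual user should never have to see.
SEMANTIC_LAYER = {
    "Revenue": "SUM(oi.quantity * oi.unit_price)",  # calculated measure
    "Orders": "COUNT(DISTINCT o.order_id)",
    "Region": "c.region",                            # dimension
}

def build_query(measures, dimensions, from_clause):
    """Assemble an ad hoc SQL query from the terms the user picked."""
    select_parts = [f"{SEMANTIC_LAYER[d]} AS {d}" for d in dimensions]
    select_parts += [f"{SEMANTIC_LAYER[m]} AS {m}" for m in measures]
    sql = f"SELECT {', '.join(select_parts)} FROM {from_clause}"
    if dimensions:
        sql += " GROUP BY " + ", ".join(SEMANTIC_LAYER[d] for d in dimensions)
    return sql
```

A report wizard built on such a layer only ever shows the user "Revenue by Region"; the grouping, joins, and aggregation logic stay hidden in the mapping.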
#2. Self service for advanced, power users
What do these users need to do?
Perform what-if scenarios (this often requires write back, which very few BI vendors allow)
Add metrics, measures, and hierarchies not supported by the underlying data model (typically one needs some kind of in-memory analytics capability for this)
Explore based on new (not previously defined) entity relationships (typically one needs some kind of in-memory analytics capability for this)
Explore without knowing exactly what one is looking for (typically one needs a search-like UI for this)
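The what-if capability above can be sketched in a few lines: the base figures stay read-only, and the user's overrides live in an in-memory scenario layer, which is roughly what tools without true warehouse write-back do. The quarter labels, numbers, and `what_if` helper are invented for illustration:

```python
# Read-only base data, as it would come from the warehouse.
base_forecast = {"Q1": 100_000, "Q2": 110_000, "Q3": 120_000, "Q4": 130_000}

def what_if(base, overrides=None, growth=0.0):
    """Return a scenario copy: apply per-quarter overrides, then a
    uniform growth assumption. The base data is never mutated."""
    scenario = dict(base)
    for quarter, value in (overrides or {}).items():
        scenario[quarter] = value
    return {q: round(v * (1 + growth), 2) for q, v in scenario.items()}
```

The key design point is that the scenario is a copy: the power user can model aggressively while the governed numbers in the warehouse remain untouched, which is why vendors that lack write-back can still offer in-memory what-if analysis.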
Over the past few months, I had the opportunity to interview representatives from 10 leading technology service providers about how they help their clients innovate. My recent research summarizing those interviews is available to Forrester clients on our website. For those interested in the high-level points I raised, here are a few of the key findings:
Last week, I was in LA, hosting a session on online panel quality at Forrester’s Marketing Forum. I discussed the past, present, and future of online panel quality with Steve Schwartz from Microsoft, Maria Cristina Gomez from Procter & Gamble, and Frank Findley from ARS Group.
Online panel quality is still a major issue in the industry. The whole discussion started in 2006 with a speech by Kim Dedeker -- at that time, the VP of global consumer and market knowledge at Procter & Gamble. In it, she publicly expressed her concerns about online panel quality, how it affected their research results, and, as a result, the credibility of market research. In her speech, she stressed that, in her opinion, the industry – both research suppliers and clients – needed to focus on how to improve the overall quality of research. Her appeal to the industry was very successful. Many other research buyers weighed in with their stories, and the research providers took up the challenge. Since then, many initiatives have started, such as the ARF’s Foundation of Quality and ESOMAR’s 26 questions, as well as more technology-driven approaches like Peanut Labs’ Optimus and MarketTools’ TrueSample.
The Hulu-will-charge-you-money rumor mill is churning once again and the blogosphere has lit up with preemptively angered Hulu viewers vowing that they will never darken Hulu’s digital door again. Some call it greed, others point to nefarious pressure from ailing broadcast and cable operations, while some decry the end of a freewheeling era. They are all wrong.
Hulu charging for content is a good thing. In fact, it’s a necessary next step to get us where we need to be. Let me explain.
This comes at an awkward time, to say the least. The site’s CEO, Jason Kilar, admitted just weeks ago that the free site is profitable, taking in more than $100 million last year and on a run-rate to more than double that this year. Blunting that momentum would be foolish. But letting it run absent the burden of helping to pay for the shows it profits from would also be irresponsible: not in a Father-knows-best “charging for content builds character” kind of irresponsible, but in a “not taking advantage of the opportunity to take Hulu to the next level for the benefit of the consumer” kind of irresponsible.
In general, online African Americans are less well-off and spend less while shopping online compared with other online consumers. However, several factors point to the opportunity of further engaging with this group. Our Technographics® research shows that African American online users are much less annoyed by the amount of advertising today compared with online users overall: 60% of the US online population agree that they are annoyed by advertising, versus only 39% of online African Americans. Furthermore, ads inform the purchase decisions that online African Americans make: Nearly twice as many African American online users (27%) as overall online users (15%) agree that ads help them decide what to buy.
Furthermore, 24% of online African Americans say that owning the best brand is important to them, compared with only 16% of all US online consumers. Brand reputation is therefore a much bigger influencer in their purchase decision process.
The IT Services Marketing Association (ITSMA) has just published this interview with me to its members to coincide with my presentation on this topic at the Forrester Marketing Forum here in Los Angeles. For those European members of ITSMA, I’d like to point out that I will be hosting and contributing to the ITSMA workshop “Building the Business Case for Social Media in B2B Marketing” in London on May 5th. Perhaps I will see you there. Anyway, I’m enjoying our conversations, so keep your comments and emails coming.
Always keeping you informed!
In this Viewpoint, Peter O’Neill, VP & Principal Analyst, Forrester, shares his research on and passion for international technology industry marketing, with a specific emphasis on field marketing strategy and execution, including the dynamics of interactions between headquarters and field marketing organizations.
ITSMA: What challenges do marketers face due to globalization?
O’Neill: Our clients often ask the basic question: What does it mean to "go global"? Well, going global really means having customers in multiple countries—i.e., in local geographic
Last week I published two research reports on the hottest topic in PCI: tokenization and transaction encryption. Part 1 was an introduction to the topic, and Part 2 provided some action items for companies to consider during their evaluation of these technologies. Respected security blogger Martin McKeay commented on Part 1. Serendipitously, Martin was also in Dallas (where I live) last week, and we got an opportunity to chat in person about the report and other security topics.
Martin’s post highlighted several issues that deserve some response. He felt that I “glossed over several important points people who are considering either technology need to be aware of.” Let me review those items:
Comment: “This is one form of tokenization, but it completely ignores another form of tokenization that’s been on the rise for several years; internal tokenization by the merchant with a (hopefully) highly secure database that acts as a central repository for the merchant’s cardholder data, while the remainder of the card flow stays the same as it is now.”
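The internal tokenization Martin describes can be sketched simply: the merchant keeps cardholder data in one hardened vault and lets a substitute token, shaped like a card number, flow through the rest of the systems. This is an illustrative toy, not a PCI-compliant design; the `TokenVault` class and its format-preserving token scheme are my own assumptions:

```python
import secrets

class TokenVault:
    """Toy merchant-side token vault: PAN stored once, token used elsewhere."""

    def __init__(self):
        self._vault = {}    # token -> PAN; in practice an encrypted, audited DB
        self._by_pan = {}   # PAN -> token, so repeat purchases reuse one token

    def tokenize(self, pan: str) -> str:
        if pan in self._by_pan:
            return self._by_pan[pan]
        # Random digits with the real last four preserved, so downstream
        # systems that display "ending in 1234" keep working unchanged.
        token = ("".join(str(secrets.randbelow(10))
                         for _ in range(len(pan) - 4)) + pan[-4:])
        self._vault[token] = pan
        self._by_pan[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only the few systems that truly need the PAN ever call this.
        return self._vault[token]
```

The point of the design is scope reduction: every system holding only tokens can, in principle, fall out of PCI scope, while the card flow outside the vault stays exactly as it is now.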
Calling all tech industry marketing and strategy professionals! We need some help with our current research on market opportunity assessment.
"Where in the world are you? And, how'd you get there?"
Strategists in the tech industry face a continuous stream of critical decisions in today’s complex global market. One of those is “where in the world?” On the one hand, globalization expands the options available, making it “easier” to enter new markets. However, those decisions aren’t always themselves easy. To better understand how strategists identify, evaluate, and prioritize technology market opportunities in new geographies, we have launched a short survey. The survey questions cover background on market presence and intended entry, the data sources and factors that influence these decisions, stakeholders’ involvement, and the process itself. This is where we need your help. If you are part of a team, or a team leader, for strategic planning in global markets, we’re interested in your input. The data gathered will be used for an upcoming report – Where in the World? Tech vendor strategists weigh opportunities (and risks) of expansion (working title). The report will also use public data and research interviews (where we'd also like your help).
The survey should take no more than 15 minutes and participants who complete the survey will receive a complimentary copy of the completed report. Terms and conditions (the fine print): As always, we keep your individual responses confidential.
Like many movements before it, IT is rapidly evolving toward an industrial model. A process or profession becomes industrialized when it matures from an art form into a widespread, repeatable function with predictable results, accelerated by technology to achieve far higher levels of productivity. Results must be deterministic (trustworthy) and execution must be fast and nimble, two related but different qualities. Customer satisfaction need not be addressed directly, because reliability and speed result in lower costs and higher satisfaction.
IT should learn from agriculture and manufacturing, which have perfected industrialization. In agriculture, productivity is orders of magnitude better. Genetic engineering made crops resistant to pests and environmental extremes such as droughts while simultaneously improving consistency. The industrialized evolution of farming means we can feed an expanding population with fewer farmers. It has benefits in nearly every facet of agricultural production.
Manufacturing process improvements like the assembly line and just-in-time manufacturing combined with automation and statistical quality control to ensure that we can make products faster and more consistently, at a lower cost. Most of the products we use could not exist without an industrialized model.
This is my first post as the new Research Director for the Security and Risk team here at Forrester. During my first quarter as RD, I spent a lot of time listening to our clients and working with the analysts and researchers on my team to create a research agenda for the rest of the year that will help our clients tackle their toughest challenges. It was a busy Q1 for the team. We hosted our Security Forum in London, fielded more than 443 end client inquiries, completed more than 18 research reports, and delivered numerous custom consulting engagements.
In the first quarter of 2010, clients were still struggling with the security ramifications of increased outsourcing, cloud computing, consumer devices, and social networking. These trends have created a shift in data and device ownership that is usurping traditional IT control and eroding traditional security protections.
We’re still dealing with this shift in 2010 — there’s no easy fix. This year there is a realization that the only way the security organization can stay one step ahead of whatever business or technology shift happens next is to transform itself from a reactive, operationally focused silo of technical expertise into one focused on proactive information risk management. This requires a reexamination of the security program itself (strategy, policy, roles, skills, success metrics, etc.), its security processes, and its security architecture. In short, it means taking a step back and looking at the big picture before evaluating and deploying the next point protection product. Not surprisingly, our five most read docs from January 1, 2010, to today have less to do with specific security technologies: