The second session of the AudienceScience Summit this afternoon is a panel moderated by Quentin George, Chief Digital Officer of Mediabrands. Panelists include Dave Dickman, SVP of Digital Media Sales at Warner Bros. Television, and Barbara Healy, VP of Online and Mobile Fulfillment at Tribune.
The theme of the panel was intended to address how these publishers manage their audience assets. But the primary message I took away was that publishers are focused on solution selling -- finding ways to sell more high-margin offerings, whatever those happen to be. I was expecting to hear more specifics about how they are working with publisher optimization solutions or data management offerings. Instead, it sounded like the focus was on any and all efforts to create unique ad solutions, rather than just impressions.
Two points stood out, one good, one bad:
1) Warner Bros. talked about an alternative way to think about creative: empowering creatives to build original programming that airs on the Web and allows users to provide input into the direction the plot and production take. This approach garnered premium sponsorship (from J&J) and helped creative resources feel a part of (and not irrelevant to) emerging media.
Coming to you live from the AudienceScience Targeting Summit in Las Vegas, a three-day event for publishers and advertisers talking about changes in display media and the value of targeting for both sides of the online advertising ecosystem: buyers and sellers. My presentation was part of the publisher day (day one is for publishers, day two for both publishers and advertisers, and day three for advertisers alone) and spoke to the findings of a custom study I worked on for AudienceScience earlier this year. The conclusions I shared today are:
Online advertising has significant growth in store
Audience and behavioral targeting will further grow advertiser investment in display media
And yet, advertisers still second-guess the value of display advertising because it is so hard to take full advantage of (I walked through a laundry list of challenges online advertisers face, like media proliferation, measurement challenges, dollars shifting downstream from branding to more direct sales channels, operational inefficiencies, and limited staff)
So publishers must be ready to create more automated, more dynamic, more data-driven solutions that help advertisers overcome the challenges of using display today.
Product strategists struggle with the issue of value all the time: What constitutes a revenue-maximizing price for my product, given the audience I’m targeting, the competition I’m trying to beat, the channel for purchase, and the product’s overall value proposition?
There are tools like conjoint analysis that can help product strategists test price directly via consumer research. However, there’s a bigger strategic question in the background: How can companies create and sustain consistently higher prices than their key competitors over the long term?
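To make the revenue-maximizing-price question concrete, here is a minimal sketch of the kind of calculation that sits behind pricing research: given survey responses on maximum willingness-to-pay (the sort of data a conjoint study can yield), pick the price that maximizes revenue. The survey figures below are invented purely for illustration, not from any actual study.

```python
def revenue_maximizing_price(willingness_to_pay):
    """Try each observed price point; revenue at a price is that price
    times the number of respondents willing to pay at least that much."""
    best_price, best_revenue = None, 0.0
    for price in sorted(set(willingness_to_pay)):
        buyers = sum(1 for w in willingness_to_pay if w >= price)
        revenue = price * buyers
        if revenue > best_revenue:
            best_price, best_revenue = price, revenue
    return best_price, best_revenue

# Made-up willingness-to-pay responses (in $) from a 10-person survey
survey = [600, 750, 800, 900, 1000, 1000, 1100, 1200, 1500, 2000]
price, revenue = revenue_maximizing_price(survey)
```

Note that the revenue-maximizing price here ($750) is well below the highest price anyone would pay; a premium strategy like Apple's works by shifting the whole willingness-to-pay distribution upward rather than by skimming the top of a fixed one.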
The Mac represents a good case study for this business problem. Macs have long earned a premium over comparable Windows PCs. Though prices for Macs have come down over time, they remain relatively more expensive, on average, than Windows-based PCs. In fact, Apple has successfully cornered the market on higher-end PCs: According to companies that track the supply side, perhaps 90% of PCs that sold for over $1,000 in Q4 2009 were Macs.
Macs share common characteristics with Windows PCs on the hardware front -- ever since Apple switched to Intel processors about four years ago, they’ve had comparable physical elements. But the relative pricing for Macs has remained advantageous to Apple. At the same time, the Mac has gained market share and is bringing new consumers into the Mac family -- for example, about half of consumers who bought their Mac in an Apple Store in Q1 2010 were new to the Mac platform. So Apple is doing something right here: providing value to consumers that makes them willing to pay more.
There’s a lot of hype out there from many vendors who claim that they have tools and technologies to enable BI end user self service. Do they? When you analyze whether your BI vendor can support end user self service, consider the following types of “self service” and related BI tool requirements:
#1. Self service for average, casual users.
What do these users need to do?
Run and lightly customize canned reports and dashboards
Run ad hoc queries
Add calculated measures
Fulfill their BI requirements with little or no training (typically one needs a search-like, not point-and-click, UI for this)
What capabilities do they need for this?
Report and dashboard templates
Customizable prompts, sorts, filters, and ranks
Report, query, dashboard building wizards
Semantic layer (not all BI vendors have a rich semantic layer)
Prompting for columns (not all BI vendors let you do that)
Drill anywhere (only BI vendors with ROLAP and multisourcing / data federation provide this capability)
#2. Self service for advanced, power users
What do these users need to do?
Perform what-if scenarios (this often requires write back, which very few BI vendors allow)
Add metrics, measures, and hierarchies not supported by the underlying data model (typically one needs some kind of in-memory analytics capability for this)
Explore based on new (not previously defined) entity relationships (typically one needs some kind of in-memory analytics capability for this)
Explore when one doesn’t know exactly what one is looking for (typically one needs a search-like UI for this)
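The power-user capabilities above can be sketched in plain code. Below is a minimal, hypothetical illustration of the in-memory pattern: rows pulled from a warehouse are held in memory, and the user layers on a calculated measure and a what-if scenario without touching the underlying data model. All table names and figures are invented for illustration and do not come from any specific BI product.

```python
# Rows as they might arrive from the underlying data model
rows = [
    {"region": "East", "revenue": 120000, "cost": 90000},
    {"region": "West", "revenue": 150000, "cost": 110000},
]

# A user-defined calculated measure not present in the underlying model
for r in rows:
    r["margin_pct"] = round(100.0 * (r["revenue"] - r["cost"]) / r["revenue"], 1)

# A what-if scenario: what happens to margin if costs drop 5%?
# Done entirely in memory, so no write back to the warehouse is needed.
scenario = []
for r in rows:
    what_if_cost = r["cost"] * 0.95
    scenario.append({
        "region": r["region"],
        "what_if_margin_pct": round(
            100.0 * (r["revenue"] - what_if_cost) / r["revenue"], 1
        ),
    })
```

A real BI tool wraps this pattern in a UI and a governed semantic layer; the point is that the new measure and the scenario exist only in the analytic layer, which is why in-memory capability (and, for persisted scenarios, write back) matters.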
Over the past few months, I had the opportunity to interview representatives from 10 leading technology service providers about how they help their clients innovate. My recent research summarizing those interviews is available to Forrester clients on our website. For those interested in the high-level points I raised, here are a few of the key findings:
Last week, I was in LA, hosting a session on online panel quality at Forrester’s Marketing Forum. I discussed the past, present, and future of online panel quality with Steve Schwartz from Microsoft, Maria Cristina Gomez from Procter & Gamble, and Frank Findley from ARS Group.
Online panel quality is still a major issue in the industry. The whole discussion started in 2006 with a speech by Kim Dedeker -- at that time, the VP of global consumer and market knowledge at Procter & Gamble. In it, she publicly expressed her concerns about online panel quality, how it affected their research results, and, as a result, the credibility of market research. In her speech, she stressed that, in her opinion, the industry – both research suppliers and clients – needed to focus on how to improve the overall quality of research. Her appeal to the industry was very successful. Many other research buyers weighed in with their stories, and the research providers took up the challenge. Since then, many initiatives have started, such as the ARF’s Foundation of Quality and ESOMAR’s 26 questions, as well as more technology-driven approaches like Peanut Labs’ Optimus and MarketTools’ TrueSample.
The Hulu-will-charge-you-money rumor mill is churning once again and the blogosphere has lit up with preemptively angered Hulu viewers vowing that they will never darken Hulu’s digital door again. Some call it greed, others point to nefarious pressure from ailing broadcast and cable operations, while some decry the end of a freewheeling era. They are all wrong.
Hulu charging for content is a good thing. In fact, it’s a necessary next step to get us where we need to be. Let me explain.
This comes at an awkward time, to say the least. The site’s CEO, Jason Kilar, admitted just weeks ago that the free site is profitable, taking in more than $100 million last year and on a run rate to more than double that this year. Blunting that momentum would be foolish. But letting it run free of the burden of helping to pay for the shows it profits from would also be irresponsible -- not in a father-knows-best, “charging for content builds character” kind of irresponsible, but in a “not taking the opportunity to bring Hulu to the next level for the benefit of the consumer” kind of irresponsible.
In general, online African Americans are less well-off and spend less while shopping online compared with other online consumers. However, several factors point to the opportunity of further engaging with this group. Our Technographics® research shows that African American online users are much less annoyed by the amount of advertising today compared with online users overall: 60% of the US online population agree that they are annoyed by advertising, versus only 39% of online African Americans. Furthermore, ads inform the purchase decisions that online African Americans make: Nearly twice as many African American online users (27%) as overall online users (15%) agree that ads help them decide what to buy.
Furthermore, 24% of online African Americans say that owning the best brand is important to them, compared with only 16% of all US online consumers. Brand reputation is therefore a much bigger influence in their purchase decision process.
The IT Services Marketing Association (ITSMA) has just published this interview with me to its members to coincide with my presentation on this topic at the Forrester Marketing Forum here in Los Angeles. For the European members of ITSMA, I’d like to point out that I will be hosting and contributing to the ITSMA workshop “Building the Business Case for Social Media in B2B Marketing” in London on May 5th. Perhaps I will see you there. Anyway, I’m enjoying our conversations, so keep your comments and emails coming.
Always keeping you informed!
In this Viewpoint, Peter O’Neill, VP & Principal Analyst, Forrester, shares his research on and passion for international technology industry marketing, with a specific emphasis on field marketing strategy and execution, including the dynamics of interactions between headquarters and field marketing organizations.
ITSMA: What challenges do marketers face due to globalization?
O’Neill: Our clients often ask the basic question: What does it mean to "go global"? Well, going global really means having customers in multiple countries—i.e., in local geographic
Last week I published two research reports on the hottest topic in PCI: tokenization and transaction encryption. Part 1 was an introduction to the topic, and Part 2 provided some action items for companies to consider during their evaluation of these technologies. Respected security blogger Martin McKeay commented on Part 1. Serendipitously, Martin was also in Dallas (where I live) last week, and we got an opportunity to chat in person about the report and other security topics.
Martin’s post highlighted several issues that deserve some response. He felt that I “glossed over several important points people who are considering either technology need to be aware of.” Let me review those items:
Comment: “This is one form of tokenization, but it completely ignores another form of tokenization that’s been on the rise for several years; internal tokenization by the merchant with a (hopefully) highly secure database that acts as a central repository for the merchant’s cardholder data, while the remainder of the card flow stays the same as it is now.”