Pulling Off A Razor-Razorblade Product Strategy, Like Amazon's Product Strategists

JP Gownder

Amazon’s product strategists shocked some constituencies with their $199 price point for the Amazon Kindle Fire tablet announced today. But there’s a fundamental lesson in this pricing, and it rests on an old product strategy model: the so-called Razor-Razorblade pricing model.

We all know this model well, as consumers: your initial purchase of a razor is relatively cheap, but the cost of replacement razorblades really adds up over time. If you don’t buy razors, perhaps you’re familiar with this scenario from your inkjet printer. Remember how cheap that scanner/printer was? And have you ever seen the price of replacement ink cartridges?

The Razor-Razorblade model works when “dependent goods” – the refills, the stuff you need to keep buying to use the product – are closely related to the anchor product. In the case of the Kindle Fire, the dependent goods are content and services: MP3s, streaming video, and of course books, magazines, and newspapers, plus cloud services that let you store and synchronize your content across devices. Amazon’s product strategists can afford to charge a low entry price to drive adoption of the device, and then (they hope) deliver an experience attractive enough that Kindle Fire owners keep paying for content and services over the device’s life.
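
To make that math concrete, here is a minimal sketch in Python of the razor-razorblade arithmetic. Every number in it is an illustrative assumption of mine, not a figure Amazon has disclosed:

    # Razor-razorblade economics: all figures are assumed, for illustration.
    device_price = 199.00           # Kindle Fire retail price
    device_cost = 210.00            # hypothetical build-and-ship cost per unit
    monthly_content_spend = 10.00   # assumed spend on books, MP3s, and video
    content_margin = 0.30           # assumed margin on content and services
    months_owned = 24               # assumed ownership period

    hardware_profit = device_price - device_cost
    content_profit = monthly_content_spend * content_margin * months_owned

    print(f"Hardware profit per unit:  ${hardware_profit:8.2f}")   # -$11.00
    print(f"Content profit over life:  ${content_profit:8.2f}")    #  $72.00
    print(f"Total profit per customer: ${hardware_profit + content_profit:8.2f}")

Under these assumptions the hardware loses $11 per unit, but two years of content purchases more than recover the loss. The model collapses, of course, if owners buy the device and never buy the content.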

Hence Amazon CEO Jeff Bezos’ portrayal of the Kindle Fire product strategy: “What we are doing is offering premium products at non-premium prices,” Bezos says. Other tablet contenders “have not been competitive on price” and “have just sold a piece of hardware. We don’t think of the Kindle Fire as a tablet. We think of it as a service.”

Read more

Now This Is How To Do The App Internet Right — Autodesk Cloud Shows The Way

James Staten

Much of the discussion around integrating applications with the Internet has centered on mobile applications connected to web backends, which together deliver better customer experiences than mobile apps or web sites could by themselves. But the real power of this concept comes when a full ecosystem is delivered, one that applies mobile, desktop, and cloud-based compute power where each is most appropriate. If you want to see this in action, just look to Autodesk. The company, which we highlighted in this blog last year for its early experimentation with cloud-based rendering, has moved that work substantially forward; it aims to change the way architects, engineers, and designers get their jobs done and to dramatically improve how they interact with clients.
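
The pattern underneath such an ecosystem is simple: a thin client hands a compute-heavy job (here, a render) to a cloud backend and collects the result. The Python sketch below shows the shape of that interaction; the endpoint, job fields, and payload format are hypothetical placeholders, not Autodesk’s actual API:

    # The "App Internet" pattern: offload heavy rendering to the cloud and
    # poll for the result. Endpoint and field names are hypothetical.
    import time
    import requests

    RENDER_API = "https://render.example.com/jobs"  # placeholder endpoint

    def render_in_cloud(scene_file: str) -> bytes:
        # Submit the scene; the cloud render farm does the heavy compute.
        with open(scene_file, "rb") as f:
            job = requests.post(RENDER_API, files={"scene": f}).json()
        # Poll until the job finishes, then download the rendered image.
        while True:
            status = requests.get(f"{RENDER_API}/{job['id']}").json()
            if status["state"] == "done":
                return requests.get(status["result_url"]).content
            time.sleep(5)

The client stays responsive on a phone or tablet while a farm of cloud machines does work no mobile device could do on its own.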

Read more

Oracle Delivers On SPARC Promises With New T4 Processors And Systems

Richard Fichera

Background – Promises And Potential

Last year I wrote about Oracle’s new plans for SPARC, anchored by a new line of SPARC CPUs engineered in conjunction with Fujitsu (Does SPARC have a Future?), and commented that the first deliveries of this new technology would probably arrive in early 2012. Until we saw tangible evidence of Oracle actually executing on this road map, we could not predict the future viability of SPARC with any confidence.

The T4 CPU

Fast-forward a year, and Oracle has delivered the first of the new CPUs ahead of schedule, with performance gains impressive enough to suggest that SPARC will remain a viable platform for years. Specifically, Oracle has introduced the T4 CPU and systems based on it. The T4, an evolution of Oracle’s highly threaded T-Series architecture, is implemented with an entirely new core that will form the basis, with variations in the number of threads versus cores and in cache design, of the future M- and T-Series systems. The M-Series will have fewer threads and more performance per thread, while the T CPUs will, like their predecessors, emphasize throughput for highly threaded workloads. The T4 has 8 cores, and each core has 8 threads. While the T4 emphasizes highly threaded workload performance, it is important to note that Oracle has also radically improved single-thread performance, claiming a 5X per-thread gain over the T4’s predecessors, which greatly improves its utility for less thread-intensive workloads as well.
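
The arithmetic behind those claims is worth spelling out. A back-of-envelope sketch in Python (the normalized baseline is my assumption for illustration; only the core, thread, and 5X figures come from Oracle):

    # Back-of-envelope T4 math; the baseline of 1.0 is an assumed normalization.
    cores_per_socket = 8
    threads_per_core = 8
    threads_per_socket = cores_per_socket * threads_per_core   # 64 hardware threads

    predecessor_thread_perf = 1.0                    # assumed normalized baseline
    t4_thread_perf = 5.0 * predecessor_thread_perf   # Oracle's claimed ~5X gain

    print(f"Hardware threads per socket: {threads_per_socket}")
    print(f"Per-thread speed vs. predecessor: {t4_thread_perf:.0f}X")

The point: 64 threads per socket keeps the throughput story intact, while the per-thread gain is what makes the T4 credible for workloads that don’t parallelize well.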

The SPARC SuperCluster

Read more

New Study Yields Eye-Opening IT Service Management Benefits

Glenn O'Donnell

In April and May of this year, Forrester and the IT Service Management Forum’s US chapter (itSMF-USA) conducted a joint study to assess the state of ITSM. We collected data from 491 qualified subjects who are heavily involved in ITSM efforts (69% have two or more years of ITSM experience, and 95% hold some level of ITIL certification, 50% of them at an advanced level). Since the study was conducted with the US chapter, the responses were heavily US-centric.

The results offer empirical evidence of something ITSM professionals already know: ITSM offers significant benefits both to the organization and to the professionals themselves. The full report is now in the final editing stages and will be available soon to all Forrester clients, all itSMF-USA members, and all participants who do not already fall into one of those groups. Forrester clients and itSMF-USA members will receive email notifications when it is ready. Others will be contacted directly by itSMF.

This morning (Monday, September 26, 2011), I presented the results at the itSMF-USA’s national conference known as Fusion 11. Here are a few key insights from the study:

  • 51% of ITSM efforts are driven primarily by IT or business executives
  • ITIL has had an overwhelmingly positive impact on:
    • Organizational productivity: 85% positive and 2% negative
    • Service quality: 83% positive and 1% negative
    • IT’s reputation with the business: 65% positive and 3% negative
    • Operational costs: 41% positive and 4% negative
Read more

Security & Risk And Infrastructure & Operations Pros: Drive Customer Growth And Business Differentiation

Laura Koetzle

Security & Risk (S&R) chiefs and Infrastructure & Operations (I&O) leaders have a lot in common, and in great companies, we work in concert to run an efficient, reliable technology infrastructure that keeps critical business assets safe. Much has changed in the world of technology since I pulled my first all-nighter in a data center (falling asleep next to the EMC Symmetrix array was not one of my better ideas – those corners were sharp!), but that partnership is still the same – it takes security engineers and network/server engineers working together to solve really thorny problems.

We have our frictions, of course – I&O pros prioritize operational stability and continuity of service, while S&R pros must occasionally interrupt that continuity to contain security breaches. But when a serious incident (whether security breach or system failure) threatens to sideline our business systems, it falls to us to find and fix the problems – together. We may be organizationally separate now, with I&O reporting into the CIO and the CISO reporting into a COO or Head of Operational Risk, but we share a set of fundamental challenges.  We must excel in our own domains (not exactly a cakewalk) but also anticipate and deliver on what our businesses need (much harder).

And what our businesses seek today is growth – in Forrester’s most recent survey of business decision-makers, the top two priorities were growing overall company revenue and acquiring and retaining customers. S&R pros have already worked hard to escape their “Department of No” reputations, and I&O pros have labored tirelessly to get out of the data center and into the business.

But that’s not enough. 

Read more

The Revenge Of The Politburo! One Company's Quest For Soviet-esque Virtual Desktop Infrastructure

David Johnson

The Politburo is making a comeback
Winston Churchill described Soviet-era politics as a riddle, wrapped in a mystery, inside an enigma. The phrase came to mind recently during a conversation with an I&O professional at a US-based company who needed help. It seems that after two data breaches over the past year from stolen hard drives, his executives had decided the new Central Committee policy should be a locked-down virtual desktop for everyone, no matter their role or workstyle. It was hard for me to conjure up a picture of the profound lack of understanding that led to such a misguided policy, though images of nondescript buildings, row after row of undifferentiated cubicles, and Gulag-style productivity quotas came quickly to mind. Had he not been on the other end of a telephone line, he could've knocked me over with a feather.

Big vendors are using top party relationships to push huge pork-barrel deals under the banner of security and mobility

Read more

Brocade Offers I&O An Opportunity To Control Costs With Their Subscription Program

Andre Kindness

Brocade isn’t the loudest networking vendor on the block, but the subscription switching service it released a little more than two weeks ago should have sent a shockwave through the industry. With Brocade Network Subscription, customers pay for their network infrastructure on a monthly basis. Sadly, because the service was not some new fabric or other new-fangled technology, the industry was quick to dismiss the news as just another cloud announcement, and Brocade’s subscription program registered only a murmur. What was missed is that the service helps solidify I&O as a business unit on the same level as manufacturing, services, energy, and other businesses.

I’ve written extensively about how networking solutions need to support two business realities: 1) enterprises are embedding themselves in their customers’ lives, and 2) businesses are forming symbiotic relationships with their vendors. In regard to the latter, businesses want to ensure that their vendors are creating products and solutions in their best interest, and so there is an expectation that partners will carry some of the financial risk and burden, ensuring that they stay committed. On the vendor side, and with respect to embedding themselves, the reasoning is twofold. First, Wall Street rewards recurring revenue streams, which are more likely if the vendor can create something customers can only get from that particular source. Second, vendors know it costs ten times as much to win a new customer as to keep an existing one, so they would prefer to have customers keep coming back, keeping their operating costs as low as possible.
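
The buyer’s side of that math is easy to sketch. The figures below are hypothetical (Brocade has not published its rates in this form); the point is the crossover between buying switches outright and renting them:

    # Capex purchase vs. opex subscription for a switch: assumed figures only.
    purchase_price = 20_000.0      # hypothetical upfront switch price
    annual_maintenance = 2_000.0   # hypothetical support contract
    monthly_subscription = 700.0   # hypothetical per-month subscription rate

    for months in (12, 24, 36, 48):
        capex = purchase_price + annual_maintenance * (months / 12)
        opex = monthly_subscription * months
        winner = "subscription" if opex < capex else "purchase"
        print(f"{months:2d} months: buy ${capex:,.0f} vs. subscribe ${opex:,.0f} -> {winner}")

Under these assumptions the subscription wins until roughly the three-year mark; the deeper appeal, though, is that it shifts the risk of over-provisioning from I&O back onto the vendor.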

As a result, there has been a shift to a subscription service model. Take for example three distinct markets that support this strategy:

Read more

Intel Developer Forum (IDF) - Cloud. And Cloud, Cloud, Cloud. Oh, Yes, Did I Mention “Cloud”?

Richard Fichera

I just attended IDF, and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for good old-fashioned data center and I&O professionals. Some highlights:

Chips and processors and low-level hardware

Intel is, after all, a semiconductor manufacturer first, and despite its expertise in design, its true core competitive advantage is its fabrication operations – even its competitors grudgingly acknowledge that Intel can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the new 32nm microarchitecture (the “tock” that succeeded Westmere). This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle: a shrink of Sandy Bridge that has inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process, including deeper P-states and the ability to shut down parts of the chip entirely when it is idle. While Intel did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which Intel is obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I had to guess, I would expect more cores and larger caches, along with increased support for I/O virtualization beyond what the chips currently offer.
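
Deeper P-states ultimately show up to I&O pros as frequency scaling on idle cores. On a Linux machine with the cpufreq subsystem enabled, you can watch this directly; here is a minimal sketch (the sysfs paths are the standard cpufreq layout, but availability depends on your kernel and driver):

    # Print each core's current clock from the Linux cpufreq sysfs interface;
    # on a power-managed chip, idle cores sit at lower frequencies (P-states).
    import glob

    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"
    for path in sorted(glob.glob(pattern)):
        cpu = path.split("/")[5]          # e.g. "cpu0"
        with open(path) as f:
            khz = int(f.read().strip())
        print(f"{cpu}: {khz / 1000:.0f} MHz")

Run it on a lightly loaded box and again under load; the spread between idle and busy cores is the power management at work.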

Read more

An Early Look at Windows Server 8 – Can You Say Cloud?

Richard Fichera

Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.

Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:

  • Management, migration and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and its management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
  • Multi-tenancy – A host of features, primarily around management and role-based delegation, make it easier and more secure to implement multi-tenant VM clouds.
  • Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds, a very impressive number indeed. Vendor performance claims are like EPA mileage ratings – you are guaranteed never to exceed them – but this is a major capability, with major implications for HA architecture in any data center (a simple way to check such claims yourself is sketched below).
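
One way to keep a vendor honest on failover numbers is to poll a service inside the VM while you force a failover and time the outage. A minimal sketch; the host and port are placeholders for whatever service your test VM exposes:

    # Measure observed VM failover downtime by polling a TCP service.
    # HOST and PORT are placeholders for a service inside the test VM.
    import socket
    import time

    HOST, PORT = "testvm.example.com", 80

    def is_up(timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((HOST, PORT), timeout=timeout):
                return True
        except OSError:
            return False

    while is_up():          # wait for the forced failover to begin
        time.sleep(0.5)
    outage_start = time.monotonic()
    while not is_up():      # wait for the VM to come back on the other host
        time.sleep(0.5)
    print(f"Observed downtime: {time.monotonic() - outage_start:.1f} seconds")

What you measure includes application restart time, not just the hypervisor’s share, which is exactly the number your users experience.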
Read more

NetIQ + Novell: A Nice Combo That Could Be Even Better If ...

Glenn O'Donnell


On 22-Nov-2010, Attachmate Corporation announced it was acquiring the assets of Novell, Inc. Novell was once on top of the IT world, but its glory had clearly faded. Along the way, however, it acquired several attractive assets of its own (e.g., PlateSpin, Managed Objects). Toward the end of its independence, the future certainly looked bleak for Novell, and especially for its management software businesses.

The immediate reaction to the Attachmate acquisition was skepticism among most industry watchers, including yours truly. My reaction was similar when Attachmate acquired NetIQ. After all, what rationale is there for a legacy mainframe software company to buy either NetIQ or Novell? The perception was that all of these product families would be milked for their maintenance revenue while innovation and further development would be killed. It now appears these fears were largely unfounded, though I stand by my original skepticism. Veterans like me have seen such things unravel before.

The various Novell assets have been redistributed across four companies in the Attachmate Group, with the management assets being assimilated under the NetIQ brand. While a full merger of the NetIQ and Novell assets will take at least a year, the (now) NetIQ team has moved with impressive speed to launch its initial consolidated product families.

Read more