Tablets aren’t the most powerful computing gadgets. But they are the most convenient.
They’re bigger than a smartphone’s tiny screen, even on the big phones sporting nearly 5-inch displays.
They have longer battery life and better always-on capabilities than any PC — and will continue to beat any ultrathin/book/Air laptop on both counts. That makes them very handy for carrying around and using frequently, casually, and intermittently, even where there isn’t a flat surface or a chair on which to use a laptop.
And tablets are very good for information consumption, an activity that many of us do a lot of. Content creation apps are appearing on tablets. They’ll get a lot better as developers get used to building for touch-first interfaces, taking advantage of voice input, and adding motion gestures.
They’re even better for sharing and working in groups. There’s no barrier of a vertical screen, no distracting keyboard clatter, and it just feels natural to pass over a tablet, like a piece of paper, compared to spinning around a laptop.
While the bulk of the enterprise IT market grumbles about the maturity and security of cloud computing services, it looks like the media & entertainment segment is just doing it. At the annual conference for the National Association of Broadcasters (NAB) in Las Vegas, myriad technology vendors are showing off solutions that are transforming the way video content gets to us, and behind the scenes a lot of cloud computing appears to be making it happen. There is a strong fit between these two industries because their business and economic models are evolving in complementary ways.
Sure, we all know that video streaming to your phone, tablet, and TV is the new normal, but how this is accomplished is changing under the covers, and cloud computing brings an economic model that maps better to the business of media and entertainment. You see, while broadcasting is a steady-state business, the production process and eventual popularity of any particular video segment or show aren't. The workflow behind the scenes is evolving rapidly — or more appropriately devolving.
I continue to believe that most consumers using an NFC device in 2012 will more likely use it for device-pairing or data-sharing purposes than for payments. Pairing NFC accessories and reading NFC smart tags will open up new opportunities. NFC will be a key technology for interacting with the world around you — and it is time to test it, as highlighted in this recent piece of research written by my colleague Anthony Mullen. There is an ongoing debate about bar codes’ potential replacement by NFC; I think both technologies serve different objectives and have different advantages but will continue to co-exist. Radio and optical technologies are converging, as highlighted by French startup Mobilead, which does a fantastic job of delivering a great branded experience mixing QR codes and NFC tags.
Microsoft recently announced that it will change its European currency pricing policy from July 2012, and the effect could be a 20% price increase for UK customers. It didn’t publicize the change, preferring to let its resellers tell their customers as and when the change affects them, so I thought I’d tell my readers what you need to know. First, some background. Most global software companies maintain one master price list in their home currency and reset price lists in other currencies every year, or even every quarter, using then-current exchange rates. Microsoft has always taken a different approach: it set €, £, and other prices in 2001 and has used the same exchange rates ever since. There are pros and cons to this approach:
· Pro: local prices are stable and predictable. In contrast, € and £ prices from other US-based vendors may rise or fall by 20% from one year to the next as the currencies fluctuate. (This is one reason why SAP’s revenue rises and Oracle’s falls when the € weakens against the $, as these price changes affect demand.)
· Con: European companies pay more than their US-based peers. This doesn’t matter so much if you’re only competing with domestic rivals, but global companies see and resent the discrepancies.
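To make the mechanics concrete, here is a minimal sketch of the gap that opens up when a conversion rate frozen in 2001 is finally reset to the market rate. All rates and prices below are hypothetical round numbers chosen to produce roughly the 20% figure from the announcement; they are not Microsoft’s actual rates or prices.

```python
# Hypothetical illustration of resetting a long-frozen currency conversion rate.
# All figures are made-up round numbers, not Microsoft's actual prices or rates.

EUR_LIST_PRICE = 1000.00     # price on the vendor's master euro list

GBP_PER_EUR_FROZEN = 0.65    # hypothetical rate fixed back in 2001
GBP_PER_EUR_MARKET = 0.78    # hypothetical market rate in 2012

old_gbp_price = EUR_LIST_PRICE * GBP_PER_EUR_FROZEN   # what UK customers paid
new_gbp_price = EUR_LIST_PRICE * GBP_PER_EUR_MARKET   # after the reset

increase = new_gbp_price / old_gbp_price - 1
print(f"Old price: {old_gbp_price:.2f} GBP")
print(f"New price: {new_gbp_price:.2f} GBP")
print(f"Increase:  {increase:.1%}")
```

The same arithmetic run in reverse explains the "Con" above: when the local currency has strengthened since the rate was frozen, customers in that country pay a premium over their US-based peers instead.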
Employees who use smart devices — PCs or mobile devices — for work have expanded their use of technology more than most people realize. How many devices do you think a typical information worker uses for work? If you ask only the IT staff, the answer will be that most use just a PC, some use a smartphone, and a few use a tablet. But our latest Forrsights workforce employee survey asked more than 9,900 information workers in 17 countries about all of the devices they use for work, including personal devices they use for work purposes. It turns out that they use an average of about 2.3 devices.
About 74% of the information workers in our survey used two or more devices for work — and 52% used three or more! This means that the typical information worker has to figure out how to manage their information from more than one device. So they’ll be increasingly interested in work systems and personal cloud services that enable easy multidevice access, such as Dropbox, Box, SugarSync, Google Docs/Apps, Windows Live, and Apple iCloud.
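Those headline numbers hang together, which is worth a quick back-of-the-envelope check. The per-bucket split below is inferred from the percentages in the survey (26% = 100% minus the 74% who use two or more; 22% = 74% minus the 52% who use three or more); the average for the "three or more" group is a hypothetical value chosen to close the books, not a reported figure.

```python
# Back-of-the-envelope consistency check on the survey numbers.
# Bucket shares are inferred from the text; avg_three_plus is hypothetical.

share_one_device  = 0.26   # 100% - 74% who use two or more devices
share_two_devices = 0.22   # 74% - 52% who use three or more devices
share_three_plus  = 0.52   # use three or more devices

avg_three_plus = 3.08      # assumed average for the three-plus group

average = (share_one_device * 1
           + share_two_devices * 2
           + share_three_plus * avg_three_plus)
print(f"Implied average devices per worker: {average:.1f}")
```

If the three-plus group actually averaged closer to 3.0 or 3.2 devices, the overall mean would move by only a few hundredths, so the reported 2.3 average is robust to that assumption.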
When you dig into the data, the mix of devices info workers use for work differs from what IT provides. About 25% of those devices are mobile devices, not PCs, and 33% run operating systems other than Microsoft’s.
My blog post Apple Infiltrates The Enterprise: 1/5 Of Global Info Workers Use Apple Products For Work! got lots of visibility because of how hot Apple is right now, but our data is much broader than just Apple. Our Forrsights Workforce and Hardware surveys have lots more data about all types of PCs and smart devices that information workers use for work, including types of operating systems — and we even know about what personal-only devices they have.
For example, as of the fall of 2011, the top three smartphone OSes have essentially the same share of the installed base of smartphones used for work by information workers across the globe (full-time workers in companies with 20 or more employees who use a PC, tablet, or smartphone for work one hour or more per day). See the chart below and the reference in the Monday, January 30, New York Times article on BlackBerry in Europe.
The proposed acquisitions of SuccessFactors by SAP and of Emptoris by IBM got me thinking about the impact of market consolidation on buyers: specifically, the difference between dealing with independent specialists and dealing with technology giants selling a large portfolio of products and services. Sourcing professionals talk about wanting “one throat to choke,” but personally I’ve never met one with hands big enough to get round the neck of a huge vendor such as IBM or Oracle. Moreover, many of the giants organize their sales teams by product line, to ensure they fully understand the product they are selling, rather than giving customers one account manager for the whole portfolio who may not understand any of it in sufficient depth. Our clients complain about having to deal with just as many reps as before the acquisitions. They all now have the same logo on their business card, but can’t fix problems outside their area, nor negotiate based on the complete relationship. It seems that buyers end up like Hercules, wrestling either with a Nemean lion or with a Lernaean hydra.
The acquirers' press releases tend to take it for granted that customers will be better off with the one-stop shop. Bill McDermott, co-CEO of SAP, said, “Together, SAP and SuccessFactors will create tremendous business value for customers,” while Lars Dalgaard, founder and CEO of SuccessFactors, talked about “expanding relationships with SAP’s 176,000 customers.” Craig Hayman, general manager of industry solutions at IBM, said, “Adding Emptoris strengthens the comprehensive capabilities we deliver and enables IBM to meet the specific needs of chief procurement officers."
For years I have been railing about cloud washing — the efforts by vendors and, more recently, enterprise I&O professionals to give a cloud computing name to their business-as-usual IT services and virtualization efforts. Now, a cloud vendor, with tongue somewhat in cheek, is taking this rant to the next level.
Appirio, a cloud integration and customization solution provider, has created the cloud computing equivalent of the Razzie Awards to recognize and call out those vendors it and its clients see as the most egregious cloud washing offenders. The first annual Washies will be announced next Wednesday night at The Cigar Bar in San Francisco, and in true Razzie tradition, the nominees are invited to attend and pick up their dubious honors in person. I'm betting that Larry Ellison will be otherwise engaged.
Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation is that this is a combined defensive and offensive move on HP’s part: it will protect HP as its Itanium/HP-UX portfolio slowly declines, while offering attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP hardware. ServiceGuard is a highly regarded clustering and high-availability (HA) facility on HP-UX, with many features for local and geographically distributed HA, and its absence on Linux is often cited as a risk in HP-UX migrations. Making ServiceGuard available on Linux by mid-2012 removes a major operational and architectural hurdle to smooth migration and helps HP retain the business as it moves off HP-UX.
Analysis engine for x86 – Analysis engine is internal software that provides system diagnostics, predictive failure analysis and self-repair on HP-UX systems. With an uncommitted delivery date, HP will port this to selected x86 servers. My guess is that since the analysis engine probably requires some level of hardware assist, the analysis engine will be paired with the next item on the list…