Sedition is simmering in the halls of corporations the world over, as the thirst for productivity and new tools grows faster than IT organizations can quench it with supply. 2012 promises to be the most transformative year for end user computing since the release of the IBM PC in 1981. The escalation of four trends, each individually interesting but together explosive, will bring phase changes in the way Highly Empowered and Resourceful Operatives work and will offer previously captive employees new options for productive freedom by this time next year.
As in IT revolutions past, the front lines are held by restless high performers (executives, technology pros, and creatives) whose nature drives them to push the limits of themselves, their tools, and their support networks, and to bring their own technology to the office when their employers won't provide it. More employees than ever before will bring their own computers to the office in 2012 (most of them Macs), and if IT won't support them, they'll find another way that doesn't include IT.
Cloud-based applications and services such as Dropbox and Projectplace are convincing these folks that they can get better results faster, without IT involved. And these services cost less per week than a few skinny soy chai lattes (no whip!), so many employees just pay the tab themselves.
The IT infrastructure and operations (I&O) organization is no different from any other business function. It employs a multitude of assets to create corporate value. Traditionally, however, I&O’s ability to manage its IT assets has been weak, from both a financial control and an IT asset life-cycle management (ITALM) perspective.
Far too often, an I&O organization lacks the controls needed to avoid IT wastage or to remain compliant with software licensing and regulatory requirements. Thankfully (or unfortunately), most I&O organizations have been able to get by to date. But the times they are a-changin': do-more-with-less efficiency mandates are being prioritized, vendor software audits are increasing, and the business is placing greater focus on what IT costs and the value that internal IT delivers. Something has to give. I&O leaders can step up their game and respond to these internal and external pressures by improving asset management processes to ensure that IT assets are leveraged to generate maximum value for the business.
The IT Infrastructure & Operations (I&O) community has long been awash with management buzzwords and phrases such as “think outside the box,” “bare metal,” “IT-to-business alignment,” “ivory tower,” “NextGen,” “people, process, and technology,” “innovation,” “what does good look like?” and “resonate.” More recently we have had to endure such gems as “cloudwashing,” “hashtag abuse,” “virtual sprawl,” and “cloudenomics” (please take a deep breath; don’t let them wind you up).
Another longstanding “buzzphrase” (no, I didn’t make this word up) is that I&O organizations need to “run IT as a business.” I imagine that most of us have used it (I plead “guilty,” milord), at least in conversation, but do we really know what it means, or what I&O needs to do to achieve a business-like state?
First, the “run IT as a business” mantra is wrong, or at least partially wrong. I&O organizations must indeed adopt practices to run as a business function, but not necessarily as a full business in themselves.
One of the most pressing areas in need of attention is the ITIL-espoused discipline of IT financial management. Business success stems not only from having a great product (or service) coupled with great customer service; it also requires an understanding of the cost of provision, the cost drivers, and the margins involved. Lacking this understanding only exposes I&O’s shortfall in business acumen and capabilities, and makes it difficult to compete in the new IT delivery landscape.
I was at Marc Benioff’s subversive non-keynote at Oracle OpenWorld yesterday, and while it was fun to see all the hoopla (employees holding posters of Benioff cast as a dissident, shouting, honking, donuts, cocktails), it was also cool to have the "I was there when" moment as Oracle’s future biggest competitor draws the lines of battle that are likely to shape the enterprise software industry for the next decade. Truth be told, I think that Benioff was a bit too caught up in the fuss and the cloudwash to make me think he’s a mature and credible competitor yet, but he is clearly getting his gumption up.
Benioff pointed a finger at Exadata as a new mainframe, locking customers into proprietary hardware and forcing them to buy overpriced gear from an industry monolith. He described his own company as “open,” allowing customers to move to any platform or any cloud; “philanthropic,” donating $24 million in grants and using its OOW booth as an engine for giving; and “social,” leveraging its internal social media engine, Chatter, to coordinate the rapid mobilization that delivered the non-keynote within 16 hours of being cancelled by Larry. And all of that is cool, but I think he skewed toward tech industry buzz rather than focusing on the real competitive forces between Oracle and Salesforce.
OK, out of respect for your time, now that I’ve caught you with a title that promises some drama, I’ll cut to the chase and tell you that I definitely lean toward the former. Having spent a couple of days here at Oracle OpenWorld poking around the various flavors of Engineered Systems, including the established Exadata and Exalogic along with the new SPARC SuperCluster (all of a week old) and the newly announced Exalytics system for big data analytics, I am pretty convinced that they represent an intelligent and modular set of optimized platforms for specific workloads. In addition to being modular, they give me the strong impression of a “composable” architecture: the various elements of processing nodes, Oracle storage nodes, ZFS file nodes, and other components can clearly be recombined over time as customer requirements dictate, either as standard products or as custom configurations.
After three days of cloudwashing, cloud-in-a-box and erector set private cloud musings at Oracle OpenWorld in San Francisco this week, CEO Larry Ellison chose day four to take the wraps off a legitimate move into cloud computing.
Oracle Public Cloud is the unification of the company's long-struggling software-as-a-service (SaaS) portfolio with its Fusion applications transformation, all atop Oracle VM and Sun hardware. While Ellison spent much of his keynote taking pot shots at his former sales executive and now SaaS nemesis, Salesforce CEO Marc Benioff, the actual solution being delivered is more of a direct competitor to Amazon Web Services than to Force.com. The strongest evidence is in Oracle's stance on multitenancy. Ellison adamantly shunned a tenancy model built on shared data stores and application models, which are key to the profitability of Salesforce.com (and most true SaaS and PaaS solutions), stating that security comes only through application and database isolation, with tenancy enforced at the hypervisor. Oracle will no doubt use its own Xen-based hypervisor, Oracle VM, rather than the enterprise-standard VMware vSphere, but converting images between these platforms is quickly proving trivial.
Well, actually, I meant mobs of flash, but I couldn’t resist the wordplay. Although, come to think of it, flash mobs might be the right way to describe the density of flash memory system vendors here at Oracle OpenWorld. Walking around the exhibits, it seems as if every other booth is occupied by someone selling flash memory systems to accelerate Oracle’s database, all of them claiming to be: 1) faster than anything that Oracle, which already integrates flash into its systems, offers; and 2) faster and/or cheaper than the other flash vendor two booths down the aisle.
All joking aside, the proliferation of flash memory suppliers is pretty amazing, although a venue devoted to the world’s most popular database is exactly where you would expect to find them. In one sense flash is nothing new: RAM disks, arrays of RAM configured to mimic a disk, have been around since the 1970s, but they were small and really expensive, and never got on a cost and volume curve that could drive them into a mass-market product. Flash, benefiting not only from the inherent economies of semiconductor technology but also from the volume drivers of the consumer market, has made the transition to a cost that makes it a reasonable alternative for some use cases, with database acceleration probably the most compelling. This explains why the flash vendors have gathered in San Francisco this week to tout their wares: this is the richest collection of potential customers they will ever see in one place.
My colleague James Staten recently wrote about AutoDesk Cloud as an exemplar of the move toward App Internet, the concept of implementing applications that are distributed between local and cloud resources in a fashion that is transparent to the user except for the improved experience. His analysis is 100% correct, and AutoDesk Cloud represents a major leap in CAD functionality, intelligently offloading the inherently parallel and intensive rendering tasks and facilitating some aspects of collaboration.
But (and there’s always a “but”), having been involved in graphics technology on and off since the '80s, I would say that “cloud” implementation of rendering and analysis has been evolving incrementally for decades. There are hundreds of well-documented distributed environments in which desktops fluidly shipped their renderings to local rendering and analysis farms, which would today be called private clouds, with the results shipped back to the creating workstations. This work was largely developed and paid for by universities and by media companies as part of major movie production projects. Some installations were of significant scale, such as “Massive,” the rendering and animation farm for “Lord of the Rings,” which had approximately 1,500 compute nodes, and a subsequent installation at Weta that may have up to 7,000 nodes. In my admittedly arguable opinion, AutoDesk Cloud, while representing a major jump in capabilities by making the cloud accessible to a huge number of users, is not a major architectural innovation but rather an incremental step.
Product strategists in many industries, from CPG to consumer electronics to financial services, share a challenge with their marketing colleagues: how to leverage the power of brand. Product strategists have a number of strategic tools in their toolboxes for differentiating their products from competitors’ offerings: features (a different taste, a new technical capability, or a higher interest rate, for instance), channel, price, or brand, or some combination of these factors. For the moment, let’s think about brand, because some product strategists design and build their products based largely on the promise implied by their brand name.
Forrester’s new research report, which leverages a multi-year analysis of Consumer Technographics® data, shows that while brand is important, brand loyalty (defined as the propensity to repurchase a brand) has been waning. The new report, titled “Brand Loyalty Isn’t Enough For Products Anymore,” reveals that:
· Brand loyalty is on the decline. Our data shows that brand loyalty dropped in the U.S. from 2006 to 2010. One reason? The Great Recession. Another? The strength of brands themselves: competing brands in the marketplace entice consumers to try new brands.
In the good old days, computer industry trade shows were larger-than-life events: booths with barkers and actors, ice cream and espresso bars, games in the booth, magic acts, and surging crowds gawking at technology. In recent years, they have for the most part become sad shadows of their former selves. The great SHOWS are gone, replaced by button-down vertical and regional events where you are lucky to get a pen or a miniature candy bar for your troubles.
Enter Oracle OpenWorld. Mix 45,000 people, hundreds of exhibitors, and one of the world’s largest software and systems companies looking to make an impression, and you have the new generation of technology extravaganza. The scale is extravagant: the event takes up the entire Moscone Center complex (North, South, and West) along with a couple of hotel venues, closes off a block of a major San Francisco street for a week, and throws a little evening party for 20,000 or 30,000 people.
But mixed with the hoopla (wheel-of-fortune giveaways that had hundreds of people snaking around the already crowded exhibition floor in serpentine lines, mini golf and whack-a-mole games in the exhibit booths, and the aforementioned espresso and ice cream stands) there was genuine content and the public face of some significant trends. So far, after 24 hours, some major messages come through loud and clear: