A funny thing happened while we in IT were focused on ITIL, data center consolidation and standardization. The business went shopping for better technology solutions. We’ve been their go-to department for technology since the mainframe days and have been doing what they asked. When they wanted higher SLAs we invested in high availability solutions. When they asked for greater flexibility we empowered them with client-server, application servers and now virtual machines. All the while they have relentlessly badgered us for lower costs and greater efficiencies. And we’ve given it to them. And until recently, they seemed satisfied. Or so we thought.
Sure, we’ve tolerated the occasional SaaS application here and there. We’ve let them bring in Macs (just not too many of them) and we’ve even supported their pesky smartphones. But each time, when the technical support burden grew too great, they came running back to us for help.
I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion) (insert boo and hiss). Vendor A tries to evoke sympathy for its plight: it can’t offer the latest features because it lacks the villain’s powers, having chosen the morally correct path of standards-based functions — a path filled with prolonged and undeserved suffering. What poppycock! There is no such thing as good and evil in networking. If their positions were reversed, vendor A would do exactly what its competitor is doing. Every vendor has some type of special sauce to differentiate itself. It’s business, plain and simple; networking fundamentally needs both proprietary and standards-based features. There is, however, a time and place for each.
With that in mind, I want to let you know that I’m a big proponent of standards-based networking. Open standards expand your choices, helping you reduce risk, implement durable solutions, gain flexibility, and benefit from higher quality. Ninety-plus percent of networking should leverage standard protocols, but to get to that point, features need to go through three stages:
Last Friday (September 17), I published a case study of Microsoft's Windows and Office Communicator (now Microsoft Lync) teams' use of "productivity games." What are productivity games? Put simply, they are a series of games produced by a small group of defect testers to encourage rank-and-file Microsoft employees to put software through its paces before it is released to the public. As many technology product managers can attest, getting employees of your company to take time away from their tasks to run a program in development and report any problems can be a Sisyphean effort: Bug checking doesn't have the allure of being an exciting, sexy job -- but it happens to be necessary. It may come as a surprise, but since 2006, Microsoft has used five games to look for errors in Windows Vista, Windows 7, and Office Communicator; a sixth game -- Communicate Hope -- is currently in the field to test Microsoft Lync. Why so many games, you ask? Well, they work.
The most successful game Microsoft has launched to date is called the Language Quality Game. It was designed to get employees who could read languages other than English to check that the thousands of user interface translations Microsoft had made in Windows 7 were accurate. The game produced positive results on two dimensions: 4,500 Microsoft employees played it, and this group reviewed Windows 7 UI translations a total of 500,000 times. Because the game went over so well, iterations of it have been used for Office Communicator and Exchange. And others at Microsoft are looking to use games for other tasks: e.g., a group at Microsoft Office Labs has created a game called Ribbon Hero to encourage people to explore the functionality of the Office 2007 productivity suite.
Everyone’s using the term “sustainability.” And, I’ll admit, I’m a little jaded. But, given that the term is here to stay for a while, let’s take a look at it. What are the primary objectives of “sustainability” initiatives? Are they “green” – with an eye toward protecting the environment by reducing the effects of climate change? Are they economic – cost cutting, increasing efficiency? “Sustain” seems static: maintain the current state. But some are thinking about “sustainability” as a means of generating growth. A few weeks ago, I started an interesting discussion about “operational sustainability” with Rich Lechner, IBM Vice President for Energy and Environment. (I say started because it actually continued this week, and will likely continue further.)
“Sustain to grow” may seem like an oxymoron, but it’s not. First, let’s think about efficiency. What does it mean to be more efficient? Efficiency, to me, is the goal of “doing more with less” – improving the ratio of output to input. You cut inputs, and productivity ratios improve that way. But what if you’ve cut as much as you can, and you still want to do more, to improve those ratios further? How can you grow within the limits of the resources you have? By sustaining resources while increasing productivity or capacity – in whatever terms or measures of capacity you use. This is the objective behind “operational sustainability”: How do you improve operations or processes in order to improve outcomes, within the limits of available resources?
As some of you know, I am hopelessly addicted to golf. I can already hear you asking, “What does golf have to do with marathons, and what do marathons have to do with business processes?” Well, I’m glad you asked. Before becoming a golf addict, I was a runner – running 5Ks, 10Ks, and half-marathons. My goal was to work my way up to a marathon. This is still my goal, but I learned a while ago that you can’t be a serious golfer and also be a serious runner – they both compete for long stretches of time on Saturday mornings (although I did have someone recommend that I combine the two into "marathon golf").
When I was a runner, I quickly learned that how you run a 5K or 10K is different from how you run a half-marathon. It seems obvious now, but when I trained for my first half-marathon, I didn’t realize how critical it was to hydrate all the way through and to change my breathing technique. Ultimately, I found a training program that helped me get ready for my first race, and I ended up crossing the finish line in pretty good time and without killing myself.
Earlier this week, I sat in on a session at Oracle OpenWorld that highlighted the importance of scaling process governance as BPM initiatives expand throughout organizations. The session, titled “Rapid, Successful BPM Adoption,” laid out the key principles of process governance:
1. Establish standards for implementing process improvement projects.
2. Prioritize BPM projects so you work on the most achievable ones first.
3. Clearly define the roles and responsibilities of everyone involved in the BPM project.
4. Put someone in charge with authority to enforce process governance rules.
5. Establish a BPM center of excellence to ensure steps 1-4 are followed.
At Oracle OpenWorld, Oracle released the 18th version of CRM On Demand. The release integrates the Market2Lead acquisition, made in May of this year. It closes the gap between marketing and sales, unifying end-to-end life-cycle management of leads, including their nurturing tracks. Marketing and sales managers can now share KPIs and understand how lead generation and nurturing activities directly affect sales outcomes.
The solution enables the design and execution of multi-touchpoint marketing campaigns. You can create personalized microsites and landing pages. There are robust analytics to measure their effectiveness, as well as progressive profiling capabilities that allow the company to gather more information about a lead at every step of the marketing and sales cycle. Basically, it adds full marketing automation capabilities to the product suite. And it's attractively priced compared to buying seats from a standalone marketing automation vendor to access these capabilities.
My take: It’s a feature hole that had to be plugged, and it's priced well for adoption.
Vendor managers in companies with Oracle applications may have heard a lot of talk about its next-generation applications over the last five years. Well, the news from Oracle’s customer event in San Francisco is that Fusion is almost here. Oracle is extensively demonstrating the product here at the event, early adopter customers are already in the implementation process, and Oracle intends to generally release it in the first quarter of next year.
Oracle hasn’t announced final pricing yet, but Steve Miranda, SVP of Oracle Application Development, confirmed that customers on maintenance will get a 1:1 exchange when they swap the product they own now for the Fusion equivalent. That is good news, although to be fair, my Oracle contacts had indicated this, off the record, all along.
The packaging into SKUs will mimic that of the current product set, to make the swap easier. That is, the price list for HR will look like the PeopleSoft price list, CRM like Siebel, and so on. That makes some sense, but I wish Oracle had taken the opportunity to simplify the pricing so that there are fewer SKUs. For instance, Siebel's price list is over 20 pages long, and there's no clear link between the items in the price list and the functionality you want to use. As a result, some customers buy modules by mistake, while others fail to buy ones they really need. Hopefully, Fusion will provide a clearer audit trail between functionality and SKU.
I attended Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and, of course, Intel itself. Among the many wonders on display — high-end servers, desktops and laptops, and presentations related to everything cloud — my attention was caught by a pair of small wonders: very compact, low-power servers paradoxically targeted at some of the largest hyper-scale web-facing workloads. Despite being nominally aimed at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies for scaling infrastructure to handle high-throughput web workloads. At one end of the spectrum is adherence to an emerging standardized design, using Intel’s reference architectures as a starting point; at the other, a complete refactoring of the constituent parts of a server to maximize performance per watt and physical density.
Last week, I wrote a blog post summarizing the Day 1 opening keynotes at Forrester’s Security Forum. This week, I’d like to recap the Day 2 opening keynotes. The second (and final) day at any event is always a challenge; attendees are tempted to leave early, to stay in their hotel rooms to get some work done, or, if the event is in Vegas, to squeeze in some craps (my favorite) or drop a few coins in a nearby slot. Luckily, we held the event in Boston, and the lobsters have nowhere to run, so most attendees were happy to stick around until the end of the day. Not only did we have great attendance on Day 2, but there was a palpable buzz in the air. The audience asked tough questions, and no one was spared — Forrester analysts, industry guest speakers, and vendors alike. While Day 1 focused on risk and overall strategy, governance, and oversight, Day 2 focused on coming up with the specifics — the specific plans, the specific policies. As Andrew Jaquith stated in his keynote, to provide better data security, “you don’t need more widgets, what you need is a plan.”
Below are some of the highlights from the Day 2 keynotes: