Mass customization has been the “next big thing” in product strategy for a very long time. Theorists have been talking about it as the future of products since at least 1970, when Alvin Toffler presaged the concept. Important books from 1992 and 2000 further promoted the idea that mass customization was the future of products.
Yet for years, mass customization has disappointed. Some failures were due to execution: Levi Strauss, which sold customized jeans from 1993 to 2003, never offered consumers a choice over a key product feature – color. In other cases, changing market conditions undermined the business model: Dell, once the most prominent practitioner of mass customization, failed spectacularly, reporting that the model had become “too complex and costly.”
Overall, the “next big thing” has remained an elusive strategy in the real world, keeping product strategists away in droves.
Egenera, arguably THE pioneer in what the industry is now calling converged infrastructure, has had a hard life. Early to market in 2000 with a solution approximately a decade ahead of its time, it offered an elegant abstraction of physical servers into what chief architect Maxim Smith described as “fungible and anonymous” resources connected by software-defined virtual networks. Its interface was easy to use, allowing the definition of virtualized networks, NICs, and servers with optional failover and pools of spare resources, with a fluidity that has taken the rest of the industry almost 10 years to catch up to. Unfortunately, this elegant presentation was chained to a completely proprietary hardware architecture, which encumbered the economics of x86 servers with an obsolete network fabric, an expensive system controller, and a custom physical architecture (though Egenera was the first vendor to include blue lights on its servers). The power of the PanManager software was enough to keep the company alive, but not enough to overcome the economics of the solution and put it on a fast revenue path, especially as emerging competitors began to offer partial equivalents at lower cost. The company is privately held and does not disclose revenues, but Forrester estimates it still generates less than $100M in annual revenue.
In approximately 2006, Egenera began converting its product to a pure software offering capable of running on commodity server hardware and standard Ethernet switches. In subsequent years it has announced distribution arrangements with Fujitsu (an existing partner for its earlier products) and an OEM partnership with Dell, which apparently was not successful, since Dell subsequently purchased Scalent, an emerging software competitor. Despite this, Egenera claims that its software business is growing and has been a factor in the company’s first full year of profitability.
They say "content is king." But "context is kingier" when it comes to designing great smartphone and tablet mobile apps. Don't make the mistake of thinking that mobile app design is just about a smaller screen size or choosing the right development technology. Content and context are both important to designing great user experiences, but mobile amplifies context along five critical dimensions: location, locomotion, immediacy, intimacy, and device. Understand each dimension of Forrester's mobile context to design mobile apps that will make your users say "I love this app!"
Forrester LLIID: Location, Locomotion, Immediacy, Intimacy, And Device
Location. People use apps in an unlimited number of locations. And not all places are the same. A user may be in a quiet movie theater, at home in the kitchen, on a train, or in the White Mountain National Forest. Contrast this with desktop computers, stuck in places such as an office cubicle, home office, or kitchen. Laptops provide some mobility but are larger and less able to provide the immediate access of instant-on mobile devices such as smartphones, eReaders, and tablets. Location is a key dimension of context, driving different needs for users depending on where they are. Fortunately, GPS-equipped smartphones can pinpoint precise location and use a geodatabase such as Google Maps to turn raw coordinates into meaningful places.
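Once an app has the device's coordinates, acting on location is straightforward. As a minimal sketch (the coordinates and distance threshold below are illustrative examples, not from any real app), an app might decide whether the user is near a point of interest using the haversine great-circle distance:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Illustrative check: is the user within 1.5 km of a store?
user = (40.7580, -73.9855)   # example coordinates (Times Square)
store = (40.7484, -73.9857)  # example coordinates (Empire State Building)
nearby = haversine_km(*user, *store) < 1.5
```

A location-aware app would feed the device's GPS fix into a function like this to adapt its behavior – surfacing a nearby branch, muting notifications in a theater, and so on.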
Shortly before the IT Service Management Forum's annual Fusion conference in 2009, Forrester and the US chapter of the IT Service Management Forum (itSMF USA) put the finishing touches on a partnership agreement between the two organizations. There are many aspects of this partnership, including Forrester analysts speaking at numerous itSMF events throughout the year. (I had the pleasure of speaking to and spending the day with the Washington, DC area's National Capital LIG just today!) The truly exciting aspect of the partnership, however, is our intent to perform joint research on the ITSM movement. By combining Forrester's research and analysis capabilities with the wide and diverse membership of itSMF, our hope is to gain unprecedented insight into ITSM trends and sentiment. The beneficiaries will be everyone in the broad ITSM community! What a concept!
The study is open to all itSMF USA members, so we expect a large sample size, and we encourage everyone to participate. Forrester will tabulate the results, perform the analysis, and produce the research report on the findings. This report will be free to all itSMF USA members and Forrester clients. If you are neither, that's no problem: if you participate, you are eligible for a free copy, regardless of your affiliation. This is our way of thanking you for your help! Naturally, you will have to provide some contact information so we can send you your copy when it is ready.
Following Charles Darwin's statement “To change is difficult. Not to change is fatal,” application development professionals always seem to be in the process of change. Changing technology, adopting new practices and methods, and introducing new organizations are just a few of the things that most application delivery shops are changing at any time. But how is that change being undertaken? How do these change programs get managed? And more importantly, how does all this change fit together? During 2010, Diego Lo Giudice and I asked these very questions and recently published a report describing our findings. This post summarizes those findings at a high level:
Where does change come from?
Having interviewed 75 companies that were in the midst of change, we found that, perhaps not surprisingly, change was being driven from a multitude of sources. Bottom-up change was coming from practitioners. Technology Populism seems to be rampant in most companies, with individuals bringing in their favorite technology or practice. In parallel with this grassroots adoption model is top-down, executive-driven change. The majority of organizations we interviewed had executive-driven change programs around process, organization, or technology. Rarely did we see the bottom-up change being integrated with the funded, top-down approaches. The result was often confusion, with ideas clashing and change initiatives working against each other. At best, practitioners ignored the top-down ideas; at worst, they blatantly undermined them simply because they were different from their own.
Both Agile and Lean have an ethos that, at least on paper, acknowledges the noble failure. "Fail fast" is part of the Agile credo. Although it sounds as though it contradicts the "fail fast" approach, Lean's admonition to delay decisions for as long as possible actually complements it. The first draft of anything, from automotive design to software architecture to student papers, always contains elements that could be improved or that are just flat-out wrong. Practices that sit beneath the banner of Agile or Lean, such as set-based development, provide further ways to make mistakes and overcome them.
To a large extent, these practices deal with the easier varieties of failure. Prototyping a feature quickly, so that you can invite feedback when you actually have enough time to respond to it, is an extremely valuable technique for lowering the risk that you build something the wrong way. You need a different approach to identifying the features that you should not build, period.
I was recently taken to task in the twittersphere (I post as @reichmanIT) by @Knieriemen, a VP at virtualization and storage reseller Chi Corporation and host of the Infosmack podcast. Mr. Knieriemen took exception to statements I made to the press about storage single sourcing, related to a recent document about storage choices for virtual server environments, including the following:
RT @markwojtasiak: Forrester says "single source when possible" <-what an incredibly dumb thing for @reichmanIT to say
He followed up his comments with the following clarifications:
@reichmanIT In context to managing data center costs, single sourcing is probably the worst suggestion I've heard from an analyst
@reichmanIT In a virtual server environment, storage is a commodity and should be purchased as such to control costs
I don’t know Mr. Knieriemen personally, and I must admit that I was taken aback by his blunt approach, but I’m from New York and have a thick skin, so I can look past that. The reason I bring this up on Forrester’s blog is solely to examine the actual points of view in the exchange. The crux of the argument is whether storage is commoditized, and in my opinion, it’s not (yet, anyway). There are three reasons why:
A lot has been written about potential threats to Intel’s low-power server hegemony, including discussions of threats not only from its perennial minority rival AMD but also from emerging non-x86 technologies such as ARM servers. While these are real threats, with potential to disrupt Intel’s position in the low-power and small-form-factor server segment if left unanswered, Intel’s management has not been asleep at the wheel. As part of the rollout of the new Sandy Bridge architecture, Intel recently disclosed its platform strategy for what it is defining as “Micro Servers” – small single-socket servers with shared power and cooling to improve density beyond the generally accepted dividing line of one server per rack unit (RU) that separates “standard density” from “high density.” While I think that Intel’s definition is a bit myopic, mostly serving to attach a label to a well-established category, it is a useful tool for segmenting low-end servers and talking about the relevant workloads.
Intel’s strategy revolves around introducing successive generations of its Sandy Bridge and future architectures as Low Power (LP) and Ultra Low Power (ULP) products, promising up to 2.2x the performance per watt and 30% less actual power than equivalent previous-generation x86 servers, as outlined in the following chart from Intel:
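Those two headline claims compound. As a quick back-of-the-envelope sketch (using only Intel's stated "up to" figures, not measured data), the implied gain in absolute performance is the product of the two ratios:

```python
# Intel's headline claims for the new generation vs. the previous one:
perf_per_watt_gain = 2.2   # up to 2.2x the performance per watt
power_ratio = 0.70         # 30% less actual power

# Absolute performance = (performance / watt) * watts, so the ratios multiply:
absolute_perf_gain = perf_per_watt_gain * power_ratio
print(f"Implied absolute performance: {absolute_perf_gain:.2f}x")  # ~1.54x
```

In other words, if both best-case numbers held simultaneously, a new-generation micro server would deliver roughly 1.5x the work of its predecessor while drawing 30% less power.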
So what does this mean for Infrastructure & Operations professionals interested in serving the target loads for micro servers, such as:
Twenty-three years ago, I arrived with a backpack and my best friend. Last week I went back. The city was as welcoming this time as it was the last, although the circumstances of my visit – and certainly my accommodations – were vastly different.
Pamplona is a city of about 200,000 inhabitants in Navarra, in the North of Spain. It is best known for the running of the bulls or, as it is known locally, the Festival of San Fermin, which many of us were first introduced to in Ernest Hemingway's The Sun Also Rises.
The bulls were not what brought me to the region this time (although they were the principal reason for my first visit). Last week I participated in e-NATECH, a tech industry forum organized by ATANA, an association of local ICT companies in Navarra. From what I saw in both the audience and across the city, Pamplona is clearly a front-runner in terms of ICT (and bulls as I recall from my first visit).
The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated environment for VMware. Best thought of as a competitor to VCE, the integrated VMware, Cisco and EMC virtualization stack, vStart combines: