Mass Customization Is (Finally) The Future Of Products

JP Gownder

Mass customization has been the “next big thing” in product strategy for a very long time. Theorists have been talking about it as the future of products since at least 1970, when Alvin Toffler presaged the concept. Important books from 1992 and 2000 further promoted the idea that mass customization was the future of products.

Yet for years, mass customization has disappointed. Some failures were due to execution: Levi Strauss, which sold customized jeans from 1993 to 2003, never offered consumers choice over a key product feature – color. In other cases, changing market conditions undermined the business model: Dell, once the most prominent practitioner of mass customization, failed spectacularly, reporting that the model had become “too complex and costly.”

Overall, the “next big thing” has remained an elusive strategy in the real world, keeping product strategists away in droves.

Read more

Egenera Lands HP As A Partner – A Win For Both

Richard Fichera

Egenera, arguably THE pioneer in what the industry is now calling converged infrastructure, has had a hard life. Early to market in 2000 with a solution that was approximately a decade ahead of its time, it offered an elegant abstraction of physical servers into what chief architect Maxim Smith described as “fungible and anonymous” resources connected by software-defined virtual networks. Its interface was easy to use, allowing the definition of virtualized networks, NICs, and servers with optional failover and pools of spare resources, with a fluidity that has taken the rest of the industry almost 10 years to catch up to.

Unfortunately, this elegant presentation was chained to a completely proprietary hardware architecture, which encumbered the economics of x86 servers with an obsolete network fabric, an expensive system controller, and a custom physical architecture (but it was the first vendor to include blue lights on its servers). The power of the PanManager software was enough to keep the company alive, but not enough to overcome the economics of the solution and put it on a fast revenue path, especially as emerging competitors began to offer partial equivalents at lower costs. The company is privately held and does not disclose revenues, but Forrester estimates annual revenue at still less than $100M.
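To make the “fungible and anonymous” idea described above concrete, here is a minimal sketch of logical servers being provisioned onto, and failed over across, a pool of interchangeable physical nodes. The class and method names are hypothetical illustrations of the concept, not Egenera's PanManager API:

```python
# Hypothetical illustration of pooled, "fungible and anonymous" server resources.
# None of these names come from Egenera's PanManager; they only sketch the idea that
# a logical server is defined by its network profile, while the physical node under
# it is interchangeable and can be replaced on failure.

from dataclasses import dataclass

@dataclass
class LogicalServer:
    name: str
    virtual_nics: list          # virtual NICs mapped onto software-defined networks
    failover: bool = True       # optional failover to a spare physical node

class ResourcePool:
    def __init__(self, physical_nodes):
        self.spares = list(physical_nodes)   # anonymous, interchangeable hardware
        self.assignments = {}                # logical server name -> physical node

    def provision(self, server: LogicalServer):
        node = self.spares.pop()             # any node will do: they are fungible
        self.assignments[server.name] = node
        return node

    def fail_over(self, server: LogicalServer):
        if server.failover and self.spares:
            self.assignments[server.name] = self.spares.pop()
        return self.assignments.get(server.name)

pool = ResourcePool(["node-1", "node-2", "node-3"])
web = LogicalServer("web-frontend", virtual_nics=["vnet-app", "vnet-mgmt"])
pool.provision(web)      # lands on whichever node was free
pool.fail_over(web)      # the same logical server is re-homed onto a spare
```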

In approximately 2006, Egenera began the process of converting its product to a pure software offering capable of running on commodity server hardware and standard Ethernet switches. In subsequent years it has announced distribution arrangements with Fujitsu (an existing partner for its earlier products) and an OEM partnership with Dell, which apparently was not successful, since Dell subsequently purchased Scalent, an emerging software competitor. Despite this, Egenera claims that its software business is growing and has been a factor in the company’s first full year of profitability.

Read more

Please Join Our Landmark ITSM Study

Glenn O'Donnell

Shortly before the IT Service Management Forum's annual Fusion conference in 2009, Forrester and the US chapter of the IT Service Management Forum (itSMF) put the finishing touches on a partnership agreement between the two entities. There are many aspects of this partnership, including Forrester analysts speaking at numerous itSMF events throughout the year. (I had the pleasure of speaking to and spending the day with the Washington, DC area's National Capital LIG just today!) The truly exciting aspect of the partnership, however, is our intent to perform some joint research on the ITSM movement. By combining Forrester's venerable research and analysis capabilities with the wide and diverse membership of itSMF, our hope is to gain unprecedented insight into ITSM trends and sentiments. The beneficiaries will be everyone in the broad ITSM community! What a concept!

Sound the trumpets!

It took us a while to get everything lined up, but I'm delighted to announce that the research study is now live!

The study is open to all itSMF USA members, so we expect a large sample size for the research. That said, we encourage everyone to participate. The results will be tabulated by Forrester, who will perform the analysis and produce the research report on the findings. This report will be free to all itSMF USA members and Forrester clients. If you are neither, that's no problem. If you participate, you are eligible for a free copy, regardless of your affiliation. This is our way of thanking you for your help! Naturally, you will have to provide some contact information so we can send you your copy when it is ready.

Read more

The Empire Strikes Back – Intel Reveals An Effective Low-Power And Micro Server Strategy

Richard Fichera

A lot has been written about potential threats to Intel’s low-power server hegemony, including discussions of threats not only from its perennial minority rival AMD but also from emerging non-x86 technologies such as ARM servers. While these are real threats, with the potential to disrupt Intel’s position in the low-power and small-form-factor server segment if left unanswered, Intel’s management has not been asleep at the wheel. As part of the rollout of the new Sandy Bridge architecture, Intel recently disclosed its platform strategy for what it is defining as “Micro Servers”: small single-socket servers with shared power and cooling to improve density beyond the generally accepted dividing line of one server per rack unit (RU) that separates “standard density” from “high density.” While I think that Intel’s definition is a bit myopic, mostly serving to attach a label to a well-established category, it is a useful tool for segmenting low-end servers and talking about the relevant workloads.

Intel’s strategy revolves around introducing successive generations of its Sandy Bridge and future architectures, embodied as Low Power (LP) and Ultra Low Power (ULP) products, with promises of up to 2.2X the performance per watt and 30% less actual power than equivalent previous-generation x86 servers, as outlined in a comparison chart from Intel.
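Taken at face value, those two claims compound. Here is a rough back-of-the-envelope sketch; the baseline wattage and performance are placeholders I chose for illustration, and only the 2.2X and 30% ratios come from Intel:

```python
# Back-of-the-envelope reading of Intel's micro server claims (illustrative only).
# The baseline figures are arbitrary placeholders; only the ratios ("up to 2.2X
# performance per watt", "30% less actual power") come from Intel's statements.

baseline_power_w = 100.0                # hypothetical previous-generation server power
baseline_perf = 1.0                     # normalize previous-generation performance to 1.0

new_power_w = baseline_power_w * 0.70                         # 30% less actual power
new_perf_per_watt = (baseline_perf / baseline_power_w) * 2.2  # up to 2.2X perf/watt
new_perf = new_perf_per_watt * new_power_w                    # implied socket performance

print(f"Implied performance vs. baseline: {new_perf / baseline_perf:.2f}x")  # ~1.54x
```

If both claims hold simultaneously, a micro server could deliver roughly half again the work of its predecessor while drawing 30% less power, which is the economic argument for the category.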

So what does this mean for Infrastructure & Operations professionals interested in serving the target loads for micro servers, such as:

  • Basic content delivery and web servers
  • Low-end dedicated server hosting
  • Email and basic SaaS delivery
Read more

Dell Delivers vStart – Ready To Run Virtual Infrastructure

Richard Fichera

Another Tier-1 Converged Infrastructure Option

The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated environment for VMware. Best thought of as a competitor to VCE, the integrated VMware, Cisco and EMC virtualization stack, vStart combines:

  • Dell PowerEdge R610 and R710 rack servers
  • Dell EqualLogic PS6000XV storage
  • Dell PowerConnect Ethernet switches
  • Preinstalled VMware (trial) software & Dell management extensions
  • Dell factory and onsite services
Read more

Facebook Opens New Data Center – And Shares Its Technology

Richard Fichera

A Peek Behind The Wizard's Curtain

The world of hyperscale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represented critical competitive IP. In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend by simultaneously disclosing a wealth of information about the design of its new data center in rural Oregon and contributing much of the IP involving racks, servers, and power architecture to an open forum, in the hopes of generating an ecosystem of suppliers to provide future equipment to itself and other growing web companies.

The Data Center

By approaching the design of the data center as an integrated combination of servers for known workloads and the facility itself, Facebook has broken some new ground in data center architecture.

At a high level, a traditional enterprise DC has a utility transformer that feeds a centralized UPS, with power then distributed through multiple levels of PDUs to the equipment racks. This is a reliable and flexible architecture, and one that has proven its worth in generations of commercial data centers. Unfortunately, in exchange for this flexibility and protection, it extracts a penalty of 6% to 7% of the incoming power before it ever reaches the IT equipment.
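To put that penalty in concrete terms, a minimal sketch; the 1 MW utility feed is a hypothetical figure I chose for illustration, and only the 6% to 7% loss range comes from the description above:

```python
# Illustrative sketch of the traditional power-distribution penalty described above.
# The 1 MW utility feed is hypothetical; the 6%-7% loss range is from the post.

utility_feed_kw = 1000.0                  # hypothetical power entering the facility (1 MW)

for loss in (0.06, 0.07):                 # penalty from the UPS and PDU stages
    delivered_kw = utility_feed_kw * (1 - loss)
    print(f"{loss:.0%} distribution loss: {delivered_kw:.0f} kW reaches the IT equipment")
# At a 7% loss, 70 kW of a 1 MW feed is dissipated before any server sees it.
```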

Read more

Intel Ups The Ante At The High End With New E7 CPUs

Richard Fichera

Bigger, Better, Faster Xeon CPUs

Intel today publicly announced its anticipated “Westmere EX” high-end server CPU as the E7, now part of a new family nomenclature encompassing entry (E3), midrange (E5), and high-end (E7) server CPUs. At first glance it certainly looks like it delivers on the promise of the Westmere architecture, with enhancements that will appeal to buyers of high-end x86 systems.

The E7 in a nutshell:

  • 32 nm CPU with up to 10 cores, each with Hyper-Threading, for up to 20 threads per socket.
  • Intel claims that system-level performance will be up to 40% higher than the prior-generation 8-core Nehalem EX. Notice that the per-core performance improvement is modest (see the quick arithmetic after this list), although Intel does offer an 8-core SKU with a slightly higher clock rate for those desiring ultimate performance per thread.
  • Improvements in security with Intel Advanced Encryption Standard New Instructions (AES-NI) and Intel Trusted Execution Technology (Intel TXT).
  • Major improvements in power management, incorporating the power management capabilities of the Xeon 5600 CPUs: more aggressive P-states, improved idle power operation, and the ability to reduce individual core power settings separately depending on workload, although to what extent this is supported on systems that do not incorporate Intel’s Node Manager software is not clear.
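The “modest” per-core comment follows directly from the figures Intel cites; a quick sketch of the arithmetic:

```python
# Why the per-core improvement is modest, using only the figures cited above.

nehalem_ex_cores = 8        # prior-generation Nehalem EX
e7_cores = 10               # new Westmere EX / E7
system_level_gain = 1.40    # Intel's "up to 40% higher" system-level claim

per_core_gain = system_level_gain / (e7_cores / nehalem_ex_cores)
print(f"Implied per-core improvement: {per_core_gain - 1:.0%}")   # ~12%
# Most of the 40% system-level gain comes from the two extra cores,
# not from faster individual cores.
```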
Read more

What Do Spark Plugs And WLAN Solutions Have In Common?

Andre Kindness

It’s not the most daring and cutting-edge prediction to say 2011 will be Wi-Fi’s second coming. However, you might be caught off guard when I tell you not to worry about a vendor’s WLAN architecture; your business needs will flush out the right one. Despite the initial hype seven years ago that Wi-Fi was going to be the new edge, it’s been the second choice for most users connecting at work — but that will change. A tidal wave of wireless devices will be crashing through the enterprise front door very soon. Just look at the carriers scrambling to build out their infrastructure — there’s no shortage of stories about AT&T and its build-out of Wi-Fi in metropolitan areas. And users have fused their work and personal phones and are looking for coverage beyond their carrier data plans.

The time to start was yesterday, and you have a ton of work to do. Your edge will be servicing:

  • Employees with corporate netbooks and their own smartphones and/or tablets who watch training videos on YouTube from companies like VMware.
  • Devices like torque tools, temperature sensors in exothermic chambers, ambient light sensors, and myriad others.
  • Contractors with their own laptops, netbooks, tablets, and/or smartphones who need access to specific company applications.
  • Guests like account executives entering customer information into their CRM programs.
  • All the things being developed at venture-capital-backed incubators.
Read more

Cisco Buys A Credible Automation Entry Point With NewScale

Glenn O'Donnell

Cisco announced today its intent to acquire NewScale, a small but well-respected automation software vendor. The financial terms were not disclosed, but it is a small deal in terms of money spent. It is big in the sense that Cisco needed the kind of capabilities NewScale offers, and NewScale has proven to be one of the most innovative and visible players in that market segment.

The market segment in question is what has been described as “the tip of the iceberg” for the advanced automation suites needed to create and operate cloud computing services. The “tip” refers to the part of the overall suite that is exposed to customers, while the majority of the “magic” of cloud automation is hidden from view – as it should be. The main capabilities offered by NewScale deal with building and managing the service catalog and providing a self-service front end that allows cloud consumers to request their own services based on this catalog of available services. Forrester has been bullish on these capabilities because they are the customer-facing side of cloud – the most important aspect – whereas most of the cloud focus has been directed at the “back end” technologies such as virtual server deployment and workload migration. These are certainly important, but a cloud is not a cloud unless the consumers of those services can trigger their deployment on their own. This is the true power of NewScale, one of the best in this sub-segment.
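For a concrete picture of what that customer-facing layer does, here is a minimal sketch of a service catalog with a self-service request on top of it. The catalog entries, prices, and function names are hypothetical illustrations of the concept, not NewScale’s actual product or API:

```python
# Hypothetical sketch of a cloud service catalog plus a self-service front end.
# Entries, prices, and names are illustrative only; this is not NewScale's API.

SERVICE_CATALOG = {
    "small-web-vm":   {"vcpus": 1, "ram_gb": 2,  "monthly_cost": 40},
    "dev-lamp-stack": {"vcpus": 2, "ram_gb": 4,  "monthly_cost": 95},
    "test-db-server": {"vcpus": 4, "ram_gb": 16, "monthly_cost": 240},
}

def request_service(user: str, offering: str) -> dict:
    """Self-service request: the consumer picks from the published catalog,
    and the hidden back-end automation (the rest of the 'iceberg') is triggered."""
    if offering not in SERVICE_CATALOG:
        raise ValueError(f"{offering!r} is not in the service catalog")
    spec = SERVICE_CATALOG[offering]
    # In a real suite this would hand off to provisioning and orchestration engines;
    # here it simply returns a ticket describing what would be deployed.
    return {"requested_by": user, "offering": offering, **spec, "status": "provisioning"}

print(request_service("jdoe", "small-web-vm"))
```

The point of the sketch is the division of labor: the catalog and the request path are all the consumer ever sees, while the provisioning engines they trigger stay hidden, which is exactly the split described above.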

Read more

The Two Words You Need To Know To Turn On Cloud Economics

James Staten

Everyone understands that cloud computing provides pay-per-use access to resources and the ability to elastically scale an application up as its traffic increases. Those are the values that turn on cloud economics, but how do you turn cloud economics to your advantage?
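As a simple illustration of how those two levers change the math, consider the sketch below; every price and traffic figure is a hypothetical number I made up for illustration, not a figure from the keynote:

```python
# Hypothetical comparison of fixed peak provisioning vs. pay-per-use elastic capacity.
# Every figure here is made up for illustration; the point is only the shape of the math.

hourly_rate = 0.50                     # hypothetical cost per server-hour
hours_per_month = 24 * 30

# Fixed model: provision 10 servers for peak traffic and run them all month.
fixed_cost = 10 * hours_per_month * hourly_rate

# Elastic model: run 2 servers off-peak, scale to 10 for 4 peak hours per day.
peak_hours = 4 * 30
offpeak_hours = hours_per_month - peak_hours
elastic_cost = (10 * peak_hours + 2 * offpeak_hours) * hourly_rate

print(f"Provisioned for peak: ${fixed_cost:,.0f}/month")    # $3,600
print(f"Elastic pay-per-use:  ${elastic_cost:,.0f}/month")  # $1,200
```

Pay per use means you stop paying when demand falls, and elastic scale means you never have to buy for the peak; together they are what make the economics interesting.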

That was the topic of my keynote session at the Cloud Connect 2011 event in Santa Clara, Calif., earlier this month. The video of this keynote can now be viewed on the event website at http://tv.cloudconnectevent.com/ (you will need to register, free, on the site). In this short, six-minute keynote you will get the answer to this question. I also encourage you to view many of the other keynotes from the event, as this was the first cloud computing conference I have attended that finally moved beyond Cloud 101 content and provided a ton of great material on how to really take advantage of cloud computing. We still have a long way to go, but this is a great step forward for anyone still learning about Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) solutions and how they can empower your organization.

If you still aren't experimenting with these platforms, get going. While they won't transform the world, they do give you new deployment options that can accelerate time-to-market, increase deployment flexibility, and prepare you for the new economic model they are bringing to many early adopters today. 

Read more