Vendors Must Modify Strategies To Reach New Segments Of Mobile Workers

Michele Pelino

Vendors in the mobility ecosystem are dramatically underestimating the demand for mobility solutions in the corporate arena. Why? Because they are missing demand that will come from two emerging segments of employees: Mobile Wannabes and Mobile Mavericks. When combined, these two worker segments account for 22% of all employees today, but by 2015 they will grow significantly to 42% of all corporate employees. To identify the needs of these mobile workers, Forrester analyzed results from the Forrsights Workforce Employee Survey, Q3 2010, which was fielded to over 5,500 employees in Canada, France, Germany, the UK, and the US and captures their smartphone device usage, purchasing behavior, and mobile application adoption.

Mobile Wannabe employees work in desk jobs at an office and do not get mobile devices from the corporate IT department, but they “want to” use their smartphone devices for work. Today, Mobile Wannabe workers account for 16% of all employees worldwide; however, by 2015, this segment will account for nearly 30% of all employees. Wannabe worker roles include executive assistants, clerical personnel, human resource workers, and customer service representatives. Momentum in this segment is driven by Millennial workers who grew up having easy access to personal computers and mobile phones and often purchase smartphones prior to entering the workforce.

Read more

Why Product Strategists Should Embrace Conjoint Analysis

JP Gownder

Aside from my work with product strategists, I’m also a quant geek. For much of my career, I’ve written surveys (to study both consumers and businesses) to delve deeply into demand-side behaviors, attitudes, and needs. For my first couple of years at Forrester, I actually spent 100% of my time helping clients with custom research projects that employed data and advanced analytics to help drive their business strategies.

These days, I use those quantitative research tools to help product strategists build winning product strategies. I have two favorite analytical approaches. My second favorite is segmentation analysis, an important tool in its own right; my very favorite is conjoint analysis. If you, as a product strategist, don’t currently use conjoint, I’d like you to spend some time learning about it.

Why? Because conjoint analysis should be in every product strategist’s toolkit. Also known as feature tradeoff analysis or discrete choice, conjoint analysis can help you choose the right features for a product, determine which features will drive demand, and model pricing for the product in a very sophisticated way. It’s the gold standard for price elasticity analysis, and it offers extremely actionable advice on product design.  It helps address each of “the four Ps” that inform product strategies.
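
To make the mechanics concrete, here is a minimal sketch of how part-worth utilities can be estimated from full-profile conjoint ratings with ordinary least squares. The attributes, levels, and the single respondent’s ratings are invented for illustration; real studies use many respondents, designed profile sets, and usually choice-based models.

```python
# Minimal full-profile conjoint sketch: estimate part-worth utilities
# from respondent ratings with ordinary least squares.
# All attributes, levels, and ratings below are invented for illustration.
import numpy as np

# Product profiles: (battery_life, price) -- two attributes, two/three levels.
profiles = [
    ("8h", "$199"), ("8h", "$299"), ("8h", "$399"),
    ("12h", "$199"), ("12h", "$299"), ("12h", "$399"),
]
ratings = np.array([6, 4, 2, 9, 7, 4], dtype=float)  # one respondent's 1-10 ratings

# Dummy-code each attribute against a baseline level ("8h", "$199").
def encode(profile):
    battery, price = profile
    return [
        1.0,                               # intercept (baseline profile)
        1.0 if battery == "12h" else 0.0,  # part-worth of longer battery life
        1.0 if price == "$299" else 0.0,   # part-worth of mid price vs. $199
        1.0 if price == "$399" else 0.0,   # part-worth of high price vs. $199
    ]

X = np.array([encode(p) for p in profiles])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
labels = ["baseline (8h, $199)", "12h battery", "$299 price", "$399 price"]
for label, value in zip(labels, coefs):
    print(f"{label:>20}: {value:+.2f}")
# The relative sizes of these part-worths suggest which features drive
# preference and how much utility a given price increase costs.
```

Commercial conjoint tools (and choice-based variants built on logit models) layer far more rigor on top of this, but the feature-versus-price tradeoff logic is the same.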

Read more

ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

Once nothing more than outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient appeals to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now, the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.

The Promised Land

The performance trajectory of what were formerly purely mobile-device processors, notably the ARM Cortex line, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what would it take to become a reality?

Our first item of business is to figure out whether or not it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
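
As a rough sanity check on that notion, the sketch below compares cores per watt using the figures cited above (a roughly 2 W dual-core Cortex A9 versus 35 W and 65 W x86 parts). The 500 W chassis budget is an invented, illustrative number, and the comparison deliberately ignores per-core performance, memory, I/O, and platform overhead.

```python
# Back-of-envelope cores-per-watt comparison using the figures cited above.
# Illustrative only: ignores memory, I/O, fans, and per-core performance gaps.
parts = {
    "ARM Cortex A9 (dual-core, ~1.2 GHz)": {"cores": 2, "watts": 2.0},
    "Low-power x86 (quad-core, 35 W)":     {"cores": 4, "watts": 35.0},
    "Low-power x86 (quad-core, 65 W)":     {"cores": 4, "watts": 65.0},
}

budget_watts = 500.0  # hypothetical per-chassis CPU power budget
for name, p in parts.items():
    cores_per_watt = p["cores"] / p["watts"]
    cores_in_budget = int(budget_watts / p["watts"]) * p["cores"]
    print(f"{name}: {cores_per_watt:.2f} cores/W, "
          f"~{cores_in_budget} cores in a {budget_watts:.0f} W CPU budget")
```

Since an A9 core does far less work per clock than a modern x86 core, cores per watt overstates the ARM advantage; the real question is throughput per watt on actual web workloads.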

Read more

NetApp Acquires Akorri – Moving Up The Virtualization Stack

Richard Fichera

NetApp recently announced that it was acquiring Akorri, a small but highly regarded provider of management solutions for virtualized storage environments. All in all, this is yet another sign of the increasingly strategic importance of virtualized infrastructure and the need for existing players, regardless of how strong their positions are in their respective silos, to acquire additional tools and capabilities for management of an extended virtualized environment.

NetApp, while one of the strongest suppliers in the storage industry, faces not only continued pressure from EMC, which owns VMware and has been on a management software acquisition binge for years, but also renewed pressure from IBM and HP, which are increasingly tying their captive storage offerings into their own integrated virtualized infrastructure offerings. This tighter coupling of proprietary technology, while not explicitly disenfranchising external storage vendors, will still tighten the screws slightly and reduce the number of opportunities for NetApp to partner with them. Even Dell, long regarded as the laggard in high-end enterprise presence, has been ramping up its investment in management capabilities and its ability to deliver integrated infrastructure, including the purchase of storage technology and, with its run at 3Par and recent investments in companies such as Scalent (see my previous blog on Dell as an enterprise player and my colleague Andrew Reichman’s discussion of the 3Par acquisition), a very clear signal that it wants to go even further as a supplier of integrated infrastructure.

Read more

How Complexity Spilled The Oil

Jean-Pierre Garbani

The Gulf oil spill of April 2010 was an unprecedented disaster. The National Oil Spill Commission’s report summary shows that this could have been prevented with the use of better technology. For example, while the Commission agrees that the monitoring systems used on the platform provided the right data, it points out that the solution used relied on engineers to make sense of that data and correlate the right elements to detect anomalies. “More sophisticated, automated alarms and algorithms” could have been used to create meaningful alerts and maybe prevent the explosion. The Commission’s report shows that the reporting systems used have not kept pace with the increased complexity of drilling platforms. Another conclusion is even more disturbing, as it points out that these deficiencies are not uncommon and that other drilling platforms in the Gulf of Mexico face similar challenges.

If we substitute “drilling platform” with “data center,” this sounds awfully familiar. How many IT organizations rely on relatively simple data collection from point monitoring of networks, servers, or applications while trying to manage the performance and availability of increasingly complex applications? IT operations engineers sift through mountains of data from different sources trying to make sense of what is happening and usually fall short of producing meaningful alerts. The consequences may not be as dire as the Gulf oil spill, but they can still translate into lost productivity and revenue.
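
As a small illustration of what “more sophisticated, automated alarms and algorithms” can look like in a data center, here is a hypothetical sketch that flags deviations from a rolling baseline rather than waiting for a human to spot them; the metric values, window size, and three-sigma rule are all invented for the example.

```python
# Minimal anomaly-alert sketch: flag points that deviate sharply from a
# rolling baseline instead of relying on a fixed threshold or a human eyeball.
# The metric values and the 3-sigma rule below are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def rolling_alerts(samples, window=20, sigmas=3.0):
    history = deque(maxlen=window)
    for t, value in enumerate(samples):
        if len(history) == window:
            mu, sd = mean(history), stdev(history)
            if sd > 0 and abs(value - mu) > sigmas * sd:
                yield t, value, mu  # an alert worth correlating with other metrics
        history.append(value)

# Example: steady response times with one sudden spike.
response_ms = [100 + (i % 5) for i in range(60)]
response_ms[45] = 480
for t, value, baseline in rolling_alerts(response_ms):
    print(f"t={t}: {value} ms vs baseline ~{baseline:.0f} ms -> raise alert")
```

Real operations tooling goes much further, correlating signals across network, server, and application layers, but even a simple adaptive baseline beats eyeballing raw charts.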

The fact that many IT operations have not (yet) faced a meltdown is not a valid counterargument: There is, for example, a good reason to purchase hurricane insurance when one lives in Florida, even though destructive storms are not that common. Like the weather, today’s business services have so many variables at play that mere humans can’t be expected to make sense of them.

Read more

Intel Announces Sandy Bridge. A Big Deal? You Bet!

Richard Fichera

Intel today officially announced the first products based on the much-discussed Sandy Bridge CPU architecture, and first impressions are highly favorable. My take is that Sandy Bridge represents the first step in a very aggressive product road map for Intel in 2011.

Sandy Bridge is the next architectural spin after Intel’s Westmere shrink of the predecessor Nehalem architecture (Westmere was the “tick” and Sandy Bridge is the “tock” in Intel’s famous “tick-tock” cadence of alternating process shrinks and new architectures) and incorporates some major innovations compared to the previous architecture:

  • Minor but, in toto, significant changes to many aspects of the low-level microarchitecture – more registers, better prefetch, and changes to the way instructions and operands are decoded, cached, and written back to registers and cache.
  • Major changes in integration of functions on the CPU die – Almost all major subsystems, including CPU, memory controller, graphics controller and PCIe controller, are now integrated onto the same die, along with the ability to share data with much lower latency than in previous generations. In addition to more efficient data sharing, this level of integration allows for better power efficiency.
  • Improvements to media processing – A dedicated video transcoding engine and an extended vector instruction set for media and floating-point calculations improve Sandy Bridge’s capabilities in several major application domains (see the sketch after this list).
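
As a loose illustration of the kind of data-parallel floating-point work an extended vector instruction set targets, the sketch below contrasts an element-by-element loop with the same math expressed over whole arrays. numpy is used here purely as a stand-in for SIMD-style execution; this is not Sandy Bridge-specific code.

```python
# Illustration of data-parallel floating-point work of the sort a wider
# vector instruction set accelerates. numpy stands in for SIMD execution;
# this is not Sandy Bridge-specific code.
import numpy as np

a = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
b = np.linspace(1.0, 2.0, 1_000_000, dtype=np.float32)

# Scalar-style: one multiply-add per iteration.
def fma_loop(x, y):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] * y[i] + 1.0
    return out

# Vectorized: the same math expressed over whole arrays, which the runtime
# can map onto wide vector units that process many floats per instruction.
def fma_vectorized(x, y):
    return x * y + 1.0

# Both forms produce the same result; the vectorized form is what wider
# vector hardware (and media/floating-point workloads) can exploit.
assert np.allclose(fma_loop(a[:1000], b[:1000]), fma_vectorized(a[:1000], b[:1000]))
```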
Read more

Networks Are About The Users, Not The Apps!

Andre Kindness

Virtualization and cloud talk just woke the sleeping giant, networking. For too long, we were isolated in our L2-L4 world, soundly sleeping as VMs were created and a distant cousin, the vSwitch, was born. Sure, we can do a little of this and a little of that in this virtual world, but the reality is that everything is very manually driven and a one-off process. For example, vendors talk about moving policies from one port to another when a VM moves, but they don’t discuss policies moving around automatically on links from edge switches to the distribution switches. Even management tools are scrambling to solve issues within the data center. In this game of catch-up, I’m hearing people bandy the word “app” around. Everyone from server personnel to networking administrators is trying to relate to an app. Network management tools, traffic sensors, switches, and WAN optimization products are being developed to measure, monitor, or report on the performance of apps in some form or another.

Why is “app” the common language? Why are networks relating to “apps”? With everything coming down the pike, we are designing for yesterday instead of tomorrow. Infrastructure and operations professionals will have to deal with:

  • Web 2.0 tools. Traditional apps can alienate users when language and customs aren’t designed into the enterprise apps, yet no one app can deal with the sheer magnitude of languages. Web 2.0 technologies — such as social networking sites, blogs, wikis, video-sharing sites, hosted services, web applications, mashups, and folksonomies — connect people with each other globally to collaborate and share information, but in a way that is easily customized and localized. For example, mashups allow apps to be easily created in any language, with data sourced from a variety of locations.
Read more

What Is The Cost Of Being Blind, Insecure, And Unmanaged?

Andre Kindness

With the increased presence of business principles within the IT arena, I get a lot of inquiries from Infrastructure & Operations Professionals who are trying to figure out how to justify their investment in a particular product or solution in the security, monitoring, and management areas. Since most marketing personnel view this either as a waste of resources in a futile quest or as too intimidating to even begin to tackle, IT vendors have not provided their customers with more than marketing words: lower TCO, more efficient, higher value, more secure, or more reliable. It’s a bummer, since the request is a valid concern for any IT organization. Consider that other industries with complex products and services -- nuclear power plants, medical delivery systems, or air traffic control -- look at risk and reward all the time to justify their investments. They all use some form of probabilistic risk assessment (PRA) tools to figure out technological, financial, and programmatic risk by combining it with disaster costs: revenue losses, productivity losses, compliance and/or reporting penalties, penalties and loss of discounts, impact to customers and strategic partners, and impact to cash flow.

PRA teams use fault tree analysis (FTA) for top-down assessment and failure mode and effect analysis (FMEA) for bottom-up assessment.
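
To make the risk-and-reward framing concrete, here is a hypothetical sketch of the expected-loss arithmetic such a justification ultimately boils down to: annualized loss expectancy with and without a proposed control, compared against the control’s cost. Every scenario, probability, and dollar figure below is invented for illustration.

```python
# Hypothetical PRA-style justification sketch: compare annualized loss
# expectancy (probability x impact, summed over failure scenarios) with and
# without a proposed monitoring/security control. All figures are invented.
scenarios = [
    # (name, annual probability, impact in dollars, reduction factor with control)
    ("unplanned outage of revenue app", 0.30, 400_000, 0.50),
    ("security breach with reporting penalties", 0.05, 1_200_000, 0.60),
    ("degraded performance losing productivity", 0.60, 150_000, 0.40),
]
control_cost = 120_000  # assumed annual cost of the proposed tooling

def ale(items, with_control=False):
    total = 0.0
    for _, prob, impact, reduction in items:
        factor = (1.0 - reduction) if with_control else 1.0
        total += prob * impact * factor
    return total

before, after = ale(scenarios), ale(scenarios, with_control=True)
print(f"ALE without control: ${before:,.0f}")
print(f"ALE with control:    ${after:,.0f}")
print(f"Net annual benefit:  ${before - after - control_cost:,.0f}")
```

A full PRA derives those probabilities from fault trees and FMEA worksheets rather than assuming them, but the comparison decision-makers see is essentially this one.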

Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RH 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL 980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.

Read more

Evaluating Complexity

Jean-Pierre Garbani

We’re starting to get inquiries about complexity. The key questions are how to evaluate complexity in an IT organization and, consequently, how to evaluate its impact on the availability and performance of applications. Evaluating complexity wouldn’t be like evaluating the maturity of IT processes, which amounts to fixing what’s broken; it would be more like preventive maintenance: understanding what’s going to break soon and taking action to prevent the failure.

The volume of applications and services certainly has something to do with complexity. Watts Humphrey said that code size (in KLOC: thousands of lines of code) doubles every two years, certainly due to the increase in hardware capacity and speed, and this is easily validated by the evolution of operating systems over the past years. It stands to reason that, as a consequence, the total number of errors in the code also doubles every two years.
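
As a back-of-envelope illustration of that reasoning, the sketch below projects code volume and latent defects under the doubling assumption; the starting size and defect density are invented.

```python
# Back-of-envelope projection of the doubling argument above.
# Starting code size and defect density are invented for illustration.
def project(kloc=500.0, defects_per_kloc=0.5, years=10, doubling_period=2):
    growth_per_year = 2 ** (1.0 / doubling_period)  # doubles every two years
    for year in range(0, years + 1, doubling_period):
        size = kloc * (growth_per_year ** year)
        print(f"year {year:2d}: {size:9.0f} KLOC, "
              f"~{size * defects_per_kloc:8.0f} latent defects")

project()
```

If defect density stays roughly constant, the latent defect count tracks code volume, which is the point: the operational burden compounds on the same two-year schedule.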

But code is not the only cause of error: Change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes. Capacity issues would also be dependent on these parameters.

There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.

Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?

Read more