Networks Are About The Users, Not The Apps!

Virtualization and cloud talk have finally woken the sleeping giant, networking. For too long, we were isolated in our L2-L4 world, soundly sleeping while VMs were created and a distant cousin, the vSwitch, was born. Sure, we can do a little of this and a little of that in this virtual world, but the reality is that everything is very manually driven and a one-off process. For example, vendors talk about moving policies from one port to another when a VM moves, but they don’t discuss policies moving automatically on the links from edge switches to the distribution switches. Even management tools are scrambling to solve issues within the data center. In this game of catch-up, I’m hearing people banter the word “app” around. Everyone from server personnel to network administrators is trying to relate to an app. Network management tools, traffic sensors, switches, and WAN optimization products are being developed to measure, monitor, or report on the performance of apps in some form or another.
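To make the gap concrete, here is a minimal sketch of what “policy follows the VM” automation would actually have to do. The controller API, class names, and topology below are entirely hypothetical, invented only for illustration; the point is that the policy has to move not just to the new edge port but also onto the uplinks toward the distribution layer.

```python
# Illustrative only: FabricController and its methods are a hypothetical
# abstraction, not any vendor's actual API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PortPolicy:
    vlan: int
    qos_class: str
    acl: str


class FabricController:
    """Hypothetical stand-in for managing edge and distribution switches."""

    def apply(self, switch: str, port: str, policy: PortPolicy) -> None:
        print(f"apply {policy} to {switch}:{port}")

    def remove(self, switch: str, port: str, policy: PortPolicy) -> None:
        print(f"remove {policy} from {switch}:{port}")

    def uplinks(self, edge_switch: str) -> List[Tuple[str, str, str, str]]:
        # Links from this edge switch toward the distribution layer
        # (hard-coded here purely for the example).
        return [(edge_switch, "uplink1", "dist1", "down1")]


def on_vm_move(ctl: FabricController, policy: PortPolicy,
               old_edge: str, old_port: str,
               new_edge: str, new_port: str) -> None:
    """React to a VM migration event by moving the policy end to end."""
    # 1. Clean up the old edge port.
    ctl.remove(old_edge, old_port, policy)
    # 2. Program the new edge port (the part vendors already talk about).
    ctl.apply(new_edge, new_port, policy)
    # 3. The part that is usually still manual: make sure the uplinks from
    #    the new edge switch to the distribution layer carry the VLAN/QoS
    #    the workload needs.
    for edge, edge_port, dist, dist_port in ctl.uplinks(new_edge):
        ctl.apply(edge, edge_port, policy)
        ctl.apply(dist, dist_port, policy)


on_vm_move(FabricController(), PortPolicy(vlan=110, qos_class="gold", acl="web"),
           old_edge="edge1", old_port="ge-0/0/12",
           new_edge="edge7", new_port="ge-0/0/3")
```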

Why is “app” the common language? Why are networks relating to “apps”? With everything coming down the pike, we are designing for yesterday instead of tomorrow. Infrastructure and operations professionals will have to deal with:

  • Web 2.0 tools. Traditional apps can alienate users when language and customs aren’t designed into the enterprise apps, yet no one app can deal with the sheer magnitude of languages. Web 2.0 technologies — such as social networking sites, blogs, wikis, video-sharing sites, hosted services, web applications, mashups, and folksonomies — connect people with each other globally to collaborate and share information, but in a way that is easily customized and localized. For example, mashups allow apps to be easily created in any language, with data sourced from a variety of locations.
Read more

What Is The Cost Of Being Blind, Insecure, And Unmanaged?

With the increased presence of business principles within the IT arena, I get a lot of inquiries from Infrastructure & Operations Professionals who are trying to figure out how to justify their investment in a particular product or solution in the security, monitoring, and management areas. Since most marketing personnel view this either as a waste of resources in a futile quest of achievement or as too intimidating to even begin to tackle, IT vendors have not provided their customers with more than marketing words: lower TCO, more efficient, higher value, more secure, or more reliable. It’s a bummer, since the request is a valid concern for any IT organization. Consider that other industries -- nuclear power plants, medical delivery systems, or air traffic control -- with complex products and services look at risk and reward all the time to justify their investments. They all use some form of probabilistic risk assessment (PRA) tools to figure out technological, financial, and programmatic risk by combining it with disaster costs: revenue losses, productivity losses, compliance and/or reporting penalties, loss of discounts, impact to customers and strategic partners, and impact to cash flow.

PRA teams use fault tree analysis (FTA) for top-down assessment and failure mode and effects analysis (FMEA) for bottom-up assessment.
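As a hedged illustration of the mechanics (a toy Python sketch with invented numbers, not any specific PRA tool), a fault tree combines component failure probabilities through AND/OR gates, and the resulting top-event probability can then be multiplied by the disaster costs listed above to produce an expected annual loss:

```python
# Toy probabilistic risk assessment sketch; all probabilities and costs
# below are illustrative assumptions, not real data.

def or_gate(*probs):
    """Top event occurs if any input event occurs (independent events)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none


def and_gate(*probs):
    """Top event occurs only if all input events occur (independent events)."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all


# Hypothetical annual failure probabilities for a service outage.
p_network_outage = or_gate(0.02, 0.01)    # either core switch fails
p_storage_outage = and_gate(0.05, 0.05)   # both mirrored arrays fail
p_service_outage = or_gate(p_network_outage, p_storage_outage)

# Hypothetical disaster costs per outage: revenue loss, productivity loss,
# and penalties, respectively.
cost_per_outage = 250_000 + 80_000 + 40_000

expected_annual_loss = p_service_outage * cost_per_outage
print(f"P(outage) = {p_service_outage:.4f}")
print(f"Expected annual loss = ${expected_annual_loss:,.0f}")
```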

Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm rational type as well as those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL 980 indicate that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.

Read more

Evaluating Complexity

We’re starting to get inquiries about complexity. The key questions are how to evaluate complexity in an IT organization and, consequently, how to evaluate its impact on the availability and performance of applications. Evaluating complexity isn’t like evaluating the maturity of IT processes, which amounts to fixing what’s broken; it’s more like preventive maintenance: understanding what’s going to break soon and taking action to prevent the failure.

The volume of applications and services certainly has something to do with complexity. Watts Humphrey said that code size (in KLOC: thousands of lines of code) doubles every two years, certainly due to the increase in hardware capacity and speed, and this is easily validated by the evolution of operating systems over the past years. It stands to reason that, as a consequence, the total number of errors in the code also doubles every two years.

But code is not the only cause of error: Change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes. Capacity issues would also be dependent on these parameters.
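As one hedged illustration of what such an evaluation could look like, the intuition above can be expressed as a score that grows with code volume, change volume, and component diversity. This is a toy model with invented weights and inputs, not a validated index:

```python
# Toy "complexity index" sketch; weights, inputs, and the log scaling are
# illustrative assumptions, not a validated model.
import math


def complexity_index(kloc: float, changes_per_month: float,
                     component_types: int,
                     w_code: float = 1.0,
                     w_change: float = 1.0,
                     w_diversity: float = 1.0) -> float:
    """Combine code volume, change volume, and infrastructure diversity.

    Logarithms keep any single large dimension from dominating the score.
    """
    return (w_code * math.log2(1 + kloc)
            + w_change * math.log2(1 + changes_per_month)
            + w_diversity * math.log2(1 + component_types))


# If code size doubles every two years, the code term grows by roughly one
# point per doubling -- a reminder that raw KLOC growth alone understates
# how quickly the error surface expands.
print(complexity_index(kloc=500, changes_per_month=120, component_types=30))
print(complexity_index(kloc=1000, changes_per_month=120, component_types=30))
```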

There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.

Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?

Read more

Benchmark Your Job Responsibilities Against Peers

Consumer product strategists hold a wide variety of job titles: product manager, product development manager, services manager, or a variation of general manager, vice president, or even, sometimes, CEO or other C-level title. Despite these varying titles, many of you share a great number of job responsibilities with one another.

We recently fielded our Q4 Global Consumer Product Strategy Research Panel Online Survey to 256 consumer product strategy professionals from a wide variety of industries. Why do this? One reason was to better understand the job responsibilities that you, in your role, take on every day. But the other reason was to help you succeed: By benchmarking yourself against peers, you can identify new job responsibilities for growth, improve your effectiveness, and ultimately advance your career.

What did we find out? The bottom line is that consumer product strategy jobs are pretty tough. We found that a wide range of skills is required to do the job well, since consumer product strategists are expected to:

  • Drive innovation. Consumer product strategists are front-and-center in driving innovation, which ideally suffuses the entire product life cycle. Being innovative is a tall task, but all of you are expected to be leaders here.
     
  • Think strategically... You've got to have a strategic view of your markets, identifying new concepts and business models and taking a long-term view of tomorrow's products.
     
  • ...but execute as a business person. While thinking strategically, you generally have to execute tactically as well. You're business unit owners. At the senior-most levels, you hold the P&L for the product or portfolio of products.
     
Read more

Apply A “Startup” Mentality To Your IT Infrastructure And Operations

Cash-starved. Fast-paced. Understaffed. Late nights. T-shirts. Jeans.

These descriptors are just as relevant to emerging tech startups as they are to the typical enterprise IT infrastructure and operations (I&O) department. And to improve customer focus and develop new skills, I&O professionals should apply a “startup” mentality.

A few weeks ago, I had the opportunity to spend time with Locately, a four-person Boston-based startup putting a unique spin on customer insights and analytics: location. By having consumers opt in to Locately’s mobile application, media companies and brands can understand how their customers spend their time and where they go. When this data is layered with other contextual information – such as purchases, time, and property identifiers (e.g., store names, train stops) – marketers and strategists can drive revenues and awareness, for example by optimizing their marketing and advertising tactics or retail store placement.

The purpose of my visit to Locately was not to write this blog post, at least not initially. It was to give the team of five Research Associates that I manage exposure to a different type of technology organization than they usually have access to – the emerging tech startup. Throughout our discussion with Locately, it struck me that I&O organizations share a number of similarities with startups. In particular, here are two entrepreneurial characteristics that I&O professionals should embody in their own organizations:

Read more

ScaleMP – Interesting Twist On Systems Scalability And Virtualization

I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, they offer what is, to the best of my knowledge, the only successful product that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.

Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.

Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.

Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, with a couple of dozen publicly referenced. All in all, that is probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4 to 8 systems, well short of their theoretical limits.
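Why memory reference patterns matter so much here can be shown with a back-of-the-envelope latency model. The numbers below are my own illustrative assumptions, not ScaleMP figures: the effective memory latency of an aggregated system degrades quickly as the fraction of remote references grows.

```python
# Toy model of effective memory latency in a software-aggregated SMP.
# Both latency figures are illustrative assumptions, not measured data.

LOCAL_NS = 100.0      # assumed local DRAM access latency
REMOTE_NS = 2_000.0   # assumed access latency over the cluster interconnect


def effective_latency_ns(remote_fraction: float) -> float:
    """Average memory latency given the fraction of remote references."""
    return (1.0 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS


for remote in (0.01, 0.05, 0.20):
    slowdown = effective_latency_ns(remote) / LOCAL_NS
    print(f"{remote:.0%} remote references -> {slowdown:.1f}x slower memory")
```

Under these assumptions, even 5% remote references roughly doubles average memory latency, which is why knowing your application’s reference patterns is listed below as a precondition.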

My quick take is that the software works, and bears investigation if you have an application that:

  1. Is either certified to run with ScaleMP (not many are) or one whose code you control.
  2. You understand the memory reference patterns of the application, and
Read more

Consider The Cloud As A Solution, Not A Problem

It’s rumored that the Ford Model T’s track dimension (the distance between the wheels of the same axle) could be traced from the Conestoga wagon to the Roman chariot by the ruts they created. Roman roads forced European coachbuilders to adapt their wagons to the Roman chariot track, a measurement they carried over when building wagons in America in the 19th and early 20th centuries. It’s said that Ford had no choice but to adapt his cars to the rural environment created by these wagons. This cycle was finally broken by paving the roads and freeing the car from the chariot legacy.

IT has also carried over a long legacy of habits and processes that contrast with the advanced technology that it uses. While many IT organizations are happy to manage 20 servers per administrator, some Internet service providers are managing 1 or 2 million servers and achieving ratios of 1 administrator per 2000 servers. The problem is not how to use the cloud to gain 80% savings in data center costs, the problem is how to multiply IT organizations’ productivity by a factor of 100. In other words, don’t try the Model T approach of adapting the car to the old roads; think about building new roads so you can take full advantage of the new technology.

Gains in productivity come from technology improvements and economy of scale. Economy of scale is what the cloud is all about: cookie-cutter servers using virtualization as a computing platform, for example. The technology advancement that paves the road to economy of scale is automation. Automation is what will abstract diversity, mask the management differences between proprietary and commodity platforms, and eventually make the economy of scale possible.
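A hedged sketch of what that automation looks like in spirit: a toy desired-state loop in Python, not any particular tool, in which one cookie-cutter definition is stamped out across many servers regardless of what sits underneath. The inventory and check/remediate steps are placeholders.

```python
# Toy desired-state automation loop; the server inventory and the
# check/remediate steps are placeholders, not a real configuration tool.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DesiredState:
    packages: List[str] = field(default_factory=list)
    services: List[str] = field(default_factory=list)
    config: Dict[str, str] = field(default_factory=dict)


def current_state_of(server: str) -> DesiredState:
    # Placeholder: in practice this would query the server or a CMDB.
    return DesiredState()


def remediate(server: str, desired: DesiredState) -> None:
    """Compare actual state to desired state and print the needed actions."""
    actual = current_state_of(server)
    for pkg in set(desired.packages) - set(actual.packages):
        print(f"{server}: install {pkg}")
    for svc in set(desired.services) - set(actual.services):
        print(f"{server}: enable {svc}")
    for key, value in desired.config.items():
        if actual.config.get(key) != value:
            print(f"{server}: set {key}={value}")


# One definition applied to thousands of servers is what turns
# 20 servers per administrator into 2,000.
desired = DesiredState(packages=["ntp"], services=["sshd"],
                       config={"log_server": "10.0.0.1"})
for server in (f"web{i:04d}" for i in range(3)):
    remediate(server, desired)
```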

Read more

Finally Settling Into The World Of Storage

I recently joined the Forrester Infrastructure and Operations team, and I'm excited to be working with the team to further explore the changing world of storage. I know... many said "Storage? How boring." But in fact, some very exciting changes in storage have emerged as the result of many other transformations happening in the IT environment that directly or indirectly impact storage. Some of the larger changes include:

Converged infrastructure: Emerging solutions that tie networking, storage, and compute together have changed the way storage interacts and integrates with the other components of this stack. As Andre Kindness (@andrekindness) addresses in his doc here, the convergence occurring in the network is changing how storage decisions must be made and how storage is deployed going forward.

Cloud: Although much hyped, cloud computing is real and happening. There's no need to delve deeper for now; my colleague James Staten (@staten7) covers this topic extensively, and you can find his blog here. Many components of this model have evolved, yet cloud storage is in its infancy. Use cases are still limited, as Andrew Reichman (@reichmanIT) points out in his August doc. However, I do see the market evolving quickly as enterprises begin to get more comfortable and realistic about their expectations.

Read more

Dell Wraps Up Compellent, Refocuses Their Storage Ambitions On The Midrange

This week saw one of the last remaining independent storage vendors (Compellent) get swallowed by one of the IT infrastructure mega-vendors (Dell) in an ongoing drive for comprehensive solution sets. We’ve seen a great deal of industry consolidation in storage, as vendors want to offer broad solution sets to buyers who want a single throat to choke, more financial stability than the little guys can offer, and better integration of solutions.

In the best-of-breed world, standing up an application environment generally meant figuring out how to make products from 5 to 10 different vendors work together, doing integration testing on the customer's floor as if such a solution had never been attempted before. The integrated mega-vendor approach aims to smooth out the connections between server, storage, OS, virtualization, SAN, LAN, NIC, HBA, facilities, and the process and professional services aspects that go into a successful IT environment. In the end, all this integration by pure infrastructure players may not be able to compete with the application vendors, who are increasingly moving toward proprietary hardware stacks built around their apps (Oracle/Sun) or advanced control of infrastructure solutions from partners that are designed to closely integrate with the apps (VMware, Microsoft). The app vendors have a theoretical advantage: they own the stickiest piece of the IT puzzle and therefore have closer access to the context of application data that can be used to make decisions about tiering, archiving, and data classification, decisions that infrastructure vendors of all stripes have struggled with. In practice, though, the infrastructure vendors have a huge lead in terms of user relationships for hardware purchasing and more mature capabilities for performance, data protection, and integrated management tools.

Read more