Dell Introduces FX System – The Shape Of Infrastructure To Come?

Richard Fichera

Dell today announced its new FX system architecture, and I am decidedly impressed.

Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:

  • Multiple choices of server nodes, from multi-core Atom to the new Xeon E5 v3. With configurations ranging from 2 to 16 server nodes per enclosure, there is a configuration point for most mainstream applications.
  • A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, lets the assigned disks appear to each server as locally attached DASD, so no changes are needed in software that expects local storage (a conceptual sketch follows this list). A very slick evolution in storage provisioning.
  • A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
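
To make the storage-mapping idea concrete, here is a minimal, purely illustrative Python sketch of the kind of bay-to-node assignment the enclosure's onboard management performs. The module and node counts, the names, and the assign_drives helper are hypothetical illustrations of the concept, not Dell's actual management interface.

```python
# Illustrative sketch only: model a 2U enclosure with up to three 16-drive
# storage modules whose bays are carved up among server nodes, so that each
# node sees its assigned bays as if they were locally attached drives.

DRIVES_PER_MODULE = 16   # each optional storage module holds 16 drives
STORAGE_MODULES = 3      # up to three modules per enclosure

# Enumerate every physical drive bay as a (module, slot) pair.
ALL_BAYS = [(m, s) for m in range(STORAGE_MODULES)
            for s in range(DRIVES_PER_MODULE)]

def assign_drives(requests):
    """Greedily map free drive bays to server nodes.

    requests: dict of {node_name: number_of_drives_wanted}.
    Returns {node_name: list_of_bays}; raises if the enclosure
    cannot satisfy the total request.
    """
    free = list(ALL_BAYS)
    mapping = {}
    for node, count in requests.items():
        if count > len(free):
            raise ValueError(f"not enough free drive bays for {node}")
        mapping[node], free = free[:count], free[count:]
    return mapping

if __name__ == "__main__":
    # Hypothetical layout: two storage-heavy nodes and two lighter nodes.
    layout = assign_drives({"node1": 16, "node2": 16, "node3": 8, "node4": 8})
    for node, bays in layout.items():
        # To the OS on each node, these bays simply look like local disks.
        print(node, "->", [f"module{m}/slot{s}" for m, s in bays])
```

The point of the sketch is the design choice it mirrors: because the mapping happens below the OS, applications and file systems that expect direct-attached storage need no changes.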

All in all, this is an attractive and flexible packaging scheme for infrastructure that must be tailored to specific combinations of server, storage, and network configurations, and probably an ideal platform for the Nutanix software suite that Dell is also reselling. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.

Forrester clients: I've published a Quick Take report on this, "Quick Take: Dell's FX Architecture Holds Promise To Power Modern Services."

VMworld – Reflections on a Transformational Event

Richard Fichera

A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some color to it. The report is an excellent synthesis of the work of a talented team of collaborators, with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.

First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths, the cheaper spots on the edge of the floor where smaller startups congregate, which were also well populated with new and small storage vendors – there is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.

Read more

And They're Off . . . The Mobile Security Dog Race Has Begun!

Tyler Shields

There is a 14-dog race going on, with the goal of winning enterprise wallets for mobile security spend. Lined up in the starting blocks, the racers may all seem to have equal chances, but a few are better poised to cross the finish line first and bask in the glory of the winners' circle. Three of these technologies are the odds-on favorites to lead from start to finish, while the rest of the racers struggle to remain relevant.

Coming off the starting block with the "holeshot" are the mobile device management (MDM) vendors. With huge engines of revenue, large customer counts, and first-mover advantage, this dog is the odds-on favorite to take the championship trophy. MDM vendors are already expanding their technologies and products into security platforms to diversify their rapidly commoditizing product offerings. The move is paying off for the biggest and toughest MDM participants in the race, giving them an early, and potentially insurmountable, lead.
Read more

Take a forward-thinking yet pragmatic approach to Windows migration

Charlie Dai

Many CIOs, technical architects, and infrastructure and operations (I&O) professionals in Chinese companies are struggling with the pressures of all kinds of business and IT initiatives on top of the daily maintenance of their applications. At the same time, they are trying to figure out the right approach for adopting technology waves like cloud and enterprise mobility in order to survive in a highly competitive market. Among all the pieces of the strategic growth puzzle, operating system (OS) migration might seem to have the lowest priority: business application enhancements deliver explicit business value, while it is hard to justify changing operating systems that work today. Moreover, the OS is the most fundamental infrastructure software, the layer that all other systems depend on, so the complexity and uncertainty of a migration are daunting. As a result, IT organizations in China tend to live with their existing OS for as long as possible.

Take Microsoft Windows, for example. Windows XP and Windows Server 2003 are still widely used on the client and server sides, respectively, yet very few companies have put Windows migration on their IT evolution roadmaps. However, I believe the time has come for IT professionals in Chinese companies to seriously consider putting a Windows upgrade on their roadmaps for the next six months, for a couple of key reasons.

Windows XP and pirated OSes won’t remain viable to support your business much longer.

  • Ending support. Extended support for Windows XP, which includes security patches, ends on April 8, 2014. Beyond that point, we can expect more malware and security attacks targeting Windows XP.
Read more

Oracle Delivers On SPARC Promises

Richard Fichera


When I returned to Forrester in mid-2010, one of the first blog posts I wrote was about Oracle’s new roadmap for SPARC and Solaris, catalyzed by numerous client inquiries and other interactions in which Oracle’s real level of commitment to future SPARC hardware was the topic of discussion. In most cases I would describe the customer mood as skeptical at best, and at worst panicked and committed to migrating off SPARC and Solaris. Nonetheless, after some time spent with Oracle management, I expressed my improved confidence in the new hardware team that Oracle had assembled and its new roadmap for SPARC processors, following the successive debacles of the UltraSPARC-5 and Rock processors under Sun’s stewardship.

Two and a half years later, it is obvious that Oracle has delivered on its commitments regarding SPARC and is continuing its investments in SPARC CPU and system design as well as in its Solaris OS technology. The latest SPARC processors, the T5 and the soon-to-be-announced M5, continue the design practices set forth by Oracle’s Rick Hetherington in 2010: incremental evolution of a common set of SPARC cores; differentiation by varying core count, threads, and cache rather than fundamental architecture; and a reliable multi-year progression of core performance and system scalability.

Geek Stuff – New SPARC Hardware

Read more

EMC And VMware Carve Out Pivotal: Good News For I&O Pros And The Virtualization Market

Dave Bartoletti

So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.

I think this is a great move; it makes all kinds of sense to protect VMware’s relationship with its core buyer, maintain focus on the data center, and lay the foundation for the vendor’s software-defined data center strategy. This move helps end the cloud-washing that has confused customers for years. There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage, network, and apps, and by now the easy apps have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: they don’t naturally benefit from consolidation savings, they are highly performance-sensitive, and they are much more complex.

Read more

Microsoft Announces Windows Server 2012

Richard Fichera

The Event

On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.

So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”

Make no mistake, this is a major restructuring of the OS and a step-function increase in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform on which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.

What It Does

The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features worth noting, so a real exploration of its features is well beyond what I can do here. Nonetheless, we can look at several buckets of technology to get a sense of its general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of the cloud-related features are also very useful in an enterprise IT environment.

  • New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
Read more

Dell Is On A Quest For Software

Glenn O'Donnell


One of the many hilarious scenes in Monty Python and the Holy Grail is the "Bridge of Death" sequence. This week's news that Dell plans to acquire Quest Software makes one think of a slight twist to this scene:

Bridgekeeper:   "What ... is your name?"
Traveler:           "John Swainson of Dell."
Bridgekeeper:   "What ... is your quest?"
Traveler:           "Hey! That's not a bad idea!"

We suspect Dell's process was more methodical than that!

This acquisition was not a surprise, of course. All along, it has been obvious that Dell needed stronger assets in software as it continues on its quest to avoid the Gorge of Eternal Peril that is spanned by the Bridge of Death. When the company announced that John Swainson was joining to lead the newly formed software group, astute industry watchers knew the next steps would include an ambitious acquisition. We predicted such an acquisition would be one of Swainson's first moves, and after only four months on the job, indeed it was.

Read more

HP Rolls Out BladeSystem Upgrades – Significant Improvements Aim To Fend Off IBM And Cisco

Richard Fichera


Earlier this week at its Discover customer event, HP announced a significant set of improvements to its already successful c-Class BladeSystem product line, which, despite continuing competitive pressure from IBM and the entry of Cisco into the market three years ago, still commands approximately 50% of the blade market. The components of this announcement fall into four major functional buckets: improved hardware, simplified and expanded storage features, new interconnects and I/O options, and serviceability enhancements. Among the highlights are:

  • Direct connection of HP 3PAR storage – One of the major drawbacks for block-mode storage with blades has always been the cost of the SAN to connect it to the blade enclosure. With the ability to connect an HP 3PAR storage array directly to the c-Class enclosure without any SAN components, HP has reduced both the cost and the complexity of storage for a wide class of applications that have storage requirements within the scope of a single storage array.
  • New blades – With this announcement, HP fills the gaps in its blade portfolio, announcing a new Intel Xeon EN-based BL-420 for entry requirements, an upgrade of the BL-465 to support AMD’s latest 16-core Interlagos CPU, and the BL-660, a new single-width, four-socket Xeon E5-based blade. In addition, HP has expanded the capacity of the sidecar storage blade to 1.5 TB, enabling a single-chassis configuration with eight servers and more than 12 TB of storage.
Read more

2 Big Shifts Taking Us To More Resource-Efficient Computing

Chris Mines

In the last couple of weeks, I finally put a couple of pieces together . . . the tech industry is pushing hard, down two parallel tracks, toward much more resource-efficient computing architectures.

Track 1: Integrated systems. Computer suppliers are putting hardware components (including compute, network, and storage) together with middleware and application software in pre-integrated packages. The manufacturers will do assembly and testing of these systems in their factories, rather than on the customer's site. And they will tailor the system — to a greater or lesser degree, depending on the system — to the characteristics of the workload(s) it will be running.

The idea is to use general-purpose components (microprocessors, memory, network buses, and the like) to create special-purpose systems on a mass-customization basis. This trend has been evident for a while in the Oracle Exadata and Cisco UCS systems; IBM's Pure systems introductions push it even further into pre-configured applications and systems management.

Track 2: Modular data centers. Now, zoom out from individual computing systems to aggregations of those systems into data centers. And again, assemble as much of the componentry as possible in the factory rather than on-site. Vendors like Schneider and Emerson, along with systems shops like IBM and HP, are creating design approaches and infrastructure systems that allow data centers to be built in modular fashion, with much of the equipment, like air handling and power, trucked to the customer's site, set up in the parking lot, and quickly turned on.

Read more