Containers. One of those nasty terms, like metadata (ok - maybe you had to move in the odd circles I did for that one to resonate), cloud, or big data. To some, the solution to every problem. To others, yet another unforgivable explosion of over-exuberant hype that should be ignored at all costs. And, like so many things, the truth lies somewhere in the middle.
Containers are an important component in broader efforts to transform the way in which an enterprise builds, tests, deploys, and scales its applications, particularly, today, its customer-facing systems of engagement. But they're not the answer to every problem, and they don't replace all your virtual machines, mainframes, and other infrastructure.
Most enterprise CIOs, today, have probably heard of containers... or Docker. And, for most of you, there will be a group or individual inside your organisation loudly singing containers' praises. There will be an equally vocal group or individual, pointing to every factoid supporting their view that the container emperor has nothing on.
My latest Brief takes a look at some of the ways containers are being used, and argues that CIOs need to pay attention - now. That's not to say you should wholeheartedly embrace containers in everything you do. But you do need to ensure you're aware of their strengths, and track the rapid evolution in the underlying technologies. Some pieces are even beginning to be standardised between competing companies.
And, just to see if the metadata crowd are still reading... Z39.50!
As we embark on the era of “cloud first” being business as usual for operations, one of the acronyms flying around the industry is SDDC, or the Software Defined Data Center. The term, very familiar to me since starting with Forrester less than six months ago, has become an increasingly common topic of conversation with Forrester clients and vendors alike. It is germane to my first Forrester report, “Infrastructure as Code, The Missing Element In The I&O Agenda,” where I discuss the changing role of I&O pros from building and managing physical hardware to abstracting configurations as code. The natural extension of this is the SDDC.
We believe that the SDDC is an evolving architectural and operational philosophy rather than simply a product that you purchase. It is rooted in a series of fundamental architectural constructs: modular, standards-based infrastructure; virtualization at all layers; and complete orchestration and automation.
The Forrester definition of the SDDC is:
An SDDC is an integrated abstraction model that defines a complete data center by means of a layer of software that presents the resources of the data center as pools of virtual and physical resources, and allows them to be composed into arbitrary user-defined services.
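To make “abstracting configurations as code” a bit more concrete, here is a minimal sketch in Python. All the names are invented for illustration (this is not any vendor’s actual API); it just shows the core SDDC idea of presenting resources as pools and letting software compose user-defined services from them:

```python
# Illustrative toy model of SDDC-style composition -- invented API, not a real product.
from dataclasses import dataclass


@dataclass
class Pool:
    """A pool of abstracted data center resources."""
    kind: str        # e.g., "compute", "storage", "network"
    capacity: int    # units available for allocation
    allocated: int = 0

    def allocate(self, units: int) -> None:
        if self.allocated + units > self.capacity:
            raise RuntimeError(f"{self.kind} pool exhausted")
        self.allocated += units


def compose_service(name: str, needs: dict, pools: dict) -> dict:
    """Compose a user-defined service by drawing on the resource pools.

    The service is described declaratively -- what it needs -- and the
    software layer decides how to satisfy it, which is the essence of
    the SDDC abstraction.
    """
    for kind, units in needs.items():
        pools[kind].allocate(units)
    return {"service": name, "resources": dict(needs)}


pools = {
    "compute": Pool("compute", capacity=64),    # think vCPUs
    "storage": Pool("storage", capacity=500),   # think GB
    "network": Pool("network", capacity=10),    # think virtual NICs
}

# The declarative request is the "code" in infrastructure as code.
print(compose_service("web-tier", {"compute": 8, "storage": 100, "network": 2}, pools))
```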
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to new Xeon E5 V3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, makes each server's assigned drives appear as locally attached DASD, so no changes are needed in any software that thinks it is accessing local storage. A very slick evolution in storage provisioning (see the sketch after this list).
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
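To illustrate the disk-mapping point above, here is a toy sketch in Python. The names are invented (this is not Dell’s actual management interface); it simply models a management layer assigning drives from the shared disk modules to individual server nodes, each of which then sees its drives as local:

```python
# Illustrative toy model of FX-style disk mapping -- invented names, not Dell's API.

DRIVES_PER_MODULE = 16  # each optional disk module holds 16 drives


class Enclosure:
    def __init__(self, disk_modules: int):
        # One flat pool of drives across all installed disk modules.
        self.free_drives = [
            f"module{m}/drive{d}"
            for m in range(disk_modules)
            for d in range(DRIVES_PER_MODULE)
        ]
        self.mapping: dict[str, list[str]] = {}

    def map_drives(self, server_node: str, count: int) -> list[str]:
        """Assign drives to a node; to that node's OS they appear locally attached."""
        if count > len(self.free_drives):
            raise RuntimeError("not enough free drives in the enclosure")
        assigned = [self.free_drives.pop() for _ in range(count)]
        self.mapping.setdefault(server_node, []).extend(assigned)
        return assigned


enc = Enclosure(disk_modules=3)    # up to three optional disk modules
print(enc.map_drives("node1", 4))  # node1's OS sees 4 "local" disks
print(enc.map_drives("node2", 8))  # node2's OS sees 8 "local" disks
```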
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations. It is probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.
A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color. The report is an excellent synthesis from a talented team of collaborators, with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of the storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths (the cheaper booths on the edge of the floor, where smaller startups congregate), which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
There is a 14-dog race going on, with a goal to win the wallets of the enterprise for mobile security spend. When lined up in the starting blocks, the racers may all seem to have equal chances, but a few are better poised to cross the finish line first and bask in the glory of the winners' circle. Three of these technologies are the odds-on favorites to lead from start to finish, with the rest of the racers struggling to remain relevant.
Coming off the starting block with the "holeshot" are the mobile device management vendors. With huge engines of revenue, large customer counts, and first-mover advantage, this dog is the odds-on favorite to take the championship trophy. Mobile device management vendors are already expanding their technologies and products into security platforms to diversify their rapidly commoditized product offerings. The move is paying off for the biggest and toughest MDM participants in the race, giving them the early, and potentially insurmountable, lead.
Many CIOs, technical architects, and infrastructure and operations (I&O) professionals in Chinese companies are struggling with the pressures of all kinds of business and IT initiatives as well as the daily maintenance of system applications. At the same time, they are trying to figure out the right approach for their companies to adopt technology waves like cloud and enterprise mobility in order to survive in a highly competitive market landscape. Among all the pieces of this strategic puzzle, operating system (OS) migration might seem to have the lowest priority: business application enhancements deliver explicit business value, but it’s hard to justify changing operating systems when they work today. The OS is the most fundamental infrastructure software that all other systems depend on, so the complexity and uncertainty of migrations are daunting. As a result, IT organizations in China usually tend to live with the existing OS for as long as possible.
Take Microsoft Windows, for example. Windows XP and Windows Server 2003 have been widely used on the client and server sides, yet very few companies have put Windows migration on their IT evolution roadmaps. However, I believe the time has come for IT professionals in Chinese companies to seriously consider putting a Windows upgrade on their IT roadmaps for the next six months, for a couple of key reasons:
Windows XP and pirated OSes won’t be viable to support your business much longer.
Ending support. Extended support, which includes security patches, ends on April 8, 2014. Beyond that point, we can expect more malware and security attacks targeting Windows XP.
When I returned to Forrester in mid-2010, one of the first blog posts I wrote was about Oracle’s new roadmap for SPARC and Solaris, catalyzed by numerous client inquiries and other interactions in which Oracle’s real level of commitment to future SPARC hardware was the topic of discussion. In most cases I could describe the customer mood as skeptical at best, and panicked and committed to migration off of SPARC and Solaris at worst. Nonetheless, after some time spent with Oracle management, I expressed my improved confidence in the new hardware team that Oracle had assembled and their new roadmap for SPARC processors after the successive debacles of the UltraSPARC-5 and Rock processors under Sun’s stewardship.
Two and a half years later, it is obvious that Oracle has delivered on its commitments regarding SPARC and is continuing its investments in SPARC CPU and system design as well as its Solaris OS technology. The latest evolution of SPARC technology, the SPARC T5 and the soon-to-be-announced M5, continue the evolution and design practices set forth by Oracle’s Rick Hetherington in 2010 — incremental evolution of a common set of SPARC cores, differentiation by variation of core count, threads and cache as opposed to fundamental architecture, and a reliable multi-year performance progression of cores and system scalability.
So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.
I think this is a great move, and it makes all kinds of sense to protect VMware’s relationship with its core buyer, maintain focus on the data center, and lay the foundation for the vendor’s software-defined data center strategy. This move helps to end the cloud-washing that’s confused customers for years: There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage, network, and apps, and the easy apps, by now, have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: They don’t naturally benefit from consolidation savings, they are highly performance sensitive, and they are much more complex.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a really major restructuring of the OS, and a major step-function in capabilities aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
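Because NTFS and ReFS will coexist, software that cares about volume capabilities can simply ask Windows which file system backs a volume. Here is a quick, Windows-only sketch in Python that calls the standard Win32 GetVolumeInformationW API (the drive letter is just an example):

```python
# Windows-only sketch: ask which file system (NTFS, ReFS, ...) backs a volume.
import ctypes


def volume_filesystem(root: str = "C:\\") -> str:
    """Return the file system name for a volume root, e.g. 'NTFS' or 'ReFS'."""
    fs_name = ctypes.create_unicode_buffer(32)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(root),
        None, 0,           # volume label buffer: not needed here
        None, None, None,  # serial number, max component length, FS flags
        fs_name, len(fs_name),
    )
    if not ok:
        raise ctypes.WinError()
    return fs_name.value


print(volume_filesystem("C:\\"))  # typically 'NTFS'; 'ReFS' on a ReFS volume
```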
Bridgekeeper: "What ... is your name?"
Traveler: "John Swainson of Dell."
Bridgekeeper: "What ... is your quest?"
Traveler: "Hey! That's not a bad idea!"
We suspect Dell's process was more methodical than that!
This acquisition was not a surprise, of course. All along, it has been obvious that Dell needed stronger assets in software as it continues on its quest to avoid the Gorge of Eternal Peril that is spanned by the Bridge of Death. When the company announced that John Swainson was joining to lead the newly formed software group, astute industry watchers knew the next steps would include an ambitious acquisition. We predicted such an acquisition would be one of Swainson's first moves, and after only four months on the job, indeed it was.