Many CIOs, technical architects, and infrastructure and operations (I&O) professionals in Chinese companies are struggling with the pressures of all kinds of business and IT initiatives as well as the daily maintenance of system applications. At the same time, they are trying to figure out the right approach for the company to adopt technology waves like cloud and enterprise mobility in order to survive in a highly competitive market landscape. Among all the puzzles of strategic growth, operating system (OS) migration might seem to have the lowest priority: business application enhancements deliver explicit business value, but it’s hard to justify changing operating systems when they work today. The OS is the most fundamental infrastructure software that all other systems depend on, so the complexity and uncertainty of migrations are daunting. As a result, IT organizations in China usually tend to live with the existing OS as long as possible.
Take Microsoft Windows, for example. Windows XP and Windows Server 2003 have been widely used on the client side and the server side, yet very few companies have put Windows migration on their IT evolution roadmaps. However, I believe the time has come for IT professionals in Chinese companies to seriously consider putting a Windows upgrade on their IT roadmaps for the next six months, for a couple of key reasons.
Windows XP and pirated OSes won’t be viable much longer to support your business.
Ending support. Extended support, which includes security patches, ends April 8, 2014. Beyond that point, we can expect more malware and security attacks targeting Windows XP.
Many Indian CIOs and their infrastructure and operations (I&O) teams are in the market for a new data center as their existing data centers are running low on space, power, and cooling capacity. Forrester finds that data growth, virtualization, and consolidation are the main culprits behind these capacity challenges in India. For instance:
Data growth increases data center storage investments. Forrester estimates that storage consumes somewhere between 5% and 15% of the total power consumed in the data center and that the volume of data is growing by 30% to 50% per year.
Virtualization drives higher-density infrastructure architecture. Organizations face pressure to support more extreme compute densities and experiment with new infrastructure architectures.
Data center consolidation puts more pressure on centralized facilities. Per Forrester’s Forrsights Budgets and Priorities Survey, Q4 2012, consolidating IT infrastructure was a critical or high priority for nearly 70% of Indian IT decision-makers. This means centralized sites will need more power, cooling, and space.
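The growth figures above compound quickly, which is why capacity planning cannot be deferred. As a rough illustration (the 100 TB starting estate is a hypothetical assumption; only the 30% to 50% annual growth range comes from the text above), a short sketch:

```python
# Compound the cited 30%-50% annual data growth over a planning horizon.
# The 100 TB starting estate is a hypothetical figure for illustration.

def project_growth(initial_tb, annual_rate, years):
    """Project a data volume forward at a fixed annual growth rate."""
    return initial_tb * (1 + annual_rate) ** years

start_tb = 100  # hypothetical current estate
for rate in (0.30, 0.50):
    projected = project_growth(start_tb, rate, years=3)
    print(f"At {rate:.0%}/year: {start_tb} TB grows to {projected:.0f} TB in 3 years")
```

Even at the low end of the cited range, the estate more than doubles in three years, which is what turns storage power draw into a facility-level constraint.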
If you want to be the best in data center operations, you are right to benchmark yourself against the cloud computing leaders – just don’t delude yourself into thinking you can match them.
In our latest research report, Rich Fichera and I updated a 2007 study that looked at what enterprise infrastructure leaders could learn from the best in the cloud and hosting market. We found that while they may have greater buying power, deeper IT R&D and huge security teams, many of their best practices apply to a standard enterprise data center – or at least part of it.
There are several key differences between you and the cloud leaders, many of which are detailed in the table below. Perhaps the starkest, however, is that for the cloud leaders, the data center is the product. And that means they get budgetary priority and R&D attention that I&O leaders in the enterprise can only dream about.
According to CRN’s article on the event, Gelsinger was quoted as saying, “We want to own corporate workloads. We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever.”
Forgive my frankness, Mr. Gelsinger, but you just don’t get it. Public clouds are not your enemy, and the disruption they are causing to your forward revenues is not their capture of enterprise workloads. The battle lines you should be focusing on are between advanced virtualization and true cloud services, and the future placement of Systems of Engagement versus Systems of Record.
When I returned to Forrester in mid-2010, one of the first blog posts I wrote was about Oracle’s new roadmap for SPARC and Solaris, catalyzed by numerous client inquiries and other interactions in which Oracle’s real level of commitment to future SPARC hardware was the topic of discussion. In most cases I could describe the customer mood as skeptical at best, and panicked and committed to migration off of SPARC and Solaris at worst. Nonetheless, after some time spent with Oracle management, I expressed my improved confidence in the new hardware team that Oracle had assembled and their new roadmap for SPARC processors after the successive debacles of the UltraSPARC-5 and Rock processors under Sun’s stewardship.
Two and a half years later, it is obvious that Oracle has delivered on its commitments regarding SPARC and is continuing its investments in SPARC CPU and system design as well as its Solaris OS technology. The latest evolution of SPARC technology, the SPARC T5 and the soon-to-be-announced M5, continue the evolution and design practices set forth by Oracle’s Rick Hetherington in 2010 — incremental evolution of a common set of SPARC cores, differentiation by variation of core count, threads and cache as opposed to fundamental architecture, and a reliable multi-year performance progression of cores and system scalability.
So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.
I think this is a great move, and it makes all kinds of sense to protect VMware’s relationship with its core buyer, maintain focus on the data center, and lay the foundation for the vendor’s software-defined data center strategy. This move helps to end the cloud-washing that’s confused customers for years: There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage and network and apps, and the easy apps, by now, have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: They don’t naturally benefit from consolidation savings, they are highly performance sensitive, and they are much more complex.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a really major restructuring of the OS, and a major step-function in capabilities aligned with several major strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
The long-rumored changing of the guard at VMware finally took place last week, and with it went a stubborn strategic stance that was a big client dissatisfier. Out went the ex-Microsoft visionary who dreamed of delivering a new "cloud OS" that would replace Windows Server as the corporate standard, and in came a pragmatic refocusing on infrastructure transformation that acknowledges the heterogeneous reality of today's data center.
Paul Maritz will move into a technology strategy role at EMC, where he can focus on how the greater EMC company can raise its relevance with developers. Clearly, EMC needs developer influence and application-level expertise, and from a stronger, full-portfolio perspective. Here, his experience can be applied more broadly -- and we expect Paul to shine in this role. However, I wouldn't look for him to re-emerge as CEO of a new spin-out of these assets. At heart, Paul is more a natural technologist, and it's not clear all these assets would move out as one anyway.
Bridgekeeper: "What ... is your name?"
Traveler: "John Swainson of Dell."
Bridgekeeper: "What ... is your quest?"
Traveler: "Hey! That's not a bad idea!"
We suspect Dell's process was more methodical than that!
This acquisition was not a surprise, of course. All along, it has been obvious that Dell needed stronger assets in software as it continues on its quest to avoid the Gorge of Eternal Peril that is spanned by the Bridge of Death. When the company announced that John Swainson was joining to lead the newly formed software group, astute industry watchers knew the next steps would include an ambitious acquisition. We predicted such an acquisition would be one of Swainson's first moves, and after only four months on the job, indeed it was.
Earlier this week at its Discover customer event, HP announced a significant set of improvements to its already successful c-Class BladeSystem product line, which, despite continuing competitive pressure from IBM and the entry of Cisco into the market three years ago, still commands approximately 50% of the blade market. The significant components of this announcement fall into four major functional buckets – improved hardware, simplified and expanded storage features, new interconnects and I/O options, and serviceability enhancements. Among the highlights are:
Direct connection of HP 3PAR storage – One of the major drawbacks for block-mode storage with blades has always been the cost of the SAN to connect it to the blade enclosure. With the ability to connect an HP 3PAR storage array directly to the c-Class enclosure without any SAN components, HP has reduced both the cost and the complexity of storage for a wide class of applications that have storage requirements within the scope of a single storage array.
New blades – With this announcement, HP fills in the gaps in its blade portfolio, announcing a new Intel Xeon EN-based BL-420 for entry requirements, an upgrade to the BL-465 to support the latest 16-core AMD Interlagos CPU, and the BL-660, a new single-width, Xeon E5-based four-socket blade. In addition, HP has expanded the capacity of the sidecar storage blade to 1.5 TB, enabling an 8-server, 12+ TB chassis configuration.
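The 8-server, 12+ TB figure follows from simple bay arithmetic. A minimal sketch, assuming a 16-bay c7000 enclosure split evenly between compute blades and the expanded 1.5 TB sidecar storage blades (the even split is my assumption; only the 1.5 TB blade capacity and the 8-server, 12 TB totals come from the announcement):

```python
# Back-of-the-envelope check on the "8-server, 12+ TB" chassis figure,
# assuming half the enclosure's bays carry sidecar storage blades.

BAYS = 16            # half-height bays in a c7000 enclosure
STORAGE_TB = 1.5     # capacity of the expanded sidecar storage blade

servers = BAYS // 2              # one server blade per storage blade
storage_blades = BAYS - servers
total_tb = storage_blades * STORAGE_TB

print(f"{servers} servers + {storage_blades} storage blades = {total_tb} TB")
```

Eight storage blades at 1.5 TB each yield the 12 TB floor HP cites; the "+" presumably reflects any local disk on the server blades themselves.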