Intel Lays Out Future Data Center Strategy - Serious Focus on Emerging Opportunities

Richard Fichera

Yesterday Intel held a major press and analyst event in San Francisco to talk about their vision for the future of the data center, anchored on what has become in many eyes the virtuous cycle of future infrastructure demand – mobile devices and “the Internet of things” driving cloud resource consumption, which in turn spews out big data, which in turn spawns demand for storage and for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.

Content and Core Topics:

No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to continue to increase demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced their presentations with frequent reminders about who is the king of semiconductor manufacturing.

Read more

IBM Makes Major Commitment to Flash

Richard Fichera

Wisdom from the Past

In his 1956 dystopian sci-fi novel “The City and the Stars”, Arthur C. Clarke puts forth the fundamental design tenet for making eternal machines, “A machine shall have no moving parts”. To someone from the 1950s, current computers would appear to come close to that ideal – the CPUs and memory perform silent magic and can, with some ingenuity, be passively cooled, and invisible electronic signals carry information in and out of them to networks and … oops, to rotating disks, still with us after more than five decades. But, as we all know, salvation has appeared on the horizon in the form of solid-state storage, so-called flash storage (actually an idea of several decades’ standing as well, just not affordable until recently).

The initial substitution of flash for conventional storage yields immediate gratification in the form of lower power, possibly lower cost if used effectively, and higher performance, but the ripple-effect benefits of flash can be even more pervasive. However, implementing the major architectural changes that flash enables across the whole IT stack is a difficult conceptual challenge for users, and one that most vendors address only piecemeal. Enter IBM and its Flashahead initiative.

What is Happening?

On Friday, April 11, IBM announced a major initiative, to the tune of a spending commitment of $1B, to accelerate the use of flash technology by means of three major programs:

  • Fundamental flash R&D
  • New storage products built on flash-only memory technology

Read more

Better, Faster, Cheaper - Storage needs to be all three

Henry Baltazar

For the vast majority of Forrester customers whom I have not had the pleasure of meeting, my name is Henry Baltazar and I'm the new analyst covering Storage for the I&O team. I've covered the storage industry for over 15 years and spent the first 9 years of my career as a Technical Analyst at eWEEK/PCWeek Labs, where I was responsible for benchmarking storage systems, servers, and network operating systems.

During my lab days, I tested hundreds of different products and was fortunate to witness the development and maturation of a number of key innovations such as data deduplication, WAN optimization, and scale-out storage. In the technology space, "Better, Faster, Cheaper - Pick Two" used to be the design constraint for many innovators, and I've seen many technologies struggle to attain two, let alone all three, of these goals, especially in the first few product iterations. For example, while iSCSI was able to challenge Fibre Channel on the basis of being cheaper, many storage professionals - despite the technology being around for over a decade - are still not convinced that iSCSI is faster or better.

Looking at the technology landscape today, storage has not held up its end of the bargain relative to processors and networking. Storage needs to improve on all three vectors to either push innovation forward or avoid being viewed as a bottleneck in the infrastructure. At Forrester I will be looking at a number of areas of innovation that should drive enterprise storage capabilities to new heights, including:

Read more

HP Shows its Next Generation Blade and Converged Infrastructure – No Revolution, but Strong Evolution

Richard Fichera

With the next major spin of Intel server CPUs due later this year, HP customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost 7 years without any major changes to its basic architecture. IBM has already overhauled its blade offering, replacing the BladeCenter architecture with the new Pure Systems family, and Cisco’s offering is new enough that it should last for at least another three years without a major architectural refresh, leaving HP customers to wonder when HP would introduce its next blade enclosure, and whether it would be compatible with current products.

At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. The positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.

Tech Stuff – What Was Announced

Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:

  • Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in the raw bandwidth of the critical midplane across which all of the enclosure I/O travels (a quick check of that figure appears below). In addition to the increased-bandwidth midplane, the new enclosure incorporates location-aware sensors and also doubles the available storage bandwidth.
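A quick back-of-envelope check of that bandwidth figure, assuming raw midplane bandwidth scales linearly with the signaling rate:

$$\frac{14\ \text{GHz}}{10\ \text{GHz}} = 1.4, \quad \text{i.e., a } 40\% \text{ increase in raw midplane bandwidth.}$$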
Read more

EMC And VMware Carve Out Pivotal: Good News For I&O Pros And The Virtualization Market

Dave Bartoletti

So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.

I think this is a great move, and it makes all kinds of sense to protect VMware’s relationship with its core buyer, maintain focus on the data center, and lay the foundation for the vendor’s software-defined data center strategy. This move helps to end the cloud-washing that’s confused customers for years: There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage and network and apps, and the easy apps, by now, have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: They don’t naturally benefit from consolidation savings, they are highly performance sensitive, and they are much more complex.

Read more

Dell’s Turn For Infrastructure Announcements — Common Theme Emerging For 2012?

Richard Fichera

Last week it was Dell’s turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I’m still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result their messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell’s messaging are:

  • Faster provisioning with next-gen agentless intelligent controllers — Dell’s version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
  • Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
  • A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
Read more

HP Announces Gen8 Servers – Focus On Opex And Improving SLAs Sets A High Bar For Competitors

Richard Fichera

On Monday, February 13, HP took its next turn of the great wheel for servers with the announcement of its Gen8 family of servers. Interestingly, since the announcement came ahead of Intel’s official announcement of the supporting E5 server CPUs, HP had absolutely nothing to say about the CPUs or the performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.

With a little more granularity, the major components of the Gen8 server technology announcement included:

  • Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence to allow quicker and lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented by more powerful onboard management controllers and by pre-provisioning a lot of software on built-in flash memory, which is used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that will scan a code printed on the server, go back through the Insight Management Environment stack, and trigger the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one (a rough sketch of what such a scan-to-provision trigger might look like follows below).
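To make that scan-to-provision flow concrete, here is a minimal illustrative sketch in Python. The endpoint URL, payload fields, and token are hypothetical placeholders, not HP’s actual Insight Management Environment API; the point is simply that a scanned server label resolves to an identity, and the management stack maps that identity to a pre-staged provisioning script.

```python
# Hypothetical sketch of a scan-to-provision trigger.
# The URL, token, and payload fields are illustrative placeholders,
# not HP's actual Insight Management Environment API.
import requests

MANAGEMENT_URL = "https://insight.example.com/api/provision"  # hypothetical endpoint
API_TOKEN = "replace-with-real-token"                          # hypothetical credential


def provision_from_scan(scanned_code: str) -> str:
    """Send the code scanned from the server's label to the management
    endpoint, which looks up and launches the matching provisioning script."""
    response = requests.post(
        MANAGEMENT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"server_code": scanned_code},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("job_id", "unknown")


if __name__ == "__main__":
    job = provision_from_scan("SN-EXAMPLE-0001")
    print("Provisioning job accepted:", job)
```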
Read more

Dell World – New Image. New Company?

Richard Fichera

I just spent several days at Dell World, and came away with the impression of a company that is really trying to change its image. Old Dell was boxes, discounts, and a low-cost supply chain. New Dell is applications, solutions, cloud (now there’s a surprise!), and investments in software and integration. OK, good image, but what’s the reality? All in all, I think they are telling the truth about their intentions, and their investments continue to be aligned with those intentions.

As I wrote about a year ago, Dell seems to be intent on climbing up the enterprise food chain. Its investment in several major acquisitions, including Perot Systems for services and a string of advanced storage, network, and virtual infrastructure solution providers, has kept the momentum going, and the products have been following to market. At the same time I see solid signs of continued investment in underlying hardware, and their status as the #1 x86 server vendor in North America and #2 worldwide remains an indication of their ongoing success in their traditional niches. While Dell is not a household name in vertical solutions, they have competent offerings in health care, education, and trading, and several of the initiatives I mentioned last year are definitely further along and more mature, including continued refinement of their VIS offerings and deep integration of their much-improved DRAC systems management software into mainstream management consoles from VMware and Microsoft.

Read more

Oracle Open World Part 2 – Flash Mobs And The Quest For Performance

Richard Fichera

Well, actually I meant mobs of flash, but I couldn’t resist the word play. Although, come to think of it, flash mobs might be the right way to describe the density of flash memory system vendors here at Oracle Open World. Walking around the exhibits, it seems as if every other booth is occupied by someone selling flash memory systems to accelerate Oracle’s database, and all of them claiming to be: 1) faster than anything that Oracle, which already integrates flash into its systems, offers, and 2) faster and/or cheaper than the other flash vendor two booths down the aisle.

All joking aside, the proliferation of flash memory suppliers is pretty amazing, although a venue devoted to the world’s most popular database would be exactly where you might expect to find them. In one sense flash is nothing new – RAM disks, arrays of RAM configured to mimic a disk, have been around since the 1970s, but they were small and really expensive, and never got on a cost and volume curve that could drive them into a mass-market product. Flash, benefiting not only from the inherent economies of semiconductor technology but also from the drivers of consumer volumes, has made the transition to a cost point that makes it a reasonable alternative for some use cases, with database acceleration probably the most compelling. This explains why the flash vendors are gathered here in San Francisco this week to tout their wares – this is the richest collection of potential customers they will ever see in one place.

Read more

Oracle Open World Part 1 – The Circus Comes To Town And The Acts Are Great!

Richard Fichera

In the good old days, computer industry trade shows were bigger-than-life events – booths with barkers and actors, ice cream and espresso bars, in-booth games, magic acts, and surging crowds gawking at technology. In recent years, they have for the most part become sad shadows of their former selves. The great SHOWS are gone, replaced with button-down vertical and regional events where you are lucky to get a pen or a miniature candy bar for your troubles.

Enter Oracle OpenWorld. Mix 45,000 people, hundreds of exhibitors, and one of the world’s largest software and systems companies looking to make an impression, and you have the new generation of technology extravaganza. The scale is extravagant, taking up the entire Moscone Center complex (N, S, and W) along with a couple of hotel venues, closing off a block of a major San Francisco street for a week, and throwing a little evening party for 20 or 30 thousand people.

But mixed with the hoopla, which included wheel-of-fortune giveaways that had hundreds of people snaking around the already crowded exhibition floor in serpentine lines, mini golf and whack-a-mole games in the exhibit booths, along with the aforementioned espresso and ice cream stands, there was genuine content and the public face of some significant trends. So far, after 24 hours, some major messages come through loud and clear:

Read more