Microsoft Cleans Its Windows Licensing To Reveal The Path To BYOD

Duncan Jones

Today’s announcement by Microsoft of a per-user subscription licensing (USL) option for Windows is significant, and good, news for its customers. I’ve been telling Microsoft product managers for years to phase out their obsolete per-device licensing models, and this is a major step in that direction. It also marks a major change in Microsoft’s attitude to bring-your-own-device (BYOD) programs involving non-Windows devices such as Apple Macs and Android tablets.


Previously Microsoft tried to discourage customers from using virtual desktop infrastructure (VDI) on top of rival operating systems by applying complex licensing rules involving various TLAs such as RUR, VDA and CSL (which I’m not going to explain here, because they are, thankfully, no longer needed). The USL is far simpler - clear Windows licensing replacing translucent frosted glass, so to speak.

Read more

The UK Government’s Drive To Improve Public Sector Technology Procurement Is Fundamentally Flawed

Duncan Jones

Transformation Should Focus On Improving Outcomes, Not Merely On Increasing Competition

I’ve spoken with many IT procurement leaders in public sector organizations ranging from US county school districts to national governments. Most are prevented from applying best practices such as strategic software sourcing by their politicians’ ill-conceived edicts and directives, such as those included in this announcement by the UK’s Cabinet Office that optimistically claims “Government draws the line on bloated and wasteful IT contracts”. In related press interviews the relevant minister, Francis Maude, complained that “a tiny oligopoly dominates the marketplace” and talked about his intention to encourage use of open source alternatives to products such as Microsoft Office, to increase competition, and to divert more spend to small and medium-sized IT companies. The new edicts include bans on contracts worth over £100 million or lasting more than two years, and on automatic renewals. Mr. Maude claims these rules “will ensure the government gets the best technology at the best price”.

Mr. Maude and his team have a laudable and important goal, but their approach is misguided, in my opinion. Short-term contracts, indiscriminate competition, and avoiding sole-source category strategies will deliver neither the best technology nor the best price, because:

Read more

Tectonic Shift In The ARM Ecosystem — AMD Announces ARM Intentions

Richard Fichera

Earlier this week, in conjunction with ARM Holdings plc’s announcement of the upcoming Cortex A53 and A57, full 64-bit CPU implementations based on the ARM V8 specification, AMD also announced that it would be designing and selling SOC (System On a Chip) products based on this technology in 2014, roughly coinciding with availability of 64-bit parts from ARM and other partners.

This is a major event in the ARM ecosystem. AMD, while much smaller than Intel, is still a multi-billion-dollar enterprise, and for the second largest vendor of x86 chips to also throw its hat into the ARM ecosystem and potentially compete with its own mainstream server and desktop CPU business is an aggressive move on the part of AMD management that carries some risk and much potential advantage.

Reduced to its essentials, what AMD announced (and in some cases hinted at):

  • Intention to produce A53/A57 SOC modules for multiple server segments. There was no formal statement of intentions regarding tablet/mobile devices, but it doesn’t take a rocket scientist to figure out that AMD wants a piece of this market, and ARM is a way to participate.
  • The announcement is wider than just the SOC silicon. AMD also hinted at making a range of IP, including the fabric architecture from its SeaMicro acquisition, available in the form of “reusable IP blocks.” My interpretation is that it intends to make the fabric, reference architectures, and various SOCs available to its hardware system partners.
Read more

Microsoft Announces Windows Server 2012

Richard Fichera

The Event

On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.

So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”

Make no mistake, this is a really major restructuring of the OS, and a major step-function in capabilities aligned with several major strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.

What It Does

The reviewers guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. Also important to note is that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful to an enterprise IT environment.

  • New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
Read more

Cisco’s Turn At Bat, Introduces Next Generation Of UCS

Richard Fichera

Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, as well as some commitments for additional hardware and a major enhancement of its UCS Manager software immediately and later in 2012. Highlights include:

  • New servers – No surprise here, Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
  • Fabric improvements – Because Cisco has a relatively unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One is on the motherboard and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
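The per-server bandwidth figure above follows directly from the port counts. A quick back-of-the-envelope check, using the numbers as described in the announcement (the breakdown into ports and adapters is my reading of it):

```python
# Sanity check of the UCS per-server bandwidth arithmetic described above.
PORT_SPEED_GB = 20        # Gb/s per port on the new virtual NIC
PORTS_PER_ADAPTER = 2     # two 20 Gb ports per adapter
ADAPTERS_PER_SERVER = 2   # one on the motherboard plus one mezzanine card

per_adapter = PORT_SPEED_GB * PORTS_PER_ADAPTER      # 40 Gb/s per adapter
per_server = per_adapter * ADAPTERS_PER_SERVER       # 80 Gb/s per server

print(f"Per adapter: {per_adapter} Gb/s, per server: {per_server} Gb/s")
```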
Read more

Dell’s Turn For Infrastructure Announcements — Common Theme Emerging For 2012?

Richard Fichera

Last week it was Dell’s turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I’m still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result their messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell’s messaging are:

  • Faster provisioning with next-gen agentless intelligent controllers — Dell’s version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
  • Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
  • A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
Read more

HP Announces Gen8 Servers – Focus On Opex And Improving SLAs Sets A High Bar For Competitors

Richard Fichera

On Monday, February 13, HP announced its next turn of the great wheel for servers with the announcement of its Gen8 family of servers. Interestingly, since the announcement was ahead of Intel’s official announcement of the supporting E5 server CPUs, HP had absolutely nothing to say about the CPUs or performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly Opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.

With a little more granularity, the major components of the Gen8 server technology announcement included:

  • Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence to allow quicker and lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented by more powerful onboard management controllers and pre-provisioning a lot of software on built-in flash memory, which is used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that will scan a code printed on the server and go back through the Insight Management Environment stack and trigger the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one.
Read more

HP And Cisco Bury The Hatchet To Accommodate Customers – Everyone Wins?

Richard Fichera

In a surprising move, HP and Cisco announced that HP will be reselling a custom-developed Cisco Nexus switch, the “Cisco Nexus B22 Fabric Extender for HP,” commonly called a FEX in Cisco speak. What is surprising about this is that the FEX is a key component of Cisco’s Nexus switch technology as well as an integral component of Cisco’s UCS server product, the introduction of which has pitted the two companies in direct and bitter competition in the heart of HP’s previously sacrosanct server segment. Combined with HP’s increasing focus on networking, the companies have not been the best of buds for the past couple of years. Accordingly, this announcement really makes us sit up and take notice.

So what drove this seeming rapprochement? The coined word “coopetition” lacks the flavor of the German “Realpolitik,” but the essence is the same – both sides profit from accommodating a real demand from customers for Cisco network technology in HP BladeSystem servers. And like the best of deals, both sides walk away thinking that they got the best of the other. HP answers the demands of what is probably a sizable fraction of their customer base for better interoperability with Cisco Nexus-based networks, and in doing so expects to head off customer defections to Cisco UCS servers. Cisco gets both money (the B22 starts at around $10,000 per module and most HP BladeSystem customers who use it will probably buy at least two per enclosure, so making a rough guess at OEM pricing, Cisco is going to make as much as $8,000 to $10,000 per chassis from HP BladeSystems that use the B22) from the sale of the Cisco-branded modules as well as exposure of Cisco technology to HP customers, with the hope that they will consider UCS for future requirements.
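The per-chassis revenue estimate above can be sketched out explicitly. The $10,000 list price and the two-modules-per-enclosure figure come from the announcement; the implied OEM price per module is my inference, not a Cisco number:

```python
# Rough sketch of the B22 per-chassis revenue arithmetic discussed above.
LIST_PRICE = 10_000               # approximate B22 list price per module
MODULES_PER_CHASSIS = 2           # most BladeSystem customers will likely buy two
OEM_PRICE_RANGE = (4_000, 5_000)  # implied OEM price per module (my assumption)

low, high = (p * MODULES_PER_CHASSIS for p in OEM_PRICE_RANGE)
print(f"Estimated Cisco revenue per chassis: ${low:,} to ${high:,}")
```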

Read more

DCIM And The New Reality Of Infrastructure & Operations

Richard Fichera

I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old – the evolution of semiconductor technology, the increasingly elegant attempts to design systems and components that can be incrementally throttled, and the increasingly sophisticated construction of the actual data centers themselves, with increasing modularity and physical efficiency of power and cooling.

The new is the incredible momentum I see behind Data Center Infrastructure Management software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that understands tens of thousands of components, collects, filters and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen, etc.) and understands the relationships between components well enough to proactively raise alarms, model potential workload placement and make recommendations about prospective changes.

Of all the technologies reviewed in the document, DCIM offers one of the highest potentials for improving overall efficiency without sacrificing reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think that it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect to see major competitive action among the current suppliers, and there is a strong potential for additional consolidation.
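To make the “filter and analyze, then proactively raise alarms” idea concrete, here is a toy illustration of the kind of rule a DCIM tool evaluates: ingest sensor readings, filter out one-off glitches, and flag only sustained threshold breaches. All names and thresholds are invented for illustration; no real DCIM product exposes exactly this interface.

```python
# Toy DCIM-style alarm rule: flag sensors whose *sustained* readings exceed a
# limit, using the median to ignore single-sample glitches in the raw feed.
from statistics import median

INLET_TEMP_LIMIT_C = 27.0   # hypothetical inlet-temperature ceiling

def alarms(readings_by_sensor, limit=INLET_TEMP_LIMIT_C):
    """Return the IDs of sensors whose median reading exceeds the limit."""
    return [sensor for sensor, samples in readings_by_sensor.items()
            if samples and median(samples) > limit]

readings = {
    "rack12-inlet": [26.1, 26.4, 26.0],   # normal
    "rack17-inlet": [27.8, 28.2, 27.9],   # sustained breach -> alarm
    "rack20-inlet": [25.0, 41.0, 25.2],   # single glitch, median is fine
}
print(alarms(readings))   # only the rack17 sensor should trip
```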

SAP’s Acquisition Of Crossgate Fills A Significant Gap In Its ePurchasing Portfolio

Duncan Jones

Yesterday, SAP announced its intention to acquire business-to-business (B2B) integration provider Crossgate (http://www.sap.com/index.epx#/news-reader/?articleID=17515). This was no great surprise, as SAP was already a part-owner and worked closely with the company in product development, marketing, and sales activities. SAP will be able to offer a much better ePurchasing solution to customers once it has integrated Crossgate into its business, because supplier connectivity is currently a significant weakness. As I’ve written before (So Where Were The Best Run Businesses Then?), many SRM implementations rely on suppliers manually downloading POs from supplier portals or manually extracting them from emails and rekeying the data into their own systems. Not only does this cost the suppliers lots of money, it creates delays and errors that discourage users from adopting SRM.

SAP doesn’t intend to use Crossgate only for transactional processes; it also wants to develop support for wider collaboration between its customers and their supply chain partners, both upstream and downstream. That’s a sound objective, although not an easy one for SAP to achieve, because its core competence is in rigidly structured internal processes, and it hasn’t done a good job to date with unstructured processes, nor with ones that extend beyond the enterprise’s four walls. Buyers who think they can force suppliers to comply with their edicts, as they can with employees, soon end up wondering why no one is using their ePurchasing solution.

What does the acquisition mean for sourcing professionals who are wondering where Crossgate or its competitors fit into their application strategy? My take:

Read more