Linux vs Unix Hot Patching – Have We Reached The Tipping Point?

Richard Fichera

The Background – Linux as a Fast Follower and the Need for Hot Patching

No doubt about it, Linux has made impressive strides in the last 15 years, gaining many features previously associated with high-end proprietary Unix as it made the transition from small-system plaything to core enterprise processing resource and the engine of the extended web as we know it. Along the way it acquired reliable and highly scalable schedulers, a multiplicity of efficient and scalable file systems, advanced RAS features, its own embedded virtualization, and efficient thread support.

As Linux grew, so did the supporting hardware, particularly the capabilities of the ubiquitous x86 CPU on which the vast majority of Linux runs today. The debate has always been about how close Linux could get to “the real OS”, the core proprietary Unix variants that for two decades defined the limits of non-mainframe scalability and reliability. But “the times they are a-changin’”, and the new narrative may be “when will Unix catch up to Linux on critical RAS features like hot patching?”

Hot patching, the ability to apply updates to the OS kernel while it is running, is a long-sought-after but elusive feature of a production OS. Long sought after because both developers and operations teams recognize that bringing down an OS instance that is doing critical high-volume work is at best disruptive and at worst a logistical nightmare, and elusive because it is incredibly difficult to do correctly. There have been several failed attempts, and several implementations that “almost worked” but were so fraught with exceptions that they were not really useful in production.[i]
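
For readers who want to see what hot patching looks like in practice, mainstream Linux kernels (4.0 and later) expose it through the upstream livepatch facility, the common base that Red Hat’s kpatch and SUSE’s kGraft efforts converged on. The fragment below is a minimal sketch modeled on the kernel’s own livepatch sample; it assumes a kernel built with CONFIG_LIVEPATCH, the patched function and its replacement are purely illustrative, and the exact registration calls vary somewhat by kernel version.

/*
 * Minimal sketch of a Linux livepatch module, modeled on the kernel's
 * samples/livepatch/livepatch-sample.c. Assumes CONFIG_LIVEPATCH.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Replacement body for the function being patched (illustrative only). */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this kernel has been live patched");
	return 0;
}

/* Map the old symbol name to its new implementation. */
static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

/* Leaving .name unset means the target function lives in vmlinux itself. */
static struct klp_object objs[] = {
	{
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	int ret;

	/* Register the patch, then redirect running code to the new function. */
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}
	return 0;
}

static void livepatch_exit(void)
{
	WARN_ON(klp_disable_patch(&patch));
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");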

Read more

Architect Your Predictive Analytics Capability To Unleash The Power Of Digital Business

Charlie Dai

Predictive analytics has become the key to helping businesses — especially those in the highly dynamic Chinese market — create differentiated, individualized customer experiences and make better decisions. Enterprise architecture professionals must take a customer-oriented approach to developing their predictive analytics strategy and architecture.

I’ve recently published two reports focusing on how to architect predictive analytics capability. These reports analyze the trends around predictive analytics adoption in China and discuss four key areas that EA pros must focus on to accelerate digital transformation. They also show EA pros how to unleash the power of digital business by analyzing the predictive analytics practices of visionary Chinese firms. Some of the key takeaways:

  • Predictive analytics must cover the full customer life cycle and leverage business insights. Organizations require predictive insights into customer behaviors and business operations. You must implement predictive analytics solutions and deliver value to customers throughout their life cycle to differentiate your customer experience and sustain business growth. You should also recognize the importance of business stakeholders and define effective mechanisms for translating their business knowledge into predictive algorithm inputs, optimizing predictive models faster and generating deeper customer insights.
Read more

HPE Transforms Infrastructure Management with Synergy Composable Infrastructure Announcement

Richard Fichera

Background

I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”, i.e., integrated server, storage, and network infrastructure that allows its users to “compose”, that is to say configure, a physical server out of a collection of pooled server nodes, storage devices, and shared network connections.[i]

The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM, and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently, the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, they were early efforts and lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
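
To make the idea of “composing” a server concrete, here is a deliberately simplified sketch, not any vendor’s actual API or data model, of the bookkeeping involved: a template describes what a server should look like, and composition binds that template to free compute, storage, and network resources drawn from shared pools.

/*
 * Deliberately simplified illustration of composable infrastructure.
 * Not any vendor's actual API or data model; it just shows the idea of
 * binding a server "template" to free resources drawn from shared pools.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct compute_node { int id; int cores; int memory_gb; bool in_use; };
struct storage_vol  { int id; int size_gb; bool in_use; };
struct net_port     { int id; const char *mac; bool in_use; };

/* The template: what a composed server should look like. */
struct server_profile { int min_cores; int min_memory_gb; int boot_volume_gb; };

/* A composed server is just a set of bindings to pooled resources. */
struct composed_server {
	struct compute_node *node;
	struct storage_vol *boot;
	struct net_port *nic;
};

/* Claim the first free resource of each type that satisfies the template. */
static bool compose(const struct server_profile *p,
		    struct compute_node *nodes, size_t n_nodes,
		    struct storage_vol *vols, size_t n_vols,
		    struct net_port *ports, size_t n_ports,
		    struct composed_server *out)
{
	size_t i;

	out->node = NULL;
	out->boot = NULL;
	out->nic = NULL;

	for (i = 0; i < n_nodes && !out->node; i++)
		if (!nodes[i].in_use && nodes[i].cores >= p->min_cores &&
		    nodes[i].memory_gb >= p->min_memory_gb)
			out->node = &nodes[i];
	for (i = 0; i < n_vols && !out->boot; i++)
		if (!vols[i].in_use && vols[i].size_gb >= p->boot_volume_gb)
			out->boot = &vols[i];
	for (i = 0; i < n_ports && !out->nic; i++)
		if (!ports[i].in_use)
			out->nic = &ports[i];

	if (!out->node || !out->boot || !out->nic)
		return false;		/* not enough free capacity in the pools */

	out->node->in_use = true;	/* bind: the resources leave the pools */
	out->boot->in_use = true;
	out->nic->in_use = true;
	return true;
}

int main(void)
{
	struct compute_node nodes[] = { {1, 16, 256, false}, {2, 32, 512, false} };
	struct storage_vol vols[] = { {1, 500, false}, {2, 1000, false} };
	struct net_port ports[] = { {1, "02:00:00:00:00:01", false} };
	struct server_profile db_template = { 24, 384, 800 };
	struct composed_server server;

	if (compose(&db_template, nodes, 2, vols, 2, ports, 1, &server))
		printf("composed node %d, volume %d, port %d\n",
		       server.node->id, server.boot->id, server.nic->id);
	return 0;
}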

What’s New?

This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a major step function in capabilities for core enterprise infrastructure, delivering cloud-like semantics to physical infrastructure. Among its key capabilities:

Read more

Oracle Delivers “Software on Silicon” – Doubles Down on Optimizing its Own Software with Latest Hardware

Richard Fichera

What’s new?

Oracle’s latest iteration of its SPARC processor technology, the new M7 CPU, is at first blush an excellent implementation of SPARC, with 32 cores of 8 threads each, implemented in an aggressive 20 nm process and promising a well-deserved performance bump for legacy SPARC/Solaris users. But the impact of the M7 goes beyond simple comparisons to previous generations of SPARC and competing products such as Intel’s Xeon E7 and IBM’s POWER8. The M7 is Oracle’s first tangible delivery on its “Software on Silicon” promise, with significant acceleration of key software operations enabled in the M7 hardware.[i]

Oracle took aim at selected performance bottlenecks and security exposures, some specific to Oracle software, and some generic in nature but of great importance. Among the major enhancements in the M7 are:[ii]

  • Cryptography – While many CPUs now include some form of acceleration for cryptography, Oracle claims the M7 offers a wider variety of algorithms and deeper support, resulting in performance across a range of benchmarks that is almost indistinguishable whether SSL and other cryptographic protocols are enabled or not. Oracle asserts that the M7 is the first CPU architecture that does not force users to choose between secure and fast, but allows both simultaneously; a brief illustration of how applications typically inherit this kind of acceleration follows below.
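
Part of what makes on-chip cryptography effectively free to applications is that it is typically picked up underneath standard libraries rather than through new code. The sketch below uses the generic OpenSSL EVP interface rather than anything Oracle-specific; the same AES-GCM encryption path runs in software on one machine and on crypto hardware (SPARC crypto units, x86 AES-NI) on another, with the library selecting an accelerated implementation at run time. Error handling is omitted for brevity.

/*
 * Illustrative only: the same OpenSSL EVP code path is used whether or not
 * the underlying CPU provides crypto instructions; the library selects an
 * accelerated implementation at run time. Build with -lcrypto.
 */
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
	unsigned char key[32], iv[12], tag[16];
	unsigned char msg[] = "transaction payload";
	unsigned char out[sizeof(msg) + 16];
	int len = 0, outlen = 0;

	/* Random key and IV for the example. */
	RAND_bytes(key, sizeof(key));
	RAND_bytes(iv, sizeof(iv));

	EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
	EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
	EVP_EncryptUpdate(ctx, out, &len, msg, (int)sizeof(msg));
	outlen = len;
	EVP_EncryptFinal_ex(ctx, out + len, &len);
	outlen += len;
	EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);
	EVP_CIPHER_CTX_free(ctx);

	printf("encrypted %d bytes\n", outlen);
	return 0;
}
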
Read more

Oracle Gifts Itself Another Data Platform

Fatemeh Khatibloo

Industry analysts know that major M&A deals, product announcements, and organizational changes can come at any time. But it still surprises us a little when a major player like Oracle announces a significant acquisition just days before Christmas. At any rate, Santa has come early for both Mr. Ellison and the Datalogix team this year.

We've just published a Quick Take on our perceptions of the deal, which holds a lot of promise. Our biggest concern? Realizing that promise requires some serious integration work, and so far, Oracle hasn't proven that it's especially capable of integrating the stack it's acquired for the Marketing Cloud offering. We also worry that Oracle's Data Cloud -- where Datalogix will sit -- is heading directly for a major privacy warzone. Whether Oracle is ready for that battle remains to be seen.

But the bigger picture is this: the Datalogix and BlueKai acquisitions, along with many others of the past year -- including Conversant by Epsilon, LiveRamp by Acxiom, and Adometry by Google -- are evidence of a fast-consolidating marketing and advertising technology landscape. 2015 will doubtless bring more M&A activity in this space, with a likely run on smaller technology and data vendors that have mostly been flying under the radar. What this race for the ultimate "marketing cloud" will mean to CI pros remains to be seen, but you should certainly anticipate plenty of shakeups in your vendor relationships over the next 18 months.

Read more

IBM Sheds Yet Another Hardware Business - Pays To Get Rid Of Semiconductor Fabrication

Richard Fichera

While the timing of the event comes as a surprise, the fact that IBM has decided to unload its technically excellent but unprofitable semiconductor manufacturing operation does not, nor does its choice of GlobalFoundries, with whom it has had a longstanding relationship.
 
Read more

Now Some IBM Customers Might Be Able To Take Advantage Of Cheaper Third-Party Software Support

Mark Bartrick

Chief information officers (CIOs) are dedicating more of their budgets to what we call “systems of engagement” (technologies that help win, serve, and retain customers) rather than “systems of record” (back-office technologies). According to research here at Forrester, new business investment in the former will be eight times that of the latter in 2014. All of which means CIOs are re-examining their back-office legacy spend to see what savings can be made to fund new front-office innovations.

But releasing back-office spend is not easy. For many companies, most of the ‘easy’ savings have already been achieved, so squeezing out even more has become a tougher game. For example, you can only try to renegotiate legacy support costs a few times before the vendors say ‘enough is enough’. While such comments may have discouraged negotiators in the past, the advent of third-party software support in the last five years has, for Oracle and SAP users at least, kicked the cost-savings door back open and given fresh impetus to procurement people seeking to reduce software support costs.

I am sure that many of you have read some of my previous comments on the emergence of the third-party software support market over the past few years. Companies like Rimini Street, Spinnaker Support, and Alui have saved some Oracle and SAP clients a lot of money. For companies that have moved to third-party support, or that have simply used the threat of moving in order to drive the vendor’s costs lower, the savings have freed up cash to spend on new innovations and front-office, customer-engaging initiatives.

Read more

Oracle's Q3 falls short of market expectations (again). So expect to get a call from your Oracle sales rep sometime soon.

Mark Bartrick

Oracle has missed revenue expectations for three quarters in a row now as its Q3 results fell short of market expectations. The company blamed currency fluctuations and the strength of the US dollar for this latest miss.

The company reported third-quarter earnings of $2.6 billion on revenue of $9.3 billion. Wall Street had expected Oracle to report fiscal third-quarter revenue of $9.36 billion.

To be fair, Oracle did deliver some good data points. For instance, hardware system product revenue for the third quarter was $725 million, up 8 percent from a year ago. Software license and support revenue was up 5 percent to $4.6 billion, and new software licenses and cloud subscriptions were up 4 percent from a year ago to $2.4 billion. Oracle says its outlook for the fourth quarter is solid; Safra Catz, Oracle co-president, said revenue growth in the fourth quarter will be between 3 percent and 7 percent.

Oracle won’t want to miss quarterly earnings expectations again and will expect its sales teams to outperform in the next couple of months. All of which bodes well for an exciting run-up to Oracle’s fiscal year end on May 31st.

Here are three quick tips to bear in mind as you prepare to negotiate with Oracle:

1.        If you have an Oracle contract up for negotiation this quarter, then you should leverage the pressure Oracle sales are under to hit market expectations by squeezing an extra point or two of discount in return for a signed contract.

2.        If you have a support renewal coming up, remember that you now have a choice: third parties like Rimini Street, Spinnaker Support, and Alui can give you real leverage at the negotiating table.

Read more

The Recent Ruling In Oracle vs Rimini Street Has Significant Implications For The Wider Outsourcing Industry

Duncan Jones

I've just published a Quick Take report that explains why the Nevada District Court’s recent decision on some of the issues in the four-year-old Oracle versus Rimini Street case has significant implications for sourcing professionals — and, indeed, the entire technology services industry — beyond its impact on the growing third-party support (3SP) market.

http://www.forrester.com/Quick+Take+The+Rimini+Street+Ruling+Has+Serious+Implications+For+Oracle+Customers/fulltext/-/E-RES115572

Read more

Intel Bumps up High-End Servers with New Xeon E7 V2 - A Long Awaited and Timely Leap

Richard Fichera

The long drought at the high end

It’s been a long wait, about four years if memory serves me well, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.

So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its successor in the Intel “Tick-Tock” cycle of new process, then new architecture.

What was announced?

The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:

  • Up to 15 cores per socket
  • 24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
  • Approximately 4X I/O bandwidth improvement
  • New RAS features, including low-level memory controller modes optimized for either high-availability or performance mode (BIOS option), enhanced error recovery and soft-error reporting
Read more