A few months ago, I blogged about testing quality@speed in the same way that F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It examines the capabilities of nine vendors to evaluate how they support Agile development and continuous delivery teams when it comes to continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend “the race” to see the leaders.
The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi, 2) test case design and automation tools like Grid-Tools Agile Designer, and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:
Recently we’ve had a chance to look again at two sharply conflicting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent OCP annual event in California.
From HP come its new Cloudline systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, once you strip away the announcement verbiage, to compete with white-box vendors such as Quanta, SuperMicro, and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data, and compute-intensive workloads, these systems will allow large installations to add capacity at costs 10% to 25% below the equivalent capacity in HP's standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP's adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.
Intel has made no secret of its development of the Xeon D, an SoC (system-on-chip) product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, and where emerging competition from ARM is more viable.
The new Xeon D-1500 is clear evidence that Intel “gets it” when it comes to platforms for hyperscale computing and other throughput-per-Watt- and density-sensitive workloads, both in the enterprise and in the cloud. The D-1500 breaks new ground in several areas:
It is the first Xeon SoC, combining 4 or 8 Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.
It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink should also deliver further gains in performance and performance per Watt across Intel's entire line of entry-level through midrange server parts this year.
Why is this significant?
With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with a power envelope of 20 W to 45 W, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins both for general-purpose servers and for embedded designs, notably storage systems.
We have been watching many variants on efficient packaging of servers for highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective, and all highly proprietary. With the advent of Facebook’s Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments of very standardized servers. But the IP for intelligently shared and managed power and cooling at a rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.
While Intel's own product nomenclature was not officially announced, Ericsson announced its “HDS 8000” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.
RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including:
· Asset discovery
· Switch setup and management
· Power and cooling management across the servers within the rack
Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture.” Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” for a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.
If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, that of clusters of servers, each with its own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica), and particularly Spark, an in-memory analytics technology that is well suited to real-time and streaming data, it may be necessary to reassess the supporting infrastructure in order to build something that can continue to support Hadoop while also catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
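A crude way to see why the access patterns matter for infrastructure: a workload that re-reads from disk for every analysis favors storage-dense nodes, while one that caches a shared dataset in memory favors memory-rich ones. The toy sketch below (plain Python, not Spark itself; the data and names are illustrative only) mimics the two patterns:

```python
# Toy sketch of the access-pattern shift described above: MapReduce-style
# jobs re-scan storage for every pass, while Spark-style processing reads
# the working set once and reuses it from memory.
from collections import Counter

scan_count = 0

def scan_storage():
    """Stand-in for an expensive disk/HDFS scan."""
    global scan_count
    scan_count += 1
    return [("alice", "/home"), ("bob", "/search"), ("alice", "/search")]

# MapReduce-style: each analysis triggers its own scan of storage.
users_on_disk = Counter(user for user, _ in scan_storage())
pages_on_disk = Counter(page for _, page in scan_storage())

# Spark-style: scan once, hold the result in RAM, reuse it for both analyses.
cached = scan_storage()
users_in_mem = Counter(user for user, _ in cached)
pages_in_mem = Counter(page for _, page in cached)

print(scan_count)  # three scans total: two for the re-read pattern, one cached
```

The results are identical either way; what changes is how often the storage layer is touched, which is exactly the property that pushes Spark-heavy shops toward different hardware ratios than classic Hadoop clusters.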
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage and networking as well as software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn the clock back slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and they reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:
In 2014, Forrester outlined a new approach to marketing that requires brands to harness customer context to deliver self-perpetuating cycles of real-time, two-way, insight-driven interactions. In 2015, we’ll see more marketers obsess over customers’ context. As more interaction data floods customer databases and marketing automation systems, customer-obsessed marketing leaders will strive to orchestrate brand experiences that drive unprecedented levels of engagement. For example, we predict that:
Digital marketing investments will drive brand experiences across the customer life cycle. By the end of 2015, spend on digital marketing will top $67 billion, growing to 27% of all ad spend; in fact, we believe it will surpass TV spend by 2016. But there’s more to the story than ad spend. We believe marketers will branch out of expected digital media buys to stimulate more insight-driven interactions with customers throughout the entire customer life cycle. Supported by new streams of situational customer data and powered by the ability to precisely target audiences with programmatic media buying, marketers will deliver highly engaging brand experiences rather than just feed the top of the funnel.
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to the new Xeon E5 v3 servers. With configurations ranging from two to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, allows each server to see its assigned disks as locally attached DASD, so no changes are needed in software that thinks it is accessing local storage. A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
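The disk-mapping resource above can be pictured as a simple assignment table between a shared pool of drives and the server nodes that consume them. The toy model below is purely illustrative and hypothetical; it is not Dell's firmware interface, just a sketch of the concept:

```python
# Toy model of FX-style disk mapping: drives live in shared modules, and a
# mapping table assigns each drive to a server node, which then treats it
# as locally attached storage. Hypothetical and purely illustrative.
DRIVES = [f"module{m}/drive{d}" for m in range(3) for d in range(16)]

assignment = {}  # drive -> owning node

def map_drives(node, count):
    """Assign `count` currently unassigned drives to `node`."""
    free = [d for d in DRIVES if d not in assignment]
    if count > len(free):
        raise ValueError("not enough unassigned drives")
    for d in free[:count]:
        assignment[d] = node
    return [d for d, owner in assignment.items() if owner == node]

node1_disks = map_drives("node1", 4)   # node1 "sees" 4 local drives
node2_disks = map_drives("node2", 8)   # node2 "sees" 8 different drives
```

The appeal of the real mechanism is the same as in the sketch: drive ownership is just a management-layer table, so capacity can be carved up per node without any change to the software running on those nodes.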
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but this is now a pretty unique package, and merits attention from infrastructure architects.
My colleagues Sophia Vargas, Michael Yamnitsky, and I have just published a new Quick Take report, "HP Announces Innovative Tools That Will Bridge Physical And Digital Worlds." Sophia and Michael have written about 3D printing for CIOs previously, and all three of us are interested in how computing and printing technologies can inform the BT Agenda of technology managers.
Fresh off of the announcement that HP will split into two publicly owned companies, one of those new entities -- HP Inc, the personal computing and printing business -- announced its vision for the future with two new products that help users cross the divide between physical and digital. The Multi-Jet Fusion 3D printer represents HP's long-awaited entry into 3D printing, with disruptively improved speed and quality compared to existing market entries. The Sprout desktop PC combines a 3D scanner with a touchscreen monitor, touchscreen display mat, and specialized software that allows users to scan real objects, then manipulate them easily in digital format.
In both cases, a video demonstration helps you to really grok what the product is about.
CNET posted a video tour of the Multi-Jet Fusion 3D printer on YouTube:
On October 20 at TechEd, Microsoft quietly slipped in what looks like a potentially game-changing announcement in the private/hybrid cloud world when it rolled out the Microsoft Cloud Platform System (CPS), an integrated hardware/software system that combines an Azure-consistent on-premises cloud with an optimized hardware stack from Dell.