Content In The Cloud Is The Next Frontier: IBM And Box Partner To Transform Work

Cheryl McKinnon

Today, IBM and Box announced a partnership and integration strategy to “transform work in the cloud.” This is an interesting move that further validates Forrester’s view that the ECM market is transforming — largely due to new, often customer-activated, use cases. We also see that the current horizontal collaboration market is shifting to better target specific work output, as opposed to more general-purpose knowledge-dissemination use cases.

What does this partnership mean for IBM, Box, and their partners and customers?

For Box, the partnership brings access to the extensive IBM ecosystem: Global Services, developer communities via IBM’s Bluemix platform, and the IBM-Apple MobileFirst relationship. It also brings engineering acceleration to fill gaps in Box’s content collaboration offering in areas such as capture, case management, governance, and analytics, including Watson.

Read more

OpenStack Is Moving To A New Stage

Charlie Dai

Unfortunately, visa issues prevented me from attending the OpenStack Summit in Vancouver last week — despite submitting my application to the Canadian embassy in Beijing 40 days in advance! However, after following the extensive online discussion of the event and talking it over with vendors and peers, I would say that OpenStack is moving into a new phase, for two reasons:

  • The rise of containers is laying the foundation for the next level of enterprise readiness. Docker’s container technology has become a major factor in the evolution of OpenStack components. Docker drivers have been implemented for the key Nova and Heat components, extending their computing and orchestration capabilities, respectively. The Magnum project, which aims to deliver container services, lets OpenStack create clusters with Google’s Kubernetes (k8s) and Docker’s Swarm. The Murano project, contributed by Mirantis to provide application catalog services, also integrates with k8s. (A minimal container-lifecycle sketch follows.)
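To ground this in something concrete, here is a minimal sketch of the kind of container-lifecycle calls a Docker driver for Nova ultimately issues against the Docker daemon. It assumes Python with the docker SDK (docker-py) and a running local Docker daemon; the image and container names are illustrative, not part of any OpenStack API.

```python
# Minimal sketch: the container lifecycle a Nova Docker driver boils down to.
# Assumes the `docker` Python SDK (pip install docker) and a running daemon.
import docker

client = docker.from_env()  # connect via DOCKER_HOST or the local socket

# "Booting" an instance with a container driver amounts to pulling an
# image and starting a container from it.
container = client.containers.run(
    image="busybox",              # illustrative image
    command="sleep 3600",         # keep the container alive for the demo
    name="demo-nova-instance",    # hypothetical instance name
    detach=True,
)
print(container.name, container.status)

# "Terminating" the instance: stop and remove the container.
container.stop()
container.remove()
```

The point of projects like Magnum and Murano is to lift exactly this kind of low-level plumbing into OpenStack-native cluster and application catalog services.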
Read more

Cloud Foundry Is Evolving Toward Agility Via Container-Empowered Micro-services

Charlie Dai

The Cloud Foundry Foundation held its 2015 Summit recently in Santa Clara, attracting 1,500 application developers, operations experts, technical and business managers, service providers, and community contributors. After listening to the presentations and discussions, I believe that Cloud Foundry — one of the major platform-as-a-service (PaaS) offerings — is making a strategic shift from its traditional focus on application staging and execution to a new emphasis on micro-service composition. This is a key factor that will help companies gain the agility they need for both technology management and business transformation. Here’s what I learned:

  • Containers are critical for micro-service-based agility. Container-based micro-services are gaining momentum: IBM presented its latest Bluemix UI micro-services architecture, and SAP introduced its latest practices with Docker. Containers can encapsulate fine-grained business logic as micro-services for dynamic composition, which greatly simplifies the development and deployment of applications and helps firms achieve the continuous delivery needed to meet dynamic business requirements. This is why Forrester believes that the combination of containers and micro-services will prove irresistible for developers. (A minimal micro-service sketch follows.)
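For a feel of what “fine-grained business logic as a micro-service” means in practice, here is a minimal sketch of a single-responsibility service, assuming Python and Flask; the route, pricing logic, and port are hypothetical illustrations, and on a container-based platform this service would be packaged as one small image.

```python
# Minimal sketch of a single-responsibility micro-service, assuming Flask
# (pip install flask). The route, pricing logic, and port are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/price")
def price():
    """One narrowly scoped piece of business logic: quote a price."""
    quantity = int(request.args.get("quantity", 1))
    unit_price = 9.99  # hypothetical flat unit price
    return jsonify(quantity=quantity, total=round(quantity * unit_price, 2))

if __name__ == "__main__":
    # One service, one port: the unit a container encapsulates and a
    # platform like Cloud Foundry composes with others.
    app.run(host="0.0.0.0", port=8080)
```

Because each such service is small and independently deployable, the platform can compose, scale, and update them dynamically, which is where the agility claim comes from.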
Read more

The Forrester Wave™ Evaluation Of Functional Test Automation (FTA) Is Out And It's All About Going Beyond GUI Testing

Diego Lo Giudice

A few months ago, I blogged about testing quality@speed in the same way that F1 racing teams do to win races and fans. Last week, I published my F(TA)1 Forrester Wave! It evaluates how well nine vendors support Agile development and continuous delivery teams when it comes to continuous testing: Borland, CA Technologies, HP, IBM, Microsoft, Parasoft, SmartBear, TestPlant, and Tricentis. However, only Forrester clients can attend “the race” to see the leaders.

The market overview section of our evaluation complements the analysis in the underlying model by looking at other providers that either augment FTA capabilities, play in a different market segment, or did not meet one of the criteria for inclusion in the Forrester Wave. These include: 1) open source tools like Selenium and Sahi, 2) test case design and automation tools like Grid-Tools Agile Designer, and 3) other tools, such as Original Software, which mostly focuses on graphical user interface (GUI) and packaged apps testing, and Qualitia and Applitools, which focus on GUI and visualization testing.
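To make the “beyond GUI” distinction concrete, here is a minimal sketch of an API-level functional test, assuming Python with pytest and the requests library; the endpoint, payload, and expected responses are hypothetical.

```python
# Minimal sketch of API-level (beyond-GUI) functional testing, assuming
# pytest and requests (pip install pytest requests). The endpoint, fields,
# and expected status codes are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_order_returns_id():
    # Exercise the service contract directly: no browser, no GUI locators.
    resp = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "quantity": 2},
        timeout=5,
    )
    assert resp.status_code == 201
    body = resp.json()
    assert "order_id" in body
    assert body["quantity"] == 2

def test_rejects_invalid_quantity():
    # Negative paths are cheap to cover at the API layer.
    resp = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "quantity": -1},
        timeout=5,
    )
    assert resp.status_code == 400
```

Tests like these bind to the service contract rather than to screen layout, so they tend to survive UI churn far better than GUI scripts do.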

We deliberately weighted the Forrester Wave criteria more heavily towards “beyond GUI” and API testing approaches. Why? Because:

Read more

Facebook and HP Show Different Visions for Web-scale

Richard Fichera

Recently we’ve had a chance to look again at two sharply contrasting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent OCP annual event in California.

From HP come its new CloudLine systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, once you strip away the announcement verbiage, to compete with white-box vendors such as Quanta, SuperMicro, and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data, and compute-intensive workloads, these systems will allow large installations to add capacity at costs 10% to 25% below the equivalent capacity in HP’s standard ProLiant line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP’s adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.

Read more

Rack-Scale Architectures Get Real with Intel RSA Introduction

Richard Fichera

What Is It?

We have been watching the many variants on efficient packaging of servers for highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective and all of them highly proprietary. With the advent of Facebook’s Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments. But the IP for intelligently shared and managed power and cooling at the rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.

While not officially announcing Intel’s product nomenclature, Ericsson announced its “HDS 8000,” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.

RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including the following (a discovery sketch appears after the list):

  • Asset discovery
  • Switch setup and management
  • Power and cooling management across the servers within the rack
  • Server node management
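For a flavor of what such rack-level firmware exposes, here is a minimal sketch that polls a Redfish-style REST endpoint for asset discovery, assuming Python and the requests library; the endpoint, credentials, and response shape are illustrative assumptions, not Intel’s published RSA interface.

```python
# Minimal sketch of rack-level asset discovery against a Redfish-style
# REST API, assuming the requests library (pip install requests). The
# endpoint, credentials, and response shape are illustrative assumptions.
import requests

RACK_MANAGER = "https://rack-mgr.example.com"  # hypothetical rack manager
AUTH = ("admin", "password")                   # hypothetical credentials

def discover_nodes():
    """List the compute nodes the rack manager has discovered."""
    resp = requests.get(
        f"{RACK_MANAGER}/redfish/v1/Systems",
        auth=AUTH,
        verify=False,  # lab sketch only; validate certificates in production
        timeout=10,
    )
    resp.raise_for_status()
    # Redfish-style collections list their members as @odata.id references.
    return [m["@odata.id"] for m in resp.json().get("Members", [])]

if __name__ == "__main__":
    for node_url in discover_nodes():
        print("discovered node:", node_url)
```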

Read more

Rethinking Analytics Infrastructure

Richard Fichera

Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture.” Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” for a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.

If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, clusters of servers each with their own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica), and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, you may need to reassess the supporting infrastructure to build something that can continue to support Hadoop while catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent HP demonstration of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
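To see why Spark stresses infrastructure differently, consider a minimal PySpark sketch: the working set is loaded once, cached in cluster memory, and then re-scanned repeatedly, rather than streamed once off local disks the way a classic MapReduce pass is. The paths and field positions below are illustrative.

```python
# Minimal PySpark sketch of the in-memory access pattern that distinguishes
# Spark from disk-bound MapReduce. Paths and field positions are illustrative.
from pyspark import SparkContext

sc = SparkContext(appName="access-pattern-sketch")

# Load once from HDFS, then pin the parsed records in cluster memory.
events = (
    sc.textFile("hdfs:///data/events/*.csv")  # hypothetical dataset
      .map(lambda line: line.split(","))
      .cache()                                # later passes hit RAM, not disk
)

# Repeated passes over the cached data are cheap; this iterative,
# memory-resident pattern is what the infrastructure must now serve.
total = events.count()
errors = events.filter(lambda rec: rec[2] == "ERROR").count()  # field 2: status
print("%d/%d error events" % (errors, total))

sc.stop()
```

A memory-heavy workload like this favors different server configurations (more RAM, faster fabrics) than the disk-dense nodes a pure Hadoop cluster calls for, which is exactly the rethink the HP reference architecture represents.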

Read more

IBM Amps Up the Mainframe and Aggressively Targets Mobile Workloads with New z13 Announcement

Richard Fichera

On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years: more capacity (a big boost this time around – triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price performance, and a very sexy cabinet design. (I know the enclosure isn’t really a major evaluation factor, but I think IBM’s industrial design for its Flex System, Power, and z System enclosures is absolutely gorgeous and belongs in MoMA.) IBM indeed delivered against these expectations, plus more. In this case, a lot more.

In addition to the required upgrades that fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational rather than a pipe dream is an underlying pattern common to many of these transactions: at some point, they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that runs before the call for mainframe data, IBM hopes to draw these workloads onto the mainframe, where access to the resident data will be more efficient, benefiting from inherently lower latency as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from the distributed x86 Linux systems it no longer manufactures.

Read more

Mainframe Futures – Reading the Tea Leaves for Future Investments

Richard Fichera

I’ve been getting a steady trickle of inquiries this year about the future of the mainframe from our enterprise clients. Most are more or less of the form: “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interests of transparency, I’ve laid out my reasoning below so that you can see whether it applies to your own situation.

How Big Is the Mainframe LOB?

It's hard to get exact figures for the mainframe’s contribution to IBM’s STG (Systems & Technology Group) revenues, but the data IBM has shared shows that mainframe revenues seem to have recovered from the declines of previous quarters and have at worst flattened. Because the business is inherently somewhat cyclical, I would expect the next cycle of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show positive revenue growth next year.

Read more

Bare Metal Clouds – Performance and Isolation Drive Consideration

Richard Fichera

I’ve been talking to a number of users and providers of bare-metal cloud services, and I’m finding the common threads among the high-profile use cases both interesting individually and starting to connect some dots. These providers let customers provision and use dedicated physical servers with semantics very similar to the common VM IaaS cloud: servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications. The differentiation for customers is in the behavior of the resulting images:

  • Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors (a simple jitter probe is sketched after this list).
  • Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
  • Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare metal cloud vendors can show some impressive comparative benchmarks to prospective customers.
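As a rough way to observe the noisy-neighbor effect described in the first bullet, here is a minimal latency-jitter probe you could run on both a VM instance and a bare-metal instance and compare; the workload size and sample count are arbitrary illustrative choices.

```python
# Minimal latency-jitter probe: run on a VM and on a bare-metal instance,
# then compare the spread. Workload size and sample count are arbitrary.
import statistics
import time

def timed_workload(n: int = 50_000) -> float:
    """Time one burst of pure-CPU work, in milliseconds."""
    start = time.perf_counter()
    sum(i * i for i in range(n))
    return (time.perf_counter() - start) * 1000.0

samples = [timed_workload() for _ in range(200)]

# Dedicated hardware tends to show a tight spread between the median and
# the worst case; contention from neighbors shows up as a long tail.
print("median: %.3f ms" % statistics.median(samples))
print("worst:  %.3f ms" % max(samples))
print("stdev:  %.3f ms" % statistics.stdev(samples))
```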
Read more