Is it time to shift from Dev-to-Cloud to Enterprise-to-Cloud?

James Staten
The majority of large enterprises are using cloud platforms now, but few have shifted this use from their DevOps teams over to central IT. Most will make that shift in the next one to two years. When you do, get your networking team involved quickly, because most of the Dev-to-Cloud connections your developers have put in place may not meet your corporate security or WAN performance standards. This is a key finding in the latest report from Andre Kindness and me, now available to clients at Forrester.com.
 
As you no doubt know by now from reading our research, cloud use is not an isolated activity. Most applications built in the cloud are native hybrid, meaning they connect to something outside the cloud. Most commonly these applications reach back into your corporate data center to talk to systems of record, such as databases, CRM or ERP systems, or other key corporate resources. The connections these developers most often establish are public links secured with SSL or VPN constructs. These are easy for developers to set up but often lack the QoS or security controls your networking teams have established for other corporate WAN links. So if you want consistency in your WAN policies, it's time to get the networking experts involved.
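
To make the pattern concrete, here is a minimal sketch (not from the report) of the kind of link developers typically stand up on their own: a cloud-hosted service opening a TLS-secured connection back to a hypothetical on-premises system of record. The hostname, port and certificate bundle are illustrative placeholders; note that nothing in the code carries the QoS or inspection policies of a managed WAN circuit.

```python
# Minimal sketch: a cloud-hosted service reaching back to an on-premises
# system of record over a TLS-secured public endpoint. Hostname, port and
# certificate bundle are hypothetical placeholders.
import json
import socket
import ssl

ON_PREM_HOST = "records.example-corp.internal"  # hypothetical DNS name reachable from the cloud app
ON_PREM_PORT = 8443                             # hypothetical TLS endpoint for the system of record
CA_BUNDLE = "/etc/pki/corp-ca-bundle.pem"       # hypothetical corporate CA certificates


def fetch_customer_record(customer_id: str) -> dict:
    """Open a TLS connection back to the corporate data center and request one record."""
    context = ssl.create_default_context(cafile=CA_BUNDLE)
    # Nothing here enforces the QoS, routing or inspection policies that the
    # networking team applies to managed WAN links; it is just an encrypted tunnel.
    with socket.create_connection((ON_PREM_HOST, ON_PREM_PORT), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=ON_PREM_HOST) as tls_sock:
            tls_sock.sendall(json.dumps({"op": "get", "id": customer_id}).encode() + b"\n")
            return json.loads(tls_sock.recv(65536).decode())


# Example usage from the cloud-side application:
# record = fetch_customer_record("C-10042")
```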
 
Read more

Rethinking Analytics Infrastructure

Richard Fichera

Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” to describe a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.

If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, that of clusters of servers each with their own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica) and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, it may be necessary to begin reassessing the supporting infrastructure in order to build something that can continue to support Hadoop while also catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
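
As a rough illustration of why those access patterns differ, here is a minimal PySpark sketch (written against the current PySpark API, with a hypothetical dataset path). The second pass over the data is served from cluster memory rather than re-read from disk, which is what shifts the hardware demand away from the disk-heavy nodes that suit classic MapReduce and toward memory- and network-rich configurations.

```python
# Minimal PySpark sketch: the same event log processed twice. Caching the
# DataFrame in memory is what makes Spark fast for iterative and near-real-time
# work. The dataset path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("access-pattern-demo").getOrCreate()

events = spark.read.json("hdfs:///data/clickstream/2015/01/*.json")  # hypothetical dataset
events.cache()  # pin the working set in cluster memory

# First pass: a simple aggregation, comparable to a MapReduce-style batch job.
daily_counts = events.groupBy("event_date").count()

# Second pass over the same cached data: served from memory, not re-read from disk.
top_pages = (events.groupBy("page")
                   .agg(F.count("*").alias("hits"))
                   .orderBy(F.desc("hits"))
                   .limit(10))

daily_counts.show()
top_pages.show()
```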

Read more

IBM Amps up the Mainframe and Aggressively Targets Mobile Workloads with new z13 Announcement

Richard Fichera

On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years: more capacity (a big boost this time around, with triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price performance, and a very sexy cabinet design (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power and the z System is absolutely gorgeous and should be in MoMA*). IBM indeed delivered against these expectations, plus more. In this case, a lot more.

In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational as opposed to a pipe-dream for IBM is an underlying pattern common to many of these transactions – at some point they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that occurs before the call for the mainframe data, IBM hopes to foster the migration of these workloads to the mainframe where its access to the resident data will be more efficient, benefitting from inherently lower latency for data access as well as from access to embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from distributed x86 Linux systems that they no longer manufacture.

Read more

Which Public Cloud Platform Is Right For You – Round Two

James Staten
Determining which public cloud platforms your company should standardize on is not a matter of market share, size or growth rate. What matters most is fit for purpose: yours. And that’s exactly what our latest Forrester Wave of this market helps you determine.
 
And the key questions to ask have nothing to do with the vendors in question. They are all about you: your team’s skill sets, needs and requirements. Will you mostly be building lightweight web and mobile applications from common web services you’d rather not recreate yourself? What skills do your developers bring to the problem: deep knowledge of Java and C# but little experience with infrastructure configuration and middleware management? Do you need to ensure data residency in specific geographies? Does compliance top your list of concerns? These factors are far more important than feature-by-feature comparisons. Ultimately your platform selection needs to match your business requirements, and if our surveys can be trusted, you value agility and developer productivity over most other concerns.
 
Where Amazon Web Services may best suit your DevOps teams with a strong desire to control everything themselves, your web properties team may be far more productive on Mendix or OutSystems.
 
Read more

Mainframe Futures – Reading the Tea Leaves for Future Investments

Richard Fichera

I’ve been getting a steady trickle of inquiries this year about the future of the mainframe from our enterprise clients. Most of them are more or less in the form of “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads that are currently on the mainframe probably should remain there. In the interests of transparency, I’ve tried to lay out my reasoning below so that you can see if it applies to your own situation.

How Big Is the Mainframe LOB?

It's hard to get exact figures for the mainframe's contribution to IBM's STG (Systems and Technology Group) total revenues, but the data IBM has shared shows that mainframe revenues seem to have recovered from the declines of previous quarters and have at worst flattened. Because the business is inherently somewhat cyclical, I would expect the next cycle of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show positive revenue growth next year.

Read more

Bare Metal Clouds – Performance and Isolation Drive Consideration

Richard Fichera

I’ve been talking to a number of users and providers of bare-metal cloud services, and the common threads among the high-profile use cases are both interesting individually and starting to connect some dots. These providers offer the ability to provision and use dedicated physical servers with semantics very similar to those of the common VM IaaS cloud: servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage and used to run applications (a rough provisioning sketch follows the list below). The differentiation for customers is in the behavior of the resulting images:

  • Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
  • Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
  • Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare metal cloud vendors can show some impressive comparative benchmarks to prospective customers.
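
As an illustration of those VM-like semantics applied to physical machines, the sketch below provisions a dedicated server through a deliberately generic, hypothetical REST API. The endpoint, fields, credential and machine profile are placeholders and do not describe any specific provider's interface.

```python
# Conceptual sketch of provisioning a dedicated physical server through a
# bare-metal cloud API. The endpoint, fields and token are hypothetical;
# real providers differ, but the VM-like semantics are the point:
# pick an OS image and a machine profile, then wait for the host to come up.
import time
import requests

API = "https://api.example-baremetal-cloud.com/v1"    # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}   # hypothetical credential


def provision_server(hostname):
    resp = requests.post(f"{API}/servers", headers=HEADERS, json={
        "hostname": hostname,
        "plan": "dedicated-16core-128gb",   # a physical machine profile, not a VM flavor
        "os_image": "centos-7-x86_64",
        "datacenter": "dc-east-1",
    })
    resp.raise_for_status()
    server = resp.json()

    # Physical provisioning takes minutes rather than seconds, so poll until ready.
    while server["status"] != "active":
        time.sleep(30)
        server = requests.get(f"{API}/servers/{server['id']}", headers=HEADERS).json()
    return server


# Example usage (against the hypothetical API above):
# server = provision_server("analytics-node-01")
```
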
Read more

Shifting Sands – Changing Alliances Underscore the Dynamism of the Infrastructure Systems Market

Richard Fichera

There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage, networking and software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell and IBM. In slightly more than five years, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.

And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.

EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:

Read more

Exploring The Data Economy Opportunity: Some Do's and Don'ts

Jennifer Belissent, Ph.D.

An inquiry call from a digital strategy agency advising a client on data commercialization generated a lively discussion of strategies for taking data to market.  With few best practices out there, the emerging opportunity just might feel like space exploration: going boldly where no man has gone before.  The question is increasingly common: "We know we have data that would be of use to others, but how do we know?  And which use cases should we pursue?"  In It's Time To Take Your Data To Market, published earlier this fall, my colleagues and I provided some guidance on identifying and commercializing that "Picasso in the attic."  But the ideas around how to go to market continue to evolve.

In answer to the questions asked the other day, my advice was pretty simple: don’t try to anticipate all possible uses of the data.  Get started by making selected data sets available for people to play with, see what the data can do, and talk about it to spread the word.  However, there are some specific use cases that can kick-start the process.

Look to your existing customers.

The grass is not always greener, and your existing clients might just provide some fertile ground.  A couple of thoughts on ways your existing customers could use new data sources:

Read more

Dell Introduces FX System – The Shape of Infrastructure to Come?

Richard Fichera

Dell today announced its new FX system architecture, and I am decidedly impressed.

Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:

  • Multiple choices of server nodes, ranging from multi-core Atom to the new Xeon E5 v3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
  • A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, presents the drives to each server as locally attached DASD, so no changes are needed in any software that thinks it is accessing local storage. A very slick evolution in storage provisioning (a conceptual sketch of the mapping follows this list).
  • A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
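
To make the mapping idea concrete, here is a purely conceptual sketch, not Dell's actual management interface: drives live in shared disk modules, and the enclosure assigns them to individual server nodes, which then see them as local storage. The module and drive counts follow the description above; everything else is illustrative.

```python
# Conceptual sketch of FX-style disk mapping: drives sit in shared disk modules,
# but the enclosure management assigns them to individual server nodes, which
# then treat them as locally attached storage. Counts follow the text
# (up to three modules of 16 drives); node names and layout are illustrative.
from collections import defaultdict

DISK_MODULES = 3
DRIVES_PER_MODULE = 16

# Enumerate every physical drive slot in the enclosure, e.g. ("module-1", 5).
all_drives = [(f"module-{m}", slot)
              for m in range(1, DISK_MODULES + 1)
              for slot in range(DRIVES_PER_MODULE)]


def assign_drives(server_nodes):
    """Spread the enclosure's drives across server nodes round-robin."""
    mapping = defaultdict(list)
    for i, drive in enumerate(all_drives):
        mapping[server_nodes[i % len(server_nodes)]].append(drive)
    return dict(mapping)


# Example: four server nodes each end up "owning" 12 drives that appear local to them.
layout = assign_drives(["node-1", "node-2", "node-3", "node-4"])
for node, drives in layout.items():
    print(node, "sees", len(drives), "locally attached drives")
```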

All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but this is now a pretty unique package, and merits attention from infrastructure architects.

Forrester clients: I've published a Quick Take report on this, “Quick Take: Dell's FX Architecture Holds Promise To Power Modern Services.”

The IBM/Tencent China Partnership: The New Dance Of The Elephants

Charlie Dai

On October 31, IBM and Tencent announced that they will work together to extend Tencent’s public cloud platform to the enterprise by building and marketing an industry-oriented public cloud.

Don’t be fooled into looking at this move in isolation. With this partnership, IBM is turning a new page in its transformation in China, responding to the challenges of a stricter regulatory environment, an increasingly consumerized technology landscape, and newly empowered customers. The move is a crucial milestone in IBM’s strategy to localize its vision for cloud, analytics, mobile, and social (CAMS). IBM has had a strategic focus on CAMS solutions and is systematically building an ecosystem on four pillars:

  • Cloud and social. This is where IBM and Tencent are a perfect match. IBM’s cloud managed service, operated by its partner 21Vianet, officially went live on September 23. It can support mission-critical applications like ERP and CRM solutions from SAP and Oracle from both an IaaS and a SaaS perspective. This could help Tencent target large enterprise customers beyond its traditional base of small and medium-size businesses (SMBs) and startups by adding social value to ERP, CRM, and EAM applications.
Read more