I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”: integrated server, storage, and network infrastructure that allows its users to “compose” (that is, configure) a physical server out of a collection of pooled server nodes, storage devices, and shared network connections.[i]
The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently, the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, they were early efforts, and they lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
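The template idea above can be illustrated with a short sketch. Everything here is hypothetical: the class names, fields, and composition logic are invented for illustration and do not reflect any vendor’s actual API. The point is simply that a template describes the desired server, and composition binds disaggregated, pooled resources to it.

```python
# Hypothetical illustration of template-based composition. All names and
# fields are invented for this sketch; no vendor API is being depicted.

from dataclasses import dataclass


@dataclass
class ResourcePools:
    """Pools of disaggregated resources available to the composer."""
    compute_nodes: list
    storage_devices: list
    network_ports: list


@dataclass
class ServerTemplate:
    """A reusable description of the server to compose."""
    name: str
    cpu_nodes: int
    disks: int
    nics: int


def compose(template: ServerTemplate, pools: ResourcePools) -> dict:
    """Bind pooled resources to a logical server per the template."""
    if (len(pools.compute_nodes) < template.cpu_nodes
            or len(pools.storage_devices) < template.disks
            or len(pools.network_ports) < template.nics):
        raise RuntimeError("insufficient pooled resources")
    return {
        "name": template.name,
        "compute": [pools.compute_nodes.pop() for _ in range(template.cpu_nodes)],
        "storage": [pools.storage_devices.pop() for _ in range(template.disks)],
        "network": [pools.network_ports.pop() for _ in range(template.nics)],
    }


# Compose one server from the pools; whatever is left stays available
# for the next template.
pools = ResourcePools(
    compute_nodes=["node-1", "node-2"],
    storage_devices=["ssd-1", "ssd-2", "ssd-3"],
    network_ports=["eth-a", "eth-b"],
)
server = compose(ServerTemplate("web-tier", cpu_nodes=1, disks=2, nics=1), pools)
```

Because the template is just data, the same description can be applied repeatedly against whatever the pools currently hold, which is the essence of what distinguishes composition from fixed hardware configuration.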
This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a major step-function improvement in the capabilities of core enterprise infrastructure, as it delivers cloud-like semantics to core physical infrastructure. Among its key capabilities:
The acquisition of EMC by Dell is generating an immense amount of hype and prose, much of it looking forward at how the merged entity will try to compete in cloud, how it will integrate and rationalize its new product line, and how Dell will pay for it (see Forrester report “Quick Take: Dell Buys EMC, Creating a New Legacy Vendor”). Interestingly, not a lot has been written about the changes in the fundamental competitive faceoff between Dell and HP, both newly transformed by divestiture and by acquisition.
Yesterday the competition was straightforward and relatively easy to characterize. HP was the dominant enterprise server vendor and Dell a strong challenger; both sold PCs, and both held some storage IP that was good but in no sense dominant. Both had competent data center practices and embryonic cloud strategies that were still works in progress. Post-transformation, we have a totally different picture, with two very transformed companies:
A slimmer HP. HP is smaller (although a $50B company is not in any sense small), and bereft of its historical profit engine, the margins on its printer supplies. Free to focus on its core mandate of enterprise systems, software, and services, HP Enterprise is positioning itself as a giant startup, focused and agile. Color me slightly skeptical, but willing to believe that it can’t be any less agile than its precursor at twice the size. Certainly, along with the margin contribution, it loses the option to fight about budget allocations between enterprise and print/PC priorities.
I believe that network-as-a-service-type offerings — where customers can control the provisioning and characteristics of their network transport services — will have a long-term impact on those enterprises undertaking digital transformation. Businesses that fail to recognize the significance of quality network infrastructure will undermine their digital business strategy. Secure, stable network connectivity is a prerequisite for using cloud, mobile, big data, and Internet-of-Things (IoT) solutions. As the business technology (BT) agenda gains momentum, CIOs are looking to technologies like virtualization and cloud that create agility by dynamically responding to business conditions. Network infrastructure has been a laggard on this score — until now.
AT&T has unveiled its solution, Network on Demand. It’s the basis for a new category of services aligned with customer requirements, including self-service access, control, and configuration of network bandwidth and features like security, routing, and load balancing. Network on Demand:
Gives customers control of network services. Network on Demand offers a completely different customer experience regarding network provisioning. Near-real-time provisioning via a self-service portal makes the customer’s network responsive to business needs.
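The self-service model described above can be sketched in a few lines. To be clear, this is a hypothetical illustration: the request fields, feature catalog, and bandwidth limits are all invented for this sketch and are not AT&T’s actual Network on Demand API. The idea is only that a customer-initiated change request, validated and submitted through a portal, replaces a manual provisioning cycle.

```python
# Hypothetical sketch of self-service network provisioning. The fields,
# limits, and feature names are invented; no carrier API is depicted.

import json

ALLOWED_FEATURES = {"security", "routing", "load-balancing"}  # assumed catalog


def build_bandwidth_request(circuit_id: str, mbps: int, features: list) -> str:
    """Validate and assemble a change request a self-service portal might accept."""
    if not 1 <= mbps <= 10_000:
        raise ValueError("bandwidth outside the assumed 1-10,000 Mbps range")
    unknown = set(features) - ALLOWED_FEATURES
    if unknown:
        raise ValueError(f"unsupported features: {sorted(unknown)}")
    return json.dumps({
        "circuit": circuit_id,
        "bandwidth_mbps": mbps,
        "features": sorted(features),
    })


# Customer dials bandwidth up to 500 Mbps and enables two features.
payload = build_bandwidth_request("circuit-042", 500, ["security", "load-balancing"])
```

Because the request is validated and serialized entirely on the customer’s side, the provisioning turnaround is bounded by the network’s ability to apply the change, not by a human order-entry process; that is the shift the near-real-time portal model represents.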
To help security pros plan their next decade of investments in data security, last year John Kindervag, Heidi Shey, and I researched and assessed 20 of the key technologies in this market using Forrester's TechRadar methodology. The resulting report, TechRadar™: Data Security, Q2 2014, became one of the team’s most-read pieces of research for the year. However, it’s been a year since we finalized and published our research, and it’s time for a fresh look.
One can argue that the entirety of the information security market - its solutions, services, and the profession itself - focuses on the security of data. While this is true, there are solutions that focus on securing the data itself, or access to it, regardless of where the data is stored or transmitted or of the user population that wants to use it. As S&R pros continue to pursue a shift from a perimeter- and device-specific security approach to a more data- and identity-centric one, it’s worthwhile to focus intently on the technology solutions that allow you to do just that.
Last year, we included the following 20 technologies in our research:
Often considered the poster child of digital transformation, APIs are proliferating at enterprises making industry-leading investments in mobile, IoT, and big data. As these initiatives mature, CIOs, CTOs, and heads of development are coming together with business leaders to manage and secure companywide use of APIs using API management solutions.
Forrester recently released a report that sizes and projects annual spending on API management solutions. We predict US companies alone will spend nearly $3 billion on API management over the next five years. Annual spend will more than quadruple by the end of the decade, from $140 million in 2014 to $660 million in 2020. International sales will take the global market over the billion-dollar mark.
In interviewing vendors for this piece of research, we discovered a vast and fertile landscape of participants:
Startups have taken $430 million in venture funding, and so far have realized $335 million in acquisition value. In April 2015, pure-play vendor Apigee went public and currently trades at a valuation north of $400 million.
The cloud is not just reshaping how companies provision technology; it's changing customers' experience. A technology platform that is easily scalable for and accessible to the billions of connected devices customers use — PCs, smartphones, tablets, TVs, cars, jet engines, and more — has allowed cloud-services companies to completely reinvent experiences. No one was using black-car drivers' idle time to disrupt the taxi industry on a mass scale prior to Uber. Millions of customers, both consumers and business clients, have flocked to these cloud services, believing these are better experiences. The proof? The cloud computing elder Amazon is a perennial leader in Forrester's Customer Experience Index and has a market capitalization of more than $200 billion. So, the question you're probably asking is, "Does this mean that we need to build our customer interaction points in the cloud?"
This is a guest post by Fraser Tibbetts, Researcher on the AD&D team covering sales force automation software.
Oracle’s first-ever Modern CX Conference in Las Vegas last week, with roughly 3,000 attendees, focused on Oracle’s vision for the CX Cloud suite of products. Instead of the usual focus on technology, executives focused on products that recognize how the customer has more power than ever. This aligns with Forrester's age-of-the-customer research. It is encouraging to hear that same message from Oracle’s CEO, Mark Hurd, and from the Oracle product team leads.
Have you seen the movie Birdman — the one that just won the Best Picture and Best Director Oscars? It’s about a middle-aged man who was once a popular movie star but has been criticized throughout his career and how he finally achieved a breakthrough performance and found great success in a Broadway production of the play What We Talk About When We Talk About Love.
The story of Microsoft Azure is similar. Microsoft was hugely popular in the age of the PC but has sailed into troubled waters in the cloud era. Now, a year after Azure’s commercial launch in China, Azure is one of the leading forces driving cloud adoption there, and CIOs and EA professionals must understand how and where it might affect their existing Microsoft technology investments as they pursue business transformation. We attribute this to the progress that Microsoft has made by:
Expanding product offerings. Microsoft Azure now has local products in four key categories: compute, network, data, and application. Beyond basic components like virtual machines, websites, storage, and content delivery networks, Azure also has advanced features that help Chinese customers address their unique challenges: mobile services for the rapid development of mobile apps to accommodate the massive shift to mobile; a service bus for integration, to eliminate information silos in the cloud; and HDInsight for big data capabilities, to gain the customer insights necessary to compete with digital disruption from local Internet companies.
There’s a renewed interest in integration technologies, driven by new needs for integration with mobile, the Internet of Things (IoT), and cloud. In addition, integration requirements between systems of engagement and systems of record — real-time interaction for seamless omnichannel boundaries, higher volumes, and end-to-end security — highlight the changes in integration practices. Forrester will soon publish a report about the integration trends around these subjects.
I am happy to pick up this subject again from Stefan Ried after being away from the space for the past six years. Stefan left Forrester in December and I regret his departure, because he was a very passionate analyst and a smart guy to work with.
On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years: more capacity (a big boost this time around – triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price performance, and a very sexy cabinet design (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power, and the z System is absolutely gorgeous and should be in the MoMA*). IBM indeed delivered against these expectations, plus more. In this case, a lot more.
In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational, as opposed to a pipe dream for IBM, is an underlying pattern common to many of these transactions: at some point they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that occurs before the call for the mainframe data, IBM hopes to foster the migration of these workloads to the mainframe, where access to the resident data will be more efficient, benefitting from inherently lower latency as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from distributed x86 Linux systems, which IBM no longer manufactures.