The hordes gathered in Las Vegas this week for Amazon's latest re:Invent show. Over 18,000 attendees queued to get into sessions, jostled to reach the Oreo Cookie Popcorn (yes, really), and dodged casino-goers to hear from AWS, its partners, and its customers. Las Vegas may figure nowhere on my list of favourite places, but the programme of analyst sessions AWS laid on earlier in the week definitely justified the trip.
The headline items (the Internet of Things, Business Intelligence, and a Snowball chucked straight at the 'hell' that is the enterprise data centre (think about it)) are much-discussed, but in many ways the more interesting stuff was AWS' continued - quiet, methodical, inexorable - improvement of its current offerings. One by one, enterprise 'reasons' to avoid AWS or its public cloud competitors are being systematically demolished.
I bring tidings of great joy to the Forrester community, and especially to our clients! We have a new analyst on the Infrastructure & Operations Research team! It took a long time to get the right person, but we finally did. Once you meet him (and you likely already have), you will agree!
The newest Principal Analyst on the I&O team is Robert Stroud! Rob comes to us after a long stint at the software company CA, where he was most recently the VP of Strategy and Innovation. Central to his recent work is a significant amount of evangelism about DevOps, the hot movement promoting rapid application and technology service delivery. He has been very active in the governance and service management communities for years, holding many leadership positions. He just wrapped up his tenure as the International President of ISACA and was a primary author of the last few versions of the COBIT framework. He has won several awards in this community in recognition of his many achievements – all well deserved!
Ubiquitous public cloud services are making stronger strides into the world of business technology, and enterprises are increasingly looking to cloud services to help them succeed. Cloud services stretch across the business value chain, including ideation, prototyping, product development, business planning, go-to-market strategy, marketing, finance, and strategic growth. Consumption patterns vary by service. For the past few years, business units have owned certain services, in some cases without keeping their technology management teams in the loop — aka “shadow IT”. Every business unit engages in this behavior, each one sourcing the various services it uses from multiple providers. As a result, today’s businesses face a complex array of cloud services, each with its own business functions and requirements. The emerging cloud landscape does not provide a “single pane of glass” for the tech management team and lacks a standard governance model across services. Finally, it does not allow firms to compare costs for a standard service, which can result in suboptimal spending. This situation is creating a need for what we call “cloud orchestration solutions”. Such a solution would provide:
A single window on all cloud services. It merges all required and approved service types from multiple cloud service providers into a single portal, much like the ITSM service portals that offered services built within an organization.
Information on the service provider most suited to a given workload.
Comparison of services across service providers.
Consistent governance models across services.
Control over service life cycles and thus service cost.
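The capabilities above can be sketched in code. The following is a minimal, illustrative model of how an orchestration layer might compare a standard service across providers and apply a governance filter; the provider names, service types, and prices are entirely hypothetical.

```python
# Minimal sketch of a cloud orchestration catalog (illustrative only;
# provider names, service types, and prices are hypothetical).

from dataclasses import dataclass


@dataclass
class ServiceOffer:
    provider: str
    service_type: str        # e.g. "vm.medium", "object-storage"
    monthly_cost_usd: float
    approved: bool           # has this offer cleared the firm's governance review?


CATALOG = [
    ServiceOffer("provider-a", "vm.medium", 70.0, approved=True),
    ServiceOffer("provider-b", "vm.medium", 55.0, approved=True),
    ServiceOffer("provider-c", "vm.medium", 48.0, approved=False),  # cheapest, but not approved
]


def cheapest_approved(catalog, service_type):
    """Compare a standard service across providers and return the
    lowest-cost offer that has also cleared governance."""
    candidates = [o for o in catalog
                  if o.service_type == service_type and o.approved]
    return min(candidates, key=lambda o: o.monthly_cost_usd)


best = cheapest_approved(CATALOG, "vm.medium")
print(best.provider, best.monthly_cost_usd)
```

Note how the governance flag changes the answer: the cheapest raw offer loses to the cheapest *approved* offer, which is exactly the kind of policy-aware comparison a single-portal orchestration solution would make routine.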
My colleague Henry Baltazar and I have been watching the development of new systems and storage technology for years now, and each of us has been trumpeting, in our own way, the future potential of new non-volatile memory (NVM) technology. NVM promises not only a major leap beyond current flash-based storage but also a major transformation in how servers and storage are architected and deployed, and eventually in how software looks at persistent versus nonpersistent storage.
All well and good, but until very recently we were limited to vague prognostications about which flavor of NVM would finally belly up to the bar for mass production, and how the resultant systems could be architected. In the last 30 days, two major technology developments, Intel’s further disclosure of its future joint-venture NVM technology, now known as 3D XPoint™ Technology, and Diablo Technologies’ introduction of Memory1, have allowed us to sharpen the focus on the potential outcomes and routes to market for this next wave of infrastructure transformation.
Sales organizations, for the last couple of decades, have used sales force automation (SFA) to manage account and contact data, sales pipelines, territories, and more – all inside-out capabilities that help optimize their productivity. The problem is that today, customers control the conversation that they have with companies. Customers increasingly demand effortless sales interactions that trend toward self-service. They demand interactions tailored to their particular industry, pain point, and profile. They want streamlined interactions that value their time, such as a simple, efficient quote-to-order process or a contract renewal process.
Today, sales organizations struggle to provide sales experiences in line with customer expectations. They can’t:
Support buyers on their terms. Buyers increasingly leverage mobile touchpoints, self-service, and digital channels to interact with companies, yet sales organizations cannot support these channels.
Get sales representatives to follow consistent processes. Sales managers have sales reps of different calibers, and they must up-level a team’s performance. Also, without a consistent sales process that clearly articulates conditions for the different stages, managers can’t accurately qualify their pipeline. This affects forecasts, valuation, and profitability.
Personalize conversations with stakeholders. Sales reps don’t have near real-time information about their prospect’s company or industry or about a particular stakeholder to make conversations more relevant. They may not understand relationships between stakeholders that are involved in a purchase. They often lack insight about the effectiveness of sales collateral for different stages of the sales journey.
Hello from the newest analyst serving Forrester Research’s CIO role. My name is Paul Miller, and I joined Forrester at the beginning of August. I am attached to Forrester’s London office, but it’s already clear that I’ll be working with clients across many time zones.
As my analyst bio describes, my primary focus is on cloud computing, with a particular interest in the way that cloud-based approaches enable (or even require) organizations to embrace digital transformation of themselves and their customer relationships. Before joining Forrester, I spent six years as an independent analyst and consultant. My work spanned cloud computing and big data, and I am sure that this broader portfolio of interests will continue into my Forrester research, particularly where I can explore the demonstrable value that these approaches bring to those who embrace them.
I am still working on the best way to capture and explain my research coverage, talking with many of my new colleagues, and learning about potential synergies between what they already do and what I could or should be doing. I know that the first document to appear with my name on it will be a CIO-friendly look at OpenStack, as the genesis of this new Brief lies in a report that I had to write as part of Forrester’s recruitment process. I have a long (long, long) list of further reports I am keen to get started on, and these should begin to appear online as upcoming titles in the very near future. I shall also be blogging here, and look forward to using this as a way to get shorter thoughts and perspectives online relatively quickly. I’ve been regularly blogging for work since early 2004, although too many of the blogs I used to write for are now only preserved in the vaults of Brewster Kahle’s wonderful Internet Archive.
To successfully grow in Asia Pacific (AP), you must excel at understanding customers’ needs, wants, and behaviors and have the capabilities necessary to transform this insight into improved customer engagement. But that’s true everywhere. What sets the AP region apart are the continued vast differences between markets. Appreciating these market differences, and the impact they have on customers’ expectations, is critical when sourcing enterprise marketing capabilities.
In the world of CMOS semiconductor process technology, the fundamental heartbeat that drives the continuing evolution of all the devices and computers we use (and governs, at a fundamental level, the services we can layer on top of them) is the continual shrinkage of the transistors we build upon. We are used to a regular cadence of miniaturization, generally led by Intel, as we progress from one generation to the next: 32 nm logic is now old-fashioned, 22 nm parts are in volume production across the entire CPU spectrum, 14 nm parts have started to appear, and the rumor mill is active with reports of initial shipments of 10 nm parts in mid-2016. But there is collective nervousness about the transition to 7 nm, the next step on the industry process roadmap, with industry leader Intel commenting at the 2015 International Solid-State Circuits Conference that it may have to move away from conventional silicon materials for the transition to 7 nm parts, and that there were many obstacles to mass production beyond the 10 nm threshold.
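The arithmetic behind that cadence is worth making explicit: transistor density scales with the square of the linear feature size, so each full node step (roughly a 0.7x linear shrink) approximately halves the area per transistor. A quick back-of-the-envelope calculation, using the marketing node names above as stand-ins for linear dimensions:

```python
# Rough arithmetic behind the process-shrink cadence: area scales with
# the square of the linear feature size, so a ~0.7x linear shrink
# roughly halves the area per transistor. (Node names are marketing
# labels, not exact physical dimensions, so this is indicative only.)

def area_scale(new_nm, old_nm):
    """Relative area of a feature after shrinking from old_nm to new_nm."""
    return (new_nm / old_nm) ** 2

for new, old in [(22, 32), (14, 22), (10, 14), (7, 10)]:
    print(f"{old} nm -> {new} nm: area x{area_scale(new, old):.2f}")
```

The 10 nm to 7 nm step, for example, works out to roughly half the area again, which is why each generation packs about twice the transistors into the same silicon and why the industry is so anxious about any stall in the cadence.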
But there are other players in the game, and some of them are anxious to demonstrate that Intel may not have the commanding lead that many observers assume. In a surprise move that hints at the future of some of its own products, and that will certainly galvanize both partners and competitors, IBM, discounted by many as a spent force in the semiconductor world after its recent divestiture of its manufacturing business, has just made a real jaw-dropper of an announcement: the existence of working 7 nm semiconductors.
In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions, and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming and far too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center – containers and the inexorable march of OpenStack.
Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers over a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker, and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
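To make the "containers over a cluster" idea concrete, here is a minimal sketch of the kind of declarative manifest Kubernetes uses to schedule replicated containers across nodes. It is shown in the current `apps/v1` form for readability, and the image name is hypothetical; the point is simply that the operator declares a desired replica count and the cluster keeps it satisfied.

```yaml
# Minimal sketch: a Kubernetes Deployment that runs three replicas of a
# container image across a cluster. The image name is hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # spread across nodes for failure isolation
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If a node fails, the cluster reschedules its replicas elsewhere to restore the declared count, which is precisely the failure-isolation benefit described above.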
As companies get serious about digital transformation, we see investments shifting toward extensible software platforms used to build and manage a differentiated customer experience. My colleague John McCarthy has an excellent slide describing what's happening:
Before, tech management spent most of its time and budget managing a set of monolithic enterprise applications and databases. With an addressable market of a finite number of networked PCs, spending on the front end was largely an afterthought.
Today, applications must scale to millions, if not billions of connected devices while retaining a rich and seamless user experience. Infrastructure, in turn, must flex to meet these new specs. Since complete overhauls of the back end are a nonstarter for large enterprises with 30-plus years of investments in mainframes and legacy server systems, new investments gear toward the intermediary software platforms that connect digital touchpoints with enterprise applications and transaction systems.
At Forrester, we’ve been working to quantify some of the most viable software categories that exemplify this shift. A shortlist below:
· API management solutions: US CAGR 2015-2020: 22%.
· Public cloud platforms: Global CAGR 2015-2020: 30%. (Note: We have a forecast update in the works that segments the market into subcategories.)
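To put those growth rates in perspective, a compound annual growth rate (CAGR) of r over n years multiplies the market by (1 + r)^n. A quick calculation shows what the two figures above imply for total market growth over 2015-2020:

```python
# What a CAGR implies for total market size: a rate r compounded over
# n years multiplies the market by (1 + r) ** n.

def growth_multiple(cagr, years):
    """Total growth factor implied by a compound annual growth rate."""
    return (1 + cagr) ** years

api_mgmt = growth_multiple(0.22, 5)      # US API management, 2015-2020
public_cloud = growth_multiple(0.30, 5)  # global public cloud platforms

print(f"22% CAGR over 5 years: x{api_mgmt:.2f}")
print(f"30% CAGR over 5 years: x{public_cloud:.2f}")
```

In other words, a 22% CAGR implies the API management market roughly 2.7x over the period, and a 30% CAGR implies public cloud platforms grow roughly 3.7x, which is why these categories attract so much investment attention.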