Just three months after SAP acquired SuccessFactors, a cloud leader for human capital management solutions, for $3.4 billion, it has now announced the acquisition of Ariba, a cloud leader for eProcurement solutions, for another $4.3 billion. Now, $7.7 billion is a lot of money to spend in a short amount of time on two companies that hardly make any profit. But it’s all for the cloud, which means it’s for the future business opportunity in cloud computing services. So far, so good; SAP has invested in and acquired quite a number of cloud companies over the past years: Frictionless, Clear Standards, Crossgate, etc. The difference with this most recent acquisition is the big overlap with existing solutions and internal R&D.
Following the first wave of cloud acquisitions, SAP was sitting amid a zoo of cloud solutions, all based on different platforms: ePurchasing, CRM-OnDemand, BI-OnDemand, Carbon Impact, ByDesign, Streamwork . . . They all used very different technology, resulting in big integration and scale challenges behind the scenes. The market welcomed with open arms SAP’s announcement 1.5 years ago that it would consolidate its cloud strategy on the new NetWeaver platform for both ABAP- and Java-based cloud solutions.
On May 15, 2012, the Infocomm Development Authority (IDA) of Singapore announced that it would award its much-awaited five-year tender for an externally hosted g-cloud infrastructure to SingTel. My colleague Jennifer Belissent and I published a report on g-cloud opportunities in Asia Pacific late last year that highlighted Singapore as one of the governments leading the way toward g-cloud adoption in the region.
Some key highlights from the Singapore g-cloud contract:
SingTel will be responsible for all of the capex- and opex-related costs needed to build and manage the central infrastructure from its own data center in Singapore.
SingTel will provide a central “G-Cloud Service Portal” through which all government organizations and departments can access central g-cloud services (computing, storage, database, archiving, networking, and other basic resources), and it will derive revenue based on a subscription model.
The Singapore government has not committed to any particular minimum g-cloud usage level.
SingTel will provide government departments with the required training on how the g-cloud works.
SaaS vendors must collect customer insights for innovation and compliance.
As of the end of last year, about 30% of companies from our Forrsights Software Survey, Q4 2011, were using some software-as-a-service (SaaS) solution; that number will grow to 45% by the end of 2012 and 60% by the end of 2013. The public cloud market for SaaS is the biggest and fastest-growing of all of the cloud markets ($33 billion in 2012, growing to $78 billion by the end of 2015).
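As a quick sanity check on the market figures above, a $33 billion market growing to $78 billion over three years implies a compound annual growth rate of roughly a third per year; a minimal calculation, using only the numbers cited in this post:

```python
# Implied compound annual growth rate (CAGR) from the SaaS market
# figures cited above: $33B in 2012 growing to $78B by end of 2015.
start_market = 33e9   # 2012 public cloud SaaS market, USD
end_market = 78e9     # projected 2015 market, USD
years = 3

cagr = (end_market / start_market) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 33% per year
```

That pace is what makes the cannibalization argument below so pressing for on-premises vendors.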
However, most of this growth is based on the cannibalization of the on-premises software market; software companies need to build their cloud strategy or risk getting stuck in the much slower-growing traditional application market and falling behind the competition. This is no easy task, however. Implementing a cloud strategy involves a lot of changes for a software company in terms of products, processes, and people.
A successful SaaS strategy requires an open architecture (note: multitenancy is not a prerequisite for a SaaS solution by definition, but it is highly recommended for vendors that want to scale) and a flexible business model, including a sales incentive structure that builds momentum in the field. For the purposes of this post, I’d like to highlight one challenge that software vendors need to solve for sustainable growth in the SaaS market: maintaining and increasing customer insights.
In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that only run Linux. “Hah!” you say. “Power already runs Linux, and quite well according to IBM.” This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much if not all of the performance advantage.
Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (6 or 8 cores each, four threads per core), and both ship with unlimited licenses for IBM’s PowerVM hypervisor. Most importantly, in exchange for the limitation that they will run only Linux, these systems are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting on the performance improvement, shown by IBM-supplied benchmarks, to overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition than Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer; IBM typically sells IFLs to customers with existing mainframes, whereas with PowerLinux it will be selling to net-new customers as well as established accounts.
Over the last couple of years, IBM, despite having a rich internal technology ecosystem and a number of competitive blade and CI offerings, has not had a comprehensive integrated offering to challenge HP’s CloudSystem Matrix and Cisco’s UCS. This past week IBM effectively silenced its critics and jumped to the head of the CI queue with the announcement of two products, PureFlex and PureApplication, the results of a massive multi-year engineering investment in blade hardware, systems management, networking, and storage integration. Based on a new modular blade architecture and new management architecture, the two products are really more of a continuum of a product defined by the level of software rather than two separate technology offerings.
PureFlex is the base product, consisting of the new hardware (which, despite having the same number of blades as the existing HS blade products, is in fact a totally new piece of hardware). It integrates BNT-based networking as well as a new object-based management architecture that can manage up to four chassis and provides a powerful set of optimization, installation, and self-diagnostic functions for the hardware and software stack, up to and including the OS images and VMs. In addition, IBM appears to have integrated the complete suite of Open Fabric Manager and Virtual Fabric for remapping MAC/WWN UIDs and managing VM networking connections, plus storage integration via the embedded V7000 storage unit, which serves as both a storage pool and an aggregation point for virtualizing external storage. The laundry list of features and functions is too long to itemize here, but PureFlex, especially with its hypervisor neutrality and IBM’s Cloud FastStart option, is a complete platform for an enterprise private cloud or a horizontal VM compute farm, however you choose to label a shared VM utility.
Last week I attended Telefónica’s leadership event, which is held annually in Miami, reflecting its very strong basis in the Americas. This year’s event attracted around 700 visitors from 130 countries, comprising Telefónica’s customers, vendor partners, and analysts. Several external keynote speakers, including the CIO of the US government, futurologist Michio Kaku, and the chief economist of the Economist Intelligence Unit, outlined the macro context for society and the economy over the coming 10 to 20 years. Presentations by partners like Huawei, Microsoft, Nokia, amdocs, and Samsung highlighted visions of the future from a vendor angle. Telefónica itself used the opportunity to present its own vision of how technological progress will affect society and business — and how it intends to address the opportunities and challenges ahead.
Telefónica stands out from its peer group of incumbent telcos by having revamped its overall organizational structure. The firm had already announced this new structure last fall; it effectively sets up one division that focuses on global internal administration and procurement (Global Resources), one division that focuses on emerging Internet-based solutions (Digital), and two geographically focused go-to-market-facing business lines (Americas and Europe). Telefónica Multinational Solutions is part of Global Resources and is the division dedicated to delivering services to the MNC segment.
Deloitte continues to ramp up its software-as-a-service (SaaS) consulting practice, both through organic growth as well as acquisition. Today, Deloitte announced plans to acquire Workday implementation specialist Aggressor. Aggressor has been one of a very small set of Workday integrators (along with Deloitte), which means Deloitte now further boosts its already-impressive Workday practice.
This move furthers Deloitte’s Workday practice, as well as Deloitte’s overall practice in SaaS implementation and integration work. Deloitte also has strategic partnerships with other leading SaaS vendors, most notably salesforce.com.
For buyers, this means a stronger and deeper bench of consultants at Deloitte. But, on the downside, it removes a boutique/specialist option from the market, which appealed to some because of its laser focus, smaller size, and (perceived or real) ability to be more nimble, flexible, and price competitive.
Are you an Aggressor or Deloitte client or prospect? We would love to hear your thoughts!
Today we see two basic flavors of cloud IAM. One archetype is the model offered by Covisint, VMware Horizon, Symplified, Okta, OneLogin, and others: these vendors provide relatively tight integration but less capable identity services built on their own intellectual property, which is why these offerings have notably short implementation times. The other camp believes in providing hosted services built on "legacy" IAM products: CA Technologies coming out with CloudMinder, Lighthouse adding its own IP to IBM TIM/TAM, Simeio Solutions blending OpenAM and Oracle's identity stack with its own secret sauce, and Verizon Business using NetIQ's IDM stack as the basis for its hosted offering.
On Monday, February 13, HP announced its next turn of the great wheel for servers with the announcement of its Gen8 family of servers. Interestingly, since the announcement was ahead of Intel’s official announcement of the supporting E5 server CPUs, HP had absolutely nothing to say about the CPUs or performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly Opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.
With a little more granularity, the major components of the Gen8 server technology announcement included:
Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence, allowing quicker, lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented by more powerful onboard management controllers and by pre-provisioning a lot of software on built-in flash memory used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that scans a code printed on the server, reaches back through the Insight Management Environment stack, and triggers the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one.