Nathan Bedford Forrest, a Confederate general of despicable ideology and consummate tactics, spoke of “keepin up the skeer,” applying continued pressure to opponents to prevent them from regrouping and counterattacking. POWER7+, the most recent version of IBM’s POWER architecture, anticipated as a follow-up to the POWER7 for almost a year, was finally announced this week, and appears to be “keepin up the skeer” in terms of its competitive potential for IBM POWER-based systems. In short, it is a hot piece of technology that will keep existing IBM users happy and should help IBM maintain its impressive momentum in the Unix systems segment.
For the chip heads, the CPU is implemented in a 32 nm process, the same node as Intel’s upcoming Poulson, and embodies some interesting evolutions in high-end chip design, including:
Use of DRAM instead of SRAM — IBM has pioneered the use of embedded DRAM (eDRAM) as L3 cache in place of the more standard and faster SRAM. In exchange for the loss of speed, eDRAM requires fewer transistors per bit and consumes less power, allowing IBM to pack a total of 80 MB (a lot) of shared L3 cache onto the chip, far more than any other product has ever sported.
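The transistor-budget trade-off is easy to see with back-of-envelope arithmetic. The sketch below uses the textbook cell figures (6 transistors per SRAM bit; 1 transistor plus 1 capacitor per eDRAM bit) and deliberately ignores tag, ECC, and control overhead, so treat it as an illustration rather than IBM's actual accounting:

```python
# Back-of-envelope transistor budget for 80 MB of L3 cache, comparing
# a classic 6T SRAM cell against a 1T1C eDRAM cell (capacitor not
# counted as a transistor). Tag, ECC, and control logic are ignored.

CACHE_BYTES = 80 * 2**20          # 80 MB of shared L3 cache
CACHE_BITS = CACHE_BYTES * 8

SRAM_T_PER_BIT = 6                # standard 6-transistor SRAM cell
EDRAM_T_PER_BIT = 1               # 1 transistor + 1 capacitor per eDRAM bit

sram_transistors = CACHE_BITS * SRAM_T_PER_BIT
edram_transistors = CACHE_BITS * EDRAM_T_PER_BIT

print(f"SRAM:  {sram_transistors / 1e9:.2f} billion transistors")
print(f"eDRAM: {edram_transistors / 1e9:.2f} billion transistors")
print(f"Savings factor: {sram_transistors // edram_transistors}x")
```

Roughly a 6x reduction in transistor count for the cache array alone, which is the headroom that makes an 80 MB on-chip L3 feasible.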
[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]
At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products with advantages in niche markets, with specific mention of servers among other segments, and by generally shaking up the trench warfare that has kept it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that could offer it real visibility and differentiation by acquiring innovative server startup SeaMicro.
SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture, which dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU so that SeaMicro could improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not served by SeaMicro’s original Atom-based offering.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake: this is a major restructuring of the OS, and a step-function improvement in capabilities aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. Also important to note is that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt its predecessor, NTFS (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
I’ve been speaking to more and more clients lately who are not just saving money with cloud computing — they’re using the principles of the cloud to completely transform how they source, build, and deliver all IT services. Savvy I&O leaders should look beyond the per-hour savings promised by the cloud to the core tenets of cloud computing itself. How do the public clouds do it? Why can’t you?
Well, you can. You can transform your IT operating model from that of widget-provider to a true service-oriented business partner. Forrester writes extensively about how to make the IT to BT (business technology) transition. I recently spoke at length with the IT management team at Commonwealth Bank of Australia (CBA) about their multi-year IT transformation to what they call “everything-as-a-service.” I was put in touch with them by one of their primary suppliers, cloud service management and automation vendor ServiceMesh.
We’ll be publishing a complete case study soon, but I wanted to share some of the basics here because they outline a strategy anyone can achieve, regardless of your current level of cloud maturity. The bank started by establishing six core tenets to be enforced across all I&O services moving forward, whether hosted internally or externally. These guiding principles neatly summarize the core value dimensions of cloud computing itself:
Pay as you go. Business customers only pay for products and services actually used, on a metered, charge-back basis, under flexible service agreements, as opposed to fixed-term contracts.
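The mechanics of that first tenet can be sketched in a few lines. The rate card and usage figures below are invented for illustration; CBA's actual metering pipeline and prices are not public:

```python
# Minimal sketch of metered, pay-as-you-go chargeback: a business unit
# is billed only for what it actually consumed, at per-unit rates.
# All rates and usage numbers are hypothetical placeholders.

RATES = {
    "vm_hours": 0.08,           # $ per VM-hour
    "storage_gb_months": 0.10,  # $ per GB-month of storage
    "backup_gb": 0.02,          # $ per GB backed up
}

def chargeback(usage: dict) -> float:
    """Compute a business unit's monthly bill from metered usage."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

# One business unit's metered consumption for the month:
marketing = {"vm_hours": 1200, "storage_gb_months": 500, "backup_gb": 300}
print(chargeback(marketing))  # 1200*0.08 + 500*0.10 + 300*0.02 = 152.0
```

The point of the model is the incentive it creates: because the bill tracks consumption line by line, business customers can shrink it by releasing resources they no longer need.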
With VMworld in full swing this week and Microsoft’s cloud-centered Windows Server 2012 launching soon after, your options for technology to build and deploy enterprise clouds are about to expand significantly. Meanwhile, Amazon continues to drop prices faster than your local Wal-Mart, introduces new cloud compute and storage services almost monthly, and has already gobbled up a trillion objects in S3. Is it time to start moving your workloads to the cloud?
Forrsights surveys show that companies are indeed moving to the cloud, primarily for speed and lower costs — but are the savings really there? The answer might not be obvious. Are you heavily virtualized already? Have you moved up the virtualization value chain beyond server consolidation to using virtual machines for better disaster recovery, less downtime, automated configuration management, and the like? Do you have a virtual-first policy and actively share resources across business units? If you run a mature virtual environment today, your internal infrastructure costs might already be competitive with the cloud.
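The comparison those questions imply can be made concrete with a quick amortized-cost calculation. Every input below is a hypothetical placeholder; substitute your own capex, opex, VM count, utilization, and cloud price:

```python
# Rough sketch of the internal-vs-cloud cost comparison: amortized
# per-VM-hour cost of a mature virtual environment vs. an on-demand
# cloud instance price. All figures are invented for illustration.

def internal_cost_per_vm_hour(annual_capex: float, annual_opex: float,
                              vms: int, utilization: float) -> float:
    """Amortized hourly cost of one VM in an on-premises virtual environment."""
    hours_per_year = 365 * 24
    return (annual_capex + annual_opex) / (vms * hours_per_year * utilization)

# Example: $400k/yr amortized hardware, $600k/yr staff and facilities,
# 2,000 VMs, 70% average utilization (all invented numbers).
internal = internal_cost_per_vm_hour(400_000, 600_000, 2000, 0.70)
cloud_on_demand = 0.12  # hypothetical hourly price of a comparable instance

print(f"internal: ${internal:.3f}/VM-hour vs. cloud: ${cloud_on_demand:.2f}/hour")
```

With these (invented) inputs the internal environment comes in around $0.08 per VM-hour, below the assumed cloud price, which is exactly the scenario described above: a dense, well-utilized virtual estate can already be cost-competitive. Cut the VM count or the utilization and the conclusion flips.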
Cloud Services Offer New Opportunities For Big Data Solutions
What’s better than writing about one hot topic? Well, writing about two hot topics in one blog post — and here you go:
The State Of BI In The Cloud
Over the past few years, business intelligence (BI) was the overlooked stepchild of cloud solutions and market adoption. Sure, some BI software-as-a-service (SaaS) vendors have been pretty successful in this space, but it was success in a niche compared with the four main SaaS applications: customer relationship management (CRM), collaboration, human capital management (HCM), and eProcurement. While those four applications each reached cloud adoption of 25% or more in North America and Western Europe, BI led the field of second-tier SaaS solutions, used by 17% of all companies in our Forrester Software Survey, Q4 2011. Considering that the main challenges of cloud computing are data security and integration effort (yes, the story of simply swiping your credit card to get a fully operational cloud solution in place is a fairy tale), 17% cloud adoption is actually not bad at all; BI is all about data integration, data analysis, and security. With BI there is of course the flexibility to choose which data a company is willing to run in a cloud deployment and which data sources to integrate — a choice that is very limited when implementing, e.g., a CRM or eProcurement cloud solution.
“38% of all companies are planning a BI SaaS project before the end of 2013.”
In November 2011, Atos and Yonyou (formerly Ufida) announced the creation of a joint venture dubbed Yunano™ aimed at the European SMB market. The two companies are at it again, this time focusing specifically on the Chinese domestic market. I recently met with Herbie Leung, CEO of Atos in Asia Pacific, to discuss the partnership and future market opportunities in China. This new agreement essentially covers three areas of collaboration:
Bringing PLM and MES expertise to Yonyou customers. With more than 1.5 million customers, Yonyou is one of the largest software providers in China with strengths in ERP and CRM solutions. However, the company lacks capabilities in adjacent areas like product lifecycle management (PLM) and manufacturing execution systems (MES). Following the SIS acquisition, Atos has significantly strengthened its capabilities in these domains and will offer them to Yonyou clients.
Helping Yonyou’s customers migrate to private cloud architectures. The lack of private cloud technical skills in China led Yonyou to leverage Atos’s expertise to develop private cloud assessment workshops and ERP migration services targeting the China market. Atos will in turn leverage Canopy, a company it recently created in partnership with EMC and VMware to provide cloud solutions to its clients globally.
Helping Yonyou expand into new markets in Asia. Like many Chinese companies, Yonyou has global aspirations. While the Yunano joint venture focuses on bringing Yonyou’s ERP solutions to the mid-market in EMEA, the new partnership will leverage Atos’ go-to-market capabilities to take Yonyou’s solutions to other markets in Asia.
I’ve participated in cloud events in four different countries over the past two weeks. Attendees were primarily senior and mid-level IT decision-makers seeking guidance and best practices for implementing private clouds within their organizations. Regardless of country of origin, industry focus, or level of cloud-related experience, one common theme stood out above all others in both formal and informal discussions – the importance of effective communication.
The key takeaway – don’t get dogmatic about terminology. In fact, when it comes to cloud-related initiatives, choose your words carefully and be prepared for the reaction you’re likely to get.
‘Cloud computing’ as a term remains over-hyped, over-used, and still often poorly understood – because of this, typical reactions to the term are likely to range from cynicism and doubt to defensiveness and derision and all the way to outright hostility. Ironically, the fact that it’s not a technical term actually creates more confusion in many instances since its meaning is so general as to apply to practically anything (or nothing, depending on your point of view or perhaps your level of cynicism).
At all four events over the past two weeks – and in fact in nearly all discussions of IT priorities I’ve had over the past six months – CIOs and other senior IT decision-makers have consistently made clear that ‘cloud computing’ as a general objective or direction isn’t a top priority per se. However, they are unanimous in their belief that data center transformation is essential to supporting business requirements and expectations.
Last year, my colleague James Staten and I published evaluations of the (internal) private cloud and public cloud markets — this year we’re going to fill in the remaining gap in the IaaS space by publishing a Forrester Wave evaluation on Hosted Private Cloud Solutions. Vendors participating in this report will be evaluated against key criteria, with a demo following a mandatory script and customer references used to validate each solution. Throughout the research process I’ll be providing updates and interesting findings before the report goes live in early Q4 2012.
So, what is hosted private cloud? Like almost every product in the cloud space, there’s a lot of ambiguity about what you’ll be getting if you sign on to use a hosted private cloud solution. Today, NIST defines private cloud as:
The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Hosted private cloud refers to a variation of this where the solution lives off-premises in a hosted environment while still incorporating NIST's IaaS service definition, particularly where “[t]he consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications.” But there’s a great deal of variation in today’s hosted private cloud arena. Usually solutions differ in the following ways:
During a recent global analyst event in Paris, Capgemini presented its strategy to a panel of market and financial analysts. It hinges on two main objectives: improving the resilience of the organization in an uncertain economic environment — especially in Europe — and finding new levers for margin improvements.
From an operations point of view, Capgemini intends to continue leveraging the usual suspects: industrialization, cost cutting, and accelerating the development of its offshore talent pool. It is also aiming to optimize its human resource pool via a pyramid management program aimed at, among other things, allocating the right experience level to the right type of work.
More interestingly, the company showcased some of the global offerings it has put together or refined over the past 12 months. Capgemini’s strategic intent is to develop offerings addressing three major client-relevant themes – customer experience, operational processes, and new business models. The offerings will be enabled by a combination of cloud, mobile, analytics, and social technologies. Among the set of offerings managed globally, I found the following of particular interest due to their emerging nature and Capgemini’s interesting approach to developing them: