The Empire Strikes Back — But Who’s The Target?

Stefan Ried

It was only about a year ago that Larry Ellison confused the OpenWorld audience with his “cloud in a box” approach, and only a few CIOs managed to turn a large Oracle landscape into a real private cloud offered to their business units on an opex model. But a lot has changed since last year.

Read more

Silk Browser, The BIG Leap For Amazon’s Fire, Shows Innovative Use Of App Internet

Richard Fichera

My colleague James Staten recently wrote about AutoDesk Cloud as an exemplar of the move toward App Internet, the concept of implementing applications that are distributed between local and cloud resources in a fashion that is transparent to the user except for the improved experience. His analysis is 100% correct, and AutoDesk Cloud represents a major leap in CAD functionality, intelligently offloading the inherently parallel and intensive rendering tasks and facilitating some aspects of collaboration.

But (and there’s always a “but”), having been involved in graphics technology on and off since the '80s, I would say that “cloud” implementation of rendering and analysis has been evolving incrementally for decades. There are hundreds of well-documented distributed environments in which desktops fluidly shipped their renderings to local rendering and analysis farms, which would today be called private clouds, with the results shipped back to the originating workstations. This work was largely developed and paid for by universities and by media companies as part of major movie production projects. Some of these installations were of significant scale, such as “Massive,” the rendering and animation farm for "Lord of the Rings" that had approximately 1,500 compute nodes, and a subsequent installation at Weta that may have up to 7,000 nodes. In my, admittedly arguable, opinion, AutoDesk Cloud, while representing a major jump in capabilities by making the cloud accessible to a huge number of users, does not represent a major architectural innovation, but rather an incremental step.

Read more

Intel Developer Forum (IDF) - Cloud. And Cloud, Cloud, Cloud. Oh, Yes, Did I Mention “Cloud”?

Richard Fichera

I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything is centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for general old-fashioned data center and I&O professionals. Some highlights:

Chips, processors, and low-level hardware

Intel is, after all, a semiconductor manufacturer, and despite its expertise in design, its true core competitive advantage is its foundry operations – even its competitors grudgingly acknowledge that Intel can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the “tock” of the Intel “tick-tock” design cycle: a new microarchitecture on the same 32nm process as Westmere. This year it was Ivy Bridge, the corresponding 22nm “tick,” a shrink of Sandy Bridge that also seems to have inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While Intel did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which Intel is obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I were to guess, I would expect more cores and larger caches, along with increased support for virtualization of I/O beyond what Intel currently offers.

Read more

An Early Look at Windows Server 8 – Can You Say Cloud?

Richard Fichera

Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.

Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:

  • Management, migration, and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
  • Multi-tenancy – A host of features, primarily around management and role-based delegation, that make it easier and more secure to implement multi-tenant VM clouds.
  • Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds, a very impressive number indeed. While vendor performance claims are always like EPA mileage ratings – you are guaranteed never to exceed this number – this is a major capability, with significant implications for HA architecture in any data center.

Read more

Xsigo Expands to a Data Center Fabric: Converged Infrastructure for the Virtual Data Center

Richard Fichera

Last year at VMworld I noted Xsigo Systems, a small privately held company showing its I/O Director technology, which delivered a subset of HP Virtual Connect or Cisco UCS I/O virtualization capability in a fashion that could be consumed by legacy rack-mount servers from any vendor. I/O Director connects to the server with one or more 10 Gb Ethernet links and then splits traffic out into enterprise Ethernet and FC networks. On the server side, the applications, including VMware, see multiple virtual NICs and HBAs courtesy of Xsigo’s proprietary virtual NIC driver.

Via Xsigo’s management console, the server MACs and WWNs can be programmed, and the servers can then connect to multiple external networks with fewer cables and substantially lower costs for NIC and HBA hardware. Virtualized I/O is one of the major transformative developments in emerging data center architecture, and it will remain a theme in Forrester’s data center research coverage.

This year at VMworld, Xsigo announced a major expansion of their capabilities – Xsigo Server Fabric, which takes the previous rack-scale single-Xsigo switch domains and links them into a data-center-scale fabric. Combined with improvements in the software and UI, Xsigo now claims to offer one-click connection of any server resource to any network or storage resource within the domain of Xsigo’s fabric. Most significantly, Xsigo’s interface is optimized to allow connection of VMs to storage and network resources, and to allow the creation of private VM-VM links.

Read more

Buyers Scrutinize SaaS Contracts More in H1 2011, As Deal Sizes Grow

Liz Herbert

SaaS buyers are increasingly realizing that if they overlook the details of their SaaS contracts, chances are they’ll pay for it later. Forrester analyzed the thousands of inquiries we receive every quarter to identify the hot-button topics in the SaaS space for the first half of 2011. When it comes to on-demand services, we found that buyers paid more attention to the following three factors in the first half of 2011 than ever before:

  1. Pricing and discounts. It came as no surprise that people are most concerned about money and are looking for guidance around SaaS pricing and discounts more than anything else. Many of our clients want to benchmark themselves against peers. For example, one client asked, “Is there some benchmark data to compare pricing on B2C web portal (PaaS or SaaS) solutions?” Forrester’s take? Unlike traditional software, most SaaS pricing is publicly available on vendor websites. However, pricing and pricing models are still in flux for many emerging areas of SaaS. Even in more established areas, like HR and CRM, discounts can range as high as 85% for large or strategic clients.
Read more

May Force.com Not Be With You

Mike Gualtieri

Lack Of Infrastructure Portability Is A Showstopper For Me

Salesforce.com bills Force.com as "The leading cloud platform for business apps." It is definitely not for me, though. The showstopper: infrastructure portability. If I develop an application using the Apex programming language, it can run only on the Force.com "cloud" infrastructure.

Don't Lock Me In

Q: What is worse than being locked-in to a particular operating system?

A: Being locked-in to hardware!

In The Era Of Cloud Computing, Infrastructure Portability (IP) Is A Key Requirement For Application Developers

Unless there is a compelling reason to justify hardware lock-in, make sure you choose a cloud development platform that offers infrastructure portability; otherwise, your app will be like a town served by a single cable television company: stuck with whatever the provider decides to offer.

Bottom line: Your intellectual property (IP) should have infrastructure portability (IP).

What Signal Does The Google-Motorola Marriage Send To Product Strategists?

Carlton Doty

It’s a couple of days after Google announced its intentions to jump headfirst into the hardware business. By now everyone — including my colleagues Charles Golvin and John McCarthy — has expressed their thoughts about what this means for Apple, Microsoft, RIM, and all of the Android-based smartphone manufacturers. This is not another one of those blog posts.

What I really want to highlight is something more profound, and more relevant to all of you out there who might classify your day job as “product strategy.” To you, the Google/Moto deal is just one signal — however faint — coming through the static noise of today’s M&As, IPOs, and new product launches. But if you tune in and listen carefully, two things become crystal clear:

  • The lines between entire industries are blurring. Google — and some of the other firms I mentioned above — are just high-profile examples of companies that are diversifying their product portfolios, and the very industries in which they play. There have been several instances of this over the past "digital decade." What's different now is the increased frequency of these moves.
Read more

Cloud Bursting Stimulates New Cloud Business Models

Stefan Ried

An important prerequisite for a full cloud broker model is the technical capability of cloud bursting:

Cloud bursting is the dynamic relocation of workloads from private environments to cloud providers and vice versa. A workload can represent IT infrastructure or end-to-end business processes.

The initial meaning of cloud bursting was relatively simple. Consider this scenario: an enterprise with traditional, non-cloud infrastructure runs out of capacity and temporarily gets additional compute power from a cloud service provider. Many enterprises have now established private clouds, and cloud bursting fits even better here, with dynamic workload relocation among private clouds, public clouds, and the more private provider models in the middle, which Forrester calls virtual private clouds. The private cloud is literally bursting into the next cloud level at peak times.
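
As a concrete illustration of the scenario above, here is a minimal Java sketch of a threshold-based bursting decision: workloads stay in the private cloud until capacity runs out, and each one then bursts to the most public cloud level its classification allows. All names here (BurstScheduler, Workload, the capacity figures) are hypothetical illustrations, not any vendor’s API.

```java
import java.util.List;

// Minimal sketch of a threshold-based cloud-bursting decision.
// Class names and numbers are hypothetical, for illustration only.
public class BurstScheduler {

    // Deployment targets, ordered from most private to most public.
    enum Target { PRIVATE_CLOUD, VIRTUAL_PRIVATE_CLOUD, PUBLIC_CLOUD }

    // A workload, tagged with the most public level its class permits.
    record Workload(String name, Target mostPublicAllowed, int cpuUnits) {}

    private final int privateCapacity; // total CPU units in the private cloud
    private int privateLoad = 0;

    BurstScheduler(int privateCapacity) {
        this.privateCapacity = privateCapacity;
    }

    // Place a workload privately while capacity lasts; otherwise burst it
    // outward, but never beyond the level its classification allows.
    Target place(Workload w) {
        if (privateLoad + w.cpuUnits() <= privateCapacity) {
            privateLoad += w.cpuUnits();
            return Target.PRIVATE_CLOUD;
        }
        return w.mostPublicAllowed();
    }

    public static void main(String[] args) {
        BurstScheduler scheduler = new BurstScheduler(10);
        List<Workload> peakLoad = List.of(
            new Workload("trading-system", Target.PRIVATE_CLOUD, 6),
            new Workload("hr-reporting", Target.VIRTUAL_PRIVATE_CLOUD, 6),
            new Workload("web-rendering", Target.PUBLIC_CLOUD, 3));
        for (Workload w : peakLoad) {
            System.out.printf("%s -> %s%n", w.name(), scheduler.place(w));
        }
    }
}
```

Note that the trading system can never leave the private cloud, even at peak: its classification caps its burst target, which is exactly the role workload classification plays in the next paragraph.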

An essential step before leveraging cloud bursting is properly classifying workloads: for each workload, identify the most public cloud level possible, based on technical restrictions and data privacy needs (including compliance concerns). A conservative enterprise could structure its workloads into three classes of cloud:

  • Productive workloads of back-office data and processes, such as financial applications or customer-related transactions: These need to remain on-premises. An example is the trading system of an investment bank.
Read more

Stop Wasting Money On WebLogic, WebSphere, And JBoss Application Servers

Mike Gualtieri

Use Apache Tomcat. It is free.

I don’t understand why firms spend millions of dollars on Java application servers like Oracle WebLogic or IBM WebSphere Application Server. I get why firms spend money on Red Hat JBoss -- they want to spend less on application servers. But, why spend anything at all? Apache Tomcat will satisfy the deployment requirements of most Java web applications.

Your Java Web Applications Need A Safe, Fast Place To Run

Most Java applications don’t need a fancy container that has umpteen features. Do you want to pay for a car that has windshield wipers on the headlights? (I wish I could afford it.) Most Java applications do not need these luxury features, or they can be designed not to need them. Many firms do, in fact, deploy enterprise-class Java web applications on Apache Tomcat. It works. It is cheap. It can save tons of dough.
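
As a concrete (and deliberately tiny) sketch of what “most Java web applications” actually need, here is a plain servlet that deploys to a stock Tomcat install with no commercial application server features. It assumes the Servlet 3.0 javax.servlet namespace of the Tomcat 7 era (recent Tomcat releases use jakarta.servlet instead); the class name and URL pattern are hypothetical.

```java
import java.io.IOException;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A plain servlet: package it in a WAR and drop it into a stock Tomcat.
// No EJB container, no vendor console, no license fee.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Served by plain Tomcat.");
    }
}
```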

Expensive Java Application Servers Sometimes Add Value

There is a need for luxury. But, you probably don’t need it to provide reliable, performant, and scalable Java web applications. Application server vendors will argue that:

  • You need an application container that supports EJBs. EJB3 fixed the original EJB debacle, but why bother? Use Spring, and you don’t need an EJB-compliant container. Many applications don’t even need Spring. EJBs are not needed to create scalable or reliable applications. (A minimal sketch of the Spring alternative follows below.)
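
To ground the “use Spring instead” point, here is a minimal, hypothetical sketch of a Spring-managed service playing the role a stateless session EJB would otherwise play. The names are illustrative, and a production setup would also enable transaction management and configure a transaction manager bean; without those, the @Transactional annotation below is simply inert.

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// A Spring-managed service in place of a stateless session EJB.
@Service
class OrderService {
    // Declarative transactions without an EJB container; a real application
    // would configure a transaction manager to make this annotation effective.
    @Transactional
    public String placeOrder(String item) {
        return "Order placed for " + item;
    }
}

public class Main {
    public static void main(String[] args) {
        // Register the service directly; no application server required.
        try (AnnotationConfigApplicationContext ctx =
                 new AnnotationConfigApplicationContext(OrderService.class)) {
            OrderService orders = ctx.getBean(OrderService.class);
            System.out.println(orders.placeOrder("widget"));
        }
    }
}
```
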
Read more