Banks In Singapore And Hong Kong Must Step Up To The Cloud Challenge

Fred Giron

Last week I presented an overview of cloud adoption trends in the Asian banking sector to a panel of financial services regulators in Hong Kong. The presentation showcased a few cloud case studies, including CBA, ING Direct, and NAB in Australia. I focused on the business value these banks have realized through the adoption of cloud concepts while remaining compliant with their local regulatory environments. These banks have also developed a strong competitive advantage: They know how to do cloud. Ultimately, I believe that cloud is a capability banks will have to master in order to build an agility advantage. For instance, cloud is a key enabler of Yuebao, Alibaba’s new Internet finance business. 80 million users in less than 10 months? Only cloud architecture can enable that type of agility and scale (a point that the Hong Kong regulators clearly overlooked).

Read more

Your Technology Reflects The State Of Your Customer Experience Ecosystem, So Plan Accordingly

TJ Keitt

The business press has come alive over the past few weeks as companies as diverse as Delta, Facebook, and Tesla have publicly declared that they want to own software development for key applications. What should catch your attention about these announcements is the type of software these firms want to control. Delta is acquiring the software IP and data associated with an application that affects 180 of its customer and flight operations systems. Facebook is building proprietary software to simplify interactions between its sales teams and the advertisers posting ads on the social networking site. And Tesla has developed its own enterprise resource planning (ERP) and commerce platform that links the manufacturing history of a vehicle with important sales and customer support systems. Tesla's CIO Jay Vijayan, in describing his organization's system, sums up the sentiment behind many of these business decisions: "It helps the company move really fast."

Read more

HP Hooks Up With Foxconn for Volume Servers

Richard Fichera

Yesterday HP announced that it will be entering into a “non-equity joint venture” (think: a big strategic contract of some kind, with a lot of details still in flight) to address large-scale web services providers. Under the agreement, Foxconn will design and manufacture the systems, and HP will be the primary sales channel for new servers targeted at hyperscale web service providers. The new servers will be branded HP but will not be part of the current ProLiant line of enterprise servers, and HP will deliver additional services along with the hardware sales.

Why?

Underneath all the rhetoric, the motivation is simple. HP has been hard-pressed to make decent margins selling high-volume, low-cost, no-frills servers to web service providers, and has been increasingly squeezed by low-cost competitors. Add to that the issue of customization, which these high-volume customers can easily get from smaller, more agile Asian ODMs, and you have a strategic problem. Having worked at HP for four years, I can testify that HP, a company maniacal about quality but encumbered with an effective yet rigid set of processes for bringing new products to market, has difficulty rapidly turning around a custom design, and has a cost structure that makes it difficult to compete profitably for deals with margins that are probably in the mid-teens.

Enter Hon Hai Precision Industry Co., more commonly known as Foxconn. A longtime HP partner and widely acknowledged as one of the most efficient and agile manufacturing companies in the world, Foxconn brings to the table the complementary strengths HP needs: agile design, tightly integrated with its manufacturing capabilities.

Who does what?

Read more

Top 10 Cloud Challenges Facing Media & Entertainment

James Staten

With video rapidly becoming the dominant content type on enterprise networks, the issues the media market faces today foreshadow the challenges coming for every other industry. And use of the cloud was very much in focus at the 2014 National Association of Broadcasters conference, held in Las Vegas in the second week of April.

Most industries need a push to move aggressively into the cloud, and the media & entertainment market was no different. The initial push came from the threat of disruption by over-the-top (OTT) distributors such as Netflix, which primarily leveraged the cloud. “[We] aren’t going to be cold-cocked like music was,” said Roy Sekoff, president and co-creator of HuffPost Live. As a result, video production houses, news organizations, and television and motion picture studios have been the most aggressive adopters. Now an upcoming shift to Ultra HD presents a new series of challenges, including file sizes, bandwidth limitations, and new complexities for workflows, visual effects, and interactivity.
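Some back-of-the-envelope arithmetic (my illustration, not from the post) shows why Ultra HD strains file sizes and bandwidth: a UHD frame carries four times the pixels of HD, and the raw data rates add up fast. The 8-bit 4:4:4 color and 24 fps parameters below are illustrative assumptions, not a statement about any particular production format.

```python
def uncompressed_rate_mb_per_s(width, height, bytes_per_pixel, fps):
    """Raw (pre-compression) video data rate in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

# Assumed parameters: 8-bit 4:4:4 color (3 bytes per pixel) at 24 fps.
hd = uncompressed_rate_mb_per_s(1920, 1080, 3, 24)    # ~149 MB/s
uhd = uncompressed_rate_mb_per_s(3840, 2160, 3, 24)   # ~597 MB/s

print(f"HD: {hd:.0f} MB/s, UHD: {uhd:.0f} MB/s ({uhd / hd:.0f}x)")
```

Even with heavy compression, that 4x multiplier carries through to storage footprints, transfer times, and rendering workloads, which is why UHD reshapes cloud workflow planning rather than merely scaling it.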

Here we present ten issues the media industry faces as it more broadly embraces the cloud, as observed firsthand at NABShow 2014. These ten issues show how going cloud changes how you think (planning), act (workflow), and engage (distribution). For Forrester clients, there is a new companion report to this blog detailing what the industry is doing to address these challenges and how you can follow suit:

Change how you think: Strategy and planning

Read more

Get Application Optimization Right in the Cloud Era

James Staten

There’s a new and refreshing trend in my conversations with CIOs and IT leaders — acknowledgement that cloud services are here to stay and a desire to proactively start taking advantage. But to get this right takes the right approach to application portfolio optimization. And we’ve just released a new version of our Strategic Rightsourcing tool that helps you do just that.

The decision to proactively embrace cloud services is quickly followed by two questions:

  • How do I prepare my IT organization to be cloud-forward?
  • Which apps should I move to the cloud?
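As a thought experiment on the second question, here is a minimal sketch of an app-by-app triage. To be clear, this is my own hypothetical rubric, not Forrester's Strategic Rightsourcing tool; the criteria names and weights are illustrative assumptions.

```python
# Hypothetical weights for traits that typically favor a move to public cloud.
CRITERIA = {
    "variable_load": 3,          # bursty demand benefits most from elasticity
    "low_data_sensitivity": 2,   # fewer regulatory/data-residency constraints
    "loose_coupling": 2,         # few hard dependencies on on-premises systems
    "standard_stack": 1,         # runs on commodity OS and middleware
}

def cloud_suitability(app_traits):
    """Weighted score; a higher score marks a stronger cloud candidate."""
    return sum(w for criterion, w in CRITERIA.items() if app_traits.get(criterion))

apps = {
    "marketing-site": {"variable_load": True, "low_data_sensitivity": True,
                       "loose_coupling": True, "standard_stack": True},
    "core-ledger": {"standard_stack": True},
}

# Rank the portfolio from strongest to weakest cloud candidate.
for name in sorted(apps, key=lambda n: -cloud_suitability(apps[n])):
    print(f"{name}: {cloud_suitability(apps[name])}/{sum(CRITERIA.values())}")
```

The point of even a toy rubric like this is that the answer is a ranking across the portfolio, not a yes/no per app, which is exactly the framing a rightsourcing exercise needs.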
Read more

Can Pricing Actions Make Google’s Cloud Platform Worth A Look?

James Staten

Usually when a product or service shouts about its low pricing, that's a bad thing. But in Google's case, there's unique value in its Sustained-use Discounts program, which just might make the platform worth your consideration.

Read more

Cisco UCS at Five Years – Successful Disruption and a New Status Quo

Richard Fichera

March Madness – Five Years Ago

It was five years ago, in March 2009, that Cisco formally announced “Project California,” its (possibly intentionally) worst-kept secret, as the Cisco Unified Computing System. At the time, I was working at Hewlett Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at their presumption; others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that Cisco would have trouble running a server business at margins we knew would be substantially lower than those of its core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled by Cisco’s decision to upset a nice, stable arrangement in which IBM, HP, and Dell sold possibly a billion dollars’ worth of Cisco gear between them.

Fast Forward

Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades and is closing in on the number three spot in worldwide server sales as well. The numbers are impressive:

  • 32,000 net new customers in five years, with 14,000 repeat customers
  • A claimed $2 billion+ annual run rate
  • An order growth rate claimed to be in the “mid-30s” range, probably about three times the growth rate of any competing product line

Lessons Learned

Read more

When Too Much Control Is a Bad Thing

James Staten

I know, "more control is better" is an axiom! But the statement in the title above is more often true. When we're talking about configuration control in the public cloud, it can be especially true: configuration control puts decisions in the hands of someone who knows less about the given platform and is thus more likely to get the configuration wrong. Have I fired you up yet? Then you're going to love (or loathe) my latest report, published today.

Let's look at the facts. The base configuration of an application deployed to the cloud is likely a single VM in a single availability zone without load balancing, redundancy, disaster recovery, or a performance guarantee. That's why you demand configuration control: so you can address these shortcomings. But how well do you know the cloud platform you are using? Is it better to use its autoscaling service (if it has one) or to bring your own virtual load balancers? How many instances of your VM, in which zones, are best for availability? Would it be better to configure your own database cluster or to use the provider's database-as-a-service offering? One answer almost certainly isn't correct: mirroring the configuration of the application as deployed in your corporate virtualization environment. Starting to see my point?
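The availability question above is not hand-waving; it is arithmetic. Here is a minimal sketch (my illustration, not from the report) of why instance count is a real configuration decision, under the simplifying assumption that each instance fails independently with the same probability:

```python
def availability(n_instances, p_down):
    """Probability that at least one of n independent instances is up."""
    return 1 - p_down ** n_instances

# Assumed 1% chance that any single instance is down at a given moment.
for n in (1, 2, 3):
    print(f"{n} instance(s): {availability(n, 0.01):.6f}")
```

Note the catch: instances in the same zone do not fail independently, since a zone outage takes them all down at once. That breaks the independence assumption above, which is exactly why zone placement, and knowing how your particular platform defines its zones, matters as much as instance count.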

Fact is, more configuration control may just be a bad thing.

Read more

Singapore SMBs Get A Big Boost To Upgrade Mobile And Digital Platforms

Clement Teo

by Clement Teo with Ng Zhi Ying

The government of Singapore has released its 2014 budget, which includes S$500 million (US$400 million) to help drive economic changes at small and medium-size businesses (SMBs). This spending will focus on:

Read more

Intel Bumps Up High-End Servers With New Xeon E7 V2 - A Long-Awaited and Timely Leap

Richard Fichera

The long drought at the high end

It’s been a long wait, about four years if memory serves, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.

So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its successor in the Intel “tick-tock” cycle of new process, then new architecture.

What was announced?

The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:

  • Up to 15 cores per socket
  • 24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
  • Approximately 4X I/O bandwidth improvement
  • New RAS features, including low-level memory controller modes optimized for either high-availability or performance mode (BIOS option), enhanced error recovery and soft-error reporting
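The headline memory figure follows directly from the slot count. A quick check (my arithmetic, not Intel's) of the 24-slot, 64 GB DIMM configuration:

```python
dimm_slots = 24
gb_per_dimm = 64  # largest DIMM size cited in the announcement

total_gb = dimm_slots * gb_per_dimm
print(f"{total_gb} GB, i.e. the quoted 1.5 TB")  # 1536 GB
```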
Read more