A Formal Social Marketing Structure Is Key To Success In Asia Pacific

Clement Teo

In conversations with Asia Pacific marketers, I often hear that they struggle to find and recruit the right social marketing skills, including data analysts. While staffing matters for executing tactics, having a proper team structure to execute those tactics is, in my view, even more crucial.

In fact, marketers can mitigate some of these HR challenges with a properly structured social team. My report on building a usable social team structure addresses how organizational models will evolve as social marketing matures. These models include a) the hub, b) the hub and spoke, and c) the distributed hub and spoke.

The hub, for example, is meant to help firms that are just starting out with social marketing. This could be a firm that is beginning to get more serious about using social strategically to drive business outcomes, or one that operates in a highly regulated industry like banking and finance. The centralized hub model puts all of the responsibility (and money) for social marketing in the hands of one small team. It provides training wheels for marketers, especially for learning how to coordinate or test social marketing campaigns in the early phases of social maturity. A centralized hub acts as an incubator for social marketing experimentation and allows other teams to focus on their own objectives until the social program can be implemented at scale with minimal risk. Execution can be in-house, but some marketers partner with an external agency for additional dedicated resources.

Read more

Facebook and HP Show Different Visions for Web-scale

Richard Fichera

Recently we’ve had a chance to look again at two sharply conflicting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent Open Compute Project (OCP) annual event in California.

From HP come its new CloudLine systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. Stripped of the announcement verbiage, these are minimalist rack servers designed to compete with white-box vendors such as Quanta, Supermicro, and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data, and compute-intensive workloads, these systems will allow large installations to add capacity at costs 10% to 25% less than the equivalent capacity in HP's standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP's adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.

Read more

Intel Announces Xeon SOC – Seriously Raising the Bar for AMD and ARM Competition

Richard Fichera

Intel has made no secret of its development of the Xeon D, an SOC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, where emerging competition from ARM is more viable.

The new Xeon D-1500 is clear evidence that Intel “gets it” when it comes to platforms for hyperscale computing and other workloads that are sensitive to throughput per watt and density, both in the enterprise and in the cloud. The D-1500 breaks new ground in several areas:

It is the first Xeon SOC, combining 4 or 8 Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.

It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink will also deliver further performance and performance-per-watt gains across the entire line of entry-level through midrange server parts this year.

Why is this significant?

With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, at 20 W to 45 W, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins for both general-purpose servers and embedded designs, notably for storage systems.

Read more

What Can We Expect At Mobile World Congress 2015?

Thomas Husson

I remember the first time I attended 3GSM in Cannes: It was primarily a B2B telecoms trade show and centered on DVB-H, WiMAX, and other technology-centric acronyms. Fast-forward 11 years, and Mobile World Congress (MWC) will be the center of the business world for a couple of days (March 2 to 5). Some things don’t change: We will continue to hear too much about technology. Simply ignore the hype, especially around 5G; it will have no impact at all on your marketing strategy for the next five years!

However, the list of keynote speakers is a good indication of what MWC has become: a priority event for leaders willing to transform their businesses. The CEOs of Facebook, Renault-Nissan, SAP, MasterCard, and BBVA will be speaking, and more than 4,500 CEOs will be among the 85,000 attendees (only 25% of whom are from operators). It is fascinating to see how mobile has changed the world in the past 10 years — not just in the way that we live and communicate but also in terms of disrupting every business. I strongly believe that mobile will have a bigger impact than the PC or Web revolutions. Why?

First, mobile is the fastest and most ubiquitous technology ever to spread globally. People in Asia and Africa are skipping the PC Internet and going straight to mobile phones; the phone is the ultimate convergent device and often the only way to reach people in rural areas. As Andreessen Horowitz's Benedict Evans put it, mobile is “eating the world”. It has already cannibalized several markets, such as cameras, video recorders, and GPS devices, and is now disrupting entire industries, changing the game for payments, health, and education, especially in emerging countries. Second, mobile is the bridge to the physical world, not just another “subdigital” channel; this alone has a huge impact on business models. Last, mobile is a catalyst for business transformation.

Read more

Rethinking Analytics Infrastructure

Richard Fichera

Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” to describe a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.

If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document (clusters of servers, each with its own compute and storage) may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica), and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, it may be time to reassess the supporting infrastructure so that it can continue to support Hadoop while also catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
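
To make that difference in access patterns concrete, here is a minimal PySpark sketch (my illustration, not something from the report) of what distinguishes Spark from a classic MapReduce pipeline: the working set is loaded once and then cached in cluster memory for repeated passes, which shifts the infrastructure bottleneck away from node-local disk and toward memory capacity and network bandwidth. The HDFS path and CSV field positions are hypothetical.

```python
# Illustrative sketch only: a cached RDD queried repeatedly, the access pattern
# that pushes infrastructure needs toward memory and network rather than disk.
# The HDFS path and field positions are hypothetical placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="access-pattern-sketch")

# Load once from HDFS, parse, and pin the working set in cluster memory.
lines = sc.textFile("hdfs:///data/clickstream/*.log")
events = lines.map(lambda line: line.split(",")).cache()

# Each subsequent pass reads the cached copy instead of re-scanning disk,
# unlike a chain of MapReduce jobs that writes to HDFS between stages.
events_per_day = events.map(lambda f: (f[0], 1)).reduceByKey(lambda a, b: a + b)
revenue_by_region = events.map(lambda f: (f[2], float(f[3]))) \
                          .reduceByKey(lambda a, b: a + b)

print(events_per_day.take(5))
print(revenue_by_region.take(5))

sc.stop()
```

The queries themselves are unremarkable; the point is where the data lives between them, which is exactly what the supporting infrastructure now has to accommodate.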

Read more

Shifting Sands – Changing Alliances Underscore the Dynamism of the Infrastructure Systems Market

Richard Fichera

There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage, networking, and software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.

And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and they reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.

EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:

Read more

Global Vendors Should Expand Their Ecosystem In China

Frank Liu

Back in June, I blogged about why Chinese technology management professionals have started looking more closely at domestic vendors. One reason: a government-led push away from foreign IT vendors that is forcing global vendors to expand their local ecosystem to exploit new service models and improve service delivery. Chinese tech management teams should keep an eye on new trends and be aware of the benefits they bring.

I recently attended VMware’s vForum 2014 event in Beijing. The vendor has established a local ecosystem for the three pillars of its business: the software-defined data center (SDDC), cloud services, and end-user computing. VMware is working with:

  • Huawei to refine SDDC technologies. VMware is leveraging Huawei’s technology capabilities to improve its product features: it has integrated Huawei’s Agile Controller with NSX and vCenter to operate and manage network automation and quickly migrate virtual machines online. Huawei provides the technology to unify the management of virtual and physical networks on top of VMware’s virtualization platform. This partnership can help VMware optimize its existing software features and improve the customer experience.
Read more

Microsoft And Dell Change The Private/Hybrid Cloud Game With On-Premise Azure

Richard Fichera

What was announced?

On October 20 at TechEd, Microsoft quietly slipped in what looks like a potentially game-changing announcement for the private/hybrid cloud world when it rolled out the Microsoft Cloud Platform System (CPS), an integrated hardware/software system that combines an Azure-consistent on-premises cloud with an optimized hardware stack from Dell.

Why does it matter?

Read more

IBM Sheds Yet Another Hardware Business – Pays To Get Rid Of Semiconductor Fabrication

Richard Fichera

While the timing of the event comes as a surprise, the fact that IBM has decided to unload its technically excellent but unprofitable semiconductor manufacturing operation does not, nor does its choice of GlobalFoundries, with which it has had a longstanding relationship.
 
Read more

Windows Server 2003 – A Very Unglamorous But Really Important Problem, Waiting To Bite

Richard Fichera

Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.

One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million WS2003 systems running today, with another 10+ million instances running as VM guests. Overall, that is possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.

Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor is it stupid. Usually WS2003 servers are legacy servers quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but the culprit is often a LOB-specific application, with the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers but not always the applications or the business owners, which is often a complex task for an old resource in a large company.
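
As a rough sketch of that first step, building the raw server list, the snippet below queries Active Directory for computer objects whose operatingSystem attribute matches Windows Server 2003, using Python and the ldap3 library. This is my illustration rather than anything from the report: the domain controller address, credentials, and search base are hypothetical placeholders, and it deliberately stops short of the harder work of tracing each server back to its application and business owner.

```python
# Hypothetical inventory sketch: list Windows Server 2003 computer accounts
# from Active Directory with the ldap3 library. Host, credentials, and the
# search base are placeholders and must be replaced with real values.
from ldap3 import Server, Connection, ALL, SUBTREE

AD_HOST = "ldaps://dc01.example.com"   # hypothetical domain controller
SEARCH_BASE = "DC=example,DC=com"      # hypothetical directory root

server = Server(AD_HOST, get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_inventory",
                  password="********", auto_bind=True)

# Computer objects record their OS in the operatingSystem attribute, so a
# wildcard filter catches every edition and service pack of WS2003.
conn.search(
    search_base=SEARCH_BASE,
    search_filter="(&(objectClass=computer)(operatingSystem=Windows Server 2003*))",
    search_scope=SUBTREE,
    attributes=["name", "operatingSystem", "operatingSystemServicePack",
                "lastLogonTimestamp"],
)

for entry in conn.entries:
    # This yields only the raw server list; the owning application and
    # business owner still have to be traced by hand.
    print(entry.name, entry.operatingSystem, entry.operatingSystemServicePack)

conn.unbind()
```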

Read more