In 2014, Forrester outlined a new approach to marketing that requires brands to harness customer context to deliver self-perpetuating cycles of real-time, two-way, insight-driven interactions. In 2015, we’ll see more marketers obsess over customers’ context. As more interaction data floods customer databases and marketing automation systems, customer-obsessed marketing leaders will strive to orchestrate brand experiences that drive unprecedented levels of engagement. For example, we predict that:
Digital marketing investments will drive brand experiences across the customer life cycle. By the end of 2015, spend on digital marketing will top $67 billion, growing to 27% of all ad spend; we believe it will surpass TV spend by 2016. But there is more to the story than ad spend. We believe marketers will branch out beyond expected digital media buys to stimulate more insight-driven interactions with customers throughout the entire customer life cycle. Supported by new streams of situational customer data and powered by the ability to precisely target audiences with programmatic media buying, marketers will deliver highly engaging brand experiences rather than just feed the top of the funnel.
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to new Xeon E5 V3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, lets each server see its assigned drives as locally attached DASD, so no changes are needed in any software that expects to access local storage. A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
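The disk-mapping point above is the key software-compatibility claim: because the enclosure's management layer presents mapped drives as ordinary local block devices, the operating system and applications need no changes. On any Linux node, that kind of transparency is directly visible with standard tools (a generic illustration of the principle, not FX-specific tooling):

```shell
# Drives mapped to a node by a chassis of this kind enumerate exactly
# like locally attached disks -- the kernel cannot tell the difference,
# so nothing above the block layer has to change.
cat /proc/partitions                              # mapped drives appear alongside local disks
lsblk -d -o NAME,TYPE,SIZE 2>/dev/null || true    # same view, if lsblk is installed
```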
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.
My colleagues Sophia Vargas, Michael Yamnitsky, and I have just published a new Quick Take report, "HP Announces Innovative Tools That Will Bridge Physical And Digital Worlds." Sophia and Michael have written about 3D printing for CIOs previously, and all three of us are interested in how computing and printing technologies can inform the BT Agenda of technology managers.
Fresh off of the announcement that HP will split into two publicly traded companies, one of those new entities -- HP Inc, the personal computing and printing business -- announced its vision for the future with two new products that help users cross the divide between physical and digital. The Multi-Jet Fusion 3D printer represents HP's long-awaited entry into 3D printing, with disruptively improved speed and quality compared to existing market entries. The Sprout desktop PC combines a 3D scanner with a touchscreen monitor, a touch-sensitive display mat, and specialized software that allows users to scan real objects, then manipulate them easily in digital format.
In both cases, a video demonstration helps you to really grok what the product is about.
CNET posted a video tour of the Multi-Jet Fusion 3D printer on YouTube:
On October 20 at TechEd, Microsoft quietly slipped in what looks like a potential game-changing announcement in the private/hybrid cloud world when it rolled out Microsoft Cloud Platform System (CPS), an integrated hardware/software system that combines an Azure-consistent on-premises cloud with an optimized hardware stack from Dell.
HP was the first US company to create a joint venture subsidiary in China; three decades later, the vendor has become a major player in the country’s consumer and enterprise markets. Among enterprises, HP has strong brand awareness for its server products and services, traditional software solutions, and IT services, but rather less for holistic application life-cycle management (ALM), especially on the mobile side. I think it’s time for technology decision-makers and enterprise architects to seriously consider adopting mobile app delivery management solutions and to evaluate HP for that purpose. Here’s why:
HP’s portfolio now covers the entire mobile app life cycle. The products HP will bring to market as part of its latest strategy will eventually cover the entire mobile application life cycle, from app design, development, and optimization to distribution and monitoring. For example, at the design stage, HP Anywhere — built on the popular open source Eclipse IDE — allows developers to write once for multiple devices within its integrated development environment. And its service virtualization feature can help virtualize third-party cloud services and make them consumable across each layer of the system architecture, including web servers, application servers, and web services.
HP’s solution has rich optimization features suitable for Chinese enterprises. At the mobile app optimization stage, HP’s Mobile Center uses a comprehensive approach to functionality, interoperability, usability, performance, and security to consolidate and automate mobile testing. Mobile Center is integrated with LoadRunner, one of the most popular performance engineering tools in the Chinese market.
While the timing of the event comes as a surprise, the fact that IBM has decided to unload its technically excellent but unprofitable semiconductor manufacturing operation does not, nor does its choice of GlobalFoundries, with whom it has had a longstanding relationship.
One of the developing trends in computing, relevant to enterprise and service providers alike, is the notion of workload-specific or application-centric computing architectures. These architectures, optimized for specific workloads, promise improved efficiencies for running their targeted workloads, and by extension the services that they support. Earlier this year we covered the basics of this concept in “Optimize Scalable Workload-Specific Infrastructure for Customer Experiences,” and this week HP has announced a pair of server cartridges for its Moonshot system that exemplify this concept. They are also representative of the next wave of ARM products that will emerge during the remainder of 2014 and into 2015 to tilt once more at the x86 windmill that currently dominates the computing landscape.
Specifically, HP has announced the ProLiant m400 Server Cartridge (m400) and the ProLiant m800 Server Cartridge (m800), both ARM-based servers packaged as cartridges for the HP Moonshot system, which can hold up to 45 of these cartridges in its approximately 4U enclosure. These servers are interesting from two perspectives: both are ARM-based products (the m400 is the first 64-bit ARM CPU offering from a tier-1 vendor), and both are being introduced with a specific target workload in mind, for which they have been specifically optimized.
Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.
One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million WS2003 systems running today, with another 10-plus million instances running as VM guests. Overall, that is possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft is really going to end support and updates as of July 2015.
Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor are they stupid. Usually WS2003 servers are legacy servers, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but it is often a LOB-specific application, with the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners — often a complex task for an old resource in a large company.
[Apologies to all who have just read this post with a sense of déjà vu. I saw a typo, corrected it, and then republished the blog, which reset the publication date. This post was originally published several months ago.]
Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise Server 11 SP3, which is representative of the latest feature sets from the Linux 3.0 et seq. kernels available to the entire Linux community, including SUSE, Red Hat, Canonical, and others. It is apparent, both from the details on SUSE 11 SP3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that previously were only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like a top-tier UNIX from only a couple of years ago.
Among the enterprise technology that caught my eye:
Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write, snapshots, and advanced logical volume management with thin provisioning. The latest releases also include advanced features like geoclustering and remote data replication to support advanced HA topologies.
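The scalability figures above are easy to sanity-check against any running Linux system, since the kernel exposes what it actually sees. A quick, generic inspection (not SUSE-specific tooling):

```shell
# Compare a host's resources against the kernel's documented ceilings
# for this generation (4096 logical x86 CPUs, 16 TB of memory).
nproc                        # logical CPUs visible to the scheduler
grep MemTotal /proc/meminfo  # physical memory the kernel manages
uname -r                     # running kernel version
```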
I'm at IDF, a major geekfest for people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to add some additional comments on its implications for data center operations, particularly in the areas of capacity planning and long-term capital budgeting.
For many years, each successive iteration of Intel’s and its partners’ roadmaps has been quietly delivering a major benefit that seldom gets top billing: additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.