A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color. The report is an excellent synthesis of the work of a talented team of collaborators, with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture.[i] Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” area – the cheaper booths on the edge of the floor where smaller startups congregate – which was also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
Practice makes perfect. In daily life, if someone has proven experience and a good reputation in a specific area over a relatively long time, we normally consider them trustworthy. For example, if Amazon Web Services claimed that it was a trusted public cloud service provider — if not the most trusted provider — not many professionals in the US would argue against that.
However, this does not necessarily hold true in China; cloud service providers need to receive an official authorization from the government that certifies them as a provider of trusted cloud services (TRUCS). I recently attended the International Mobile and Internet Conference, where I got an update on TRUCS.
TRUCS is an official recognition of standards compliance and quality. It is issued by the trusted cloud services working group of the China Academy of Telecommunications Research of the Ministry of Industry and Information Technology. The working group defined the basic principles in June 2013; earlier this year, it finalized the evaluation standards in the form of a cloud service agreement reference framework.
The problems Rick identified in CX ecosystems seem to be the result of ossified organizations, cultures, and business relationships. This means CX leaders must drive new levels of responsiveness and creativity into their ecosystems. And the way you drive these attributes into your ecosystems is to seize on the concept of business agility. My colleague Craig Le Clair outlined 10 dimensions of business agility that provide the market, organizational, and process frameworks necessary to embrace market and operational changes as a matter of routine. This is merely setting the strategy, though; executing it requires a marriage between the business and technology strategies.
This Forum will help you identify brand-new software opportunities and run with them. It will hit on the must-have competencies that will empower application development and delivery leaders to execute on their company’s engagement strategies. This includes accelerating development processes, creating digital experiences, reaching mobile customers, and exploiting analytics and big data. Forrester analysts will deliver forward-thinking content while industry specialists – from companies such as McDonald’s, Mastercard, and GE Capital – will provide insight into some real and revolutionary new business approaches that are relevant to you right now.
I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum – the very small and the very large as they apply to x86 servers. The range, and the variation in intended workloads, is pretty spectacular as we diverge from the mean, which for the enterprise means a 2-socket Xeon server, usually in a 1U or 2U form factor.
At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.
Slightly bigger than a credit card, their BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, it is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload lives in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster, with up to 4,000 of them mounted in a standard rack, it’s likely that these will remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility – a range that spans military systems to portable desktop modules.
The technologist in me (still) loves getting the monthly Web server report from Netcraft.com. Astounding statistics like the number of registered public Web sites (998 million in August, up from 23,000 in 1995) and active Web sites (179 million), put into the context of history, show simply and directly just how deeply the Internet has penetrated our lives over the last 19 years.
Vacation is a good time to read things that you can never get to while working. My list is quite long, but I scanned it and took a copy of “The Zero Marginal Cost Society” by Jeremy Rifkin to the beach. Forrester focuses heavily on digital disruption, helping enterprises avoid being disrupted by new digitally based business models. We write about business agility and how to drive better customer experiences through mobile, social, and cloud. But we pretty much stop at what disruption means to an enterprise, as enterprises are our clients.
Jeremy Rifkin takes the digital disruption concept to its ultimate end state and projects the effect on the entire economic system. He paints a somewhat murky but thought-provoking picture of where this all leads. The basic idea? Digital alternatives, fueled by the Internet of Things, big data, the sharing economy, 3D printing, AI, and analytics, will drive the marginal cost of producing a product or service to near zero, disrupting the entire capitalist system. Established companies can’t generate profit, emerging companies can only maintain temporary advantage, and people no longer have “real jobs.” Instead, they ride the wave he calls “the democratization of innovation,” which works outside of traditional business and government.
In a previous blog entry, I argued that everyone needs to digitize their business, but not every business knows what to do. Transforming into a digital business, especially if you’re a traditional enterprise, is hard work. However, we believe that Asia Pacific is already primed for digital disruption.
In my report, The State Of Digital Business In Asia Pacific In 2014, we found that, while the highest-profile digital business pioneers are headquartered in North America, market demand in Asia Pacific is more conducive to long-term digital disruption. Asia Pacific has five times as many Internet users and smartphone subscribers as the US and almost as much online retail spending as the US and Europe combined. You just need to look at regional powerhouses like Alibaba.com and Commonwealth Bank of Australia and their multibillion-dollar businesses to grasp the rewards of digital business success in Asia Pacific.
However, knowing what these firms have accomplished is insufficient; knowing how to get there is more critical. You should:
Too many wearables today have screens that look like miniaturized smartphones.
Just as smartphones shouldn’t be PC screens shrunk down to a 4-5” screen, smartwatches shouldn’t look like smartphones shrunk to 1”. Nor is it a matter of responsive web design (RWD), which resizes web content to fit the screen.
Samsung's Gear 2 looks like a tiny smartphone screen.
Instead, it’s a different type of design philosophy – one with its DNA in the mobile revolution, extending mobile thinking even further.
Let’s start with the concept of mobile moments. As my colleagues write in The Mobile Mind Shift, mobile moments are those points in time and space when someone pulls out a mobile device to get what he or she wants immediately, in context. In the case of wearables, the wearer often won’t need to pull out a device – it’s affixed to her wrist, clothing, or eyeglasses. But she might need to lift her wrist, as a visitor to Disney World must do with a MagicBand.
Now we’re getting closer to what wearables should be. But there are additional dimensions to wearables that obviate the need for pixel-dense screens:
I am just back from the first-ever Cognitive Computing Forum, organized by DATAVERSITY in San Jose, California. I am not new to artificial intelligence (AI); I was a software developer in the early days of AI, just out of university. Back then, if you worked in AI, you would be called a SW Knowledge Engineer, and you would use symbolic programming (LISP) and first-order logic programming (Prolog) or predicate calculus (MRS) to develop “intelligent” programs. Lots of research was done on knowledge representation and on tools to support knowledge engineers in developing applications that by nature required heuristic problem solving. Heuristics are necessary when problems are undefined, non-linear, and complex. Deciding which financial product you should buy based on your risk tolerance, the amount you are willing to invest, and your personal objectives is a typical problem we used to solve with AI.
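To make the idea concrete: expert systems of that era encoded heuristics as explicit if-then rules over a handful of inputs. Here is a minimal, purely illustrative sketch in Python of that rule-based style applied to the financial-product example; the product names, thresholds, and rules are invented for illustration, not drawn from any real system.

```python
# Toy rule-based recommender in the spirit of 1980s expert systems.
# Every product name and threshold below is an invented example.

def recommend_product(risk_tolerance, amount, horizon_years):
    """Apply simple if-then heuristics to suggest a financial product."""
    if risk_tolerance == "low":
        # Conservative investors: favor capital preservation.
        return "money market fund" if horizon_years < 3 else "bond fund"
    if risk_tolerance == "medium":
        return "balanced fund"
    # High risk tolerance: larger sums over long horizons can absorb volatility.
    if amount >= 100_000 and horizon_years >= 10:
        return "equity growth fund"
    return "index fund"

print(recommend_product("low", 20_000, 2))      # money market fund
print(recommend_product("high", 150_000, 15))   # equity growth fund
```

The point is not the (deliberately naive) rules themselves but the programming style: knowledge captured as explicit, inspectable conditions rather than learned from data, which is exactly what made knowledge engineering a distinct job.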
Fast forward 25 years, and AI is back under a new name: cognitive computing. An old friend of mine, who’s never left the field, says, “AI has never really gone away, but has undergone some major fundamental changes.” Perhaps it never really went away from labs, research, and very niche business areas. The change, however, is largely about the context: the hardware and software scale constraints are gone, and there are tons of data and knowledge digitally available (ironically, AI missed big data 25 years ago!). But this is not what I want to focus on.