It’s been a long wait, about four years if memory serves, since Intel introduced the Xeon E7, a high-end server CPU targeting the highest per-socket x86 performance, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.
So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 22 nm Sandy Bridge architecture and go straight to Ivy Bridge, its architectural successor in Intel’s “Tick-Tock” cycle of new process, then new architecture.
What was announced?
The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:
Up to 15 cores per socket
24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
Approximately 4X I/O bandwidth improvement
New RAS features, including low-level memory controller modes optimized for either high availability or performance (a BIOS option), enhanced error recovery, and soft-error reporting
Improving the use of data and analytics is a top strategic priority for many companies. But organizations face major challenges ramping up their information management capabilities — in particular due to the combination of a brutal proliferation of new or enhanced technologies, emerging data sources, and difficulty in finding skilled people with the appropriate experience. As a result, companies are increasingly looking to service providers for help.
Please note that we use the term “data services” to refer to broader engagements (including data delivery, analysis, management, or governance-related services), while “data management services” form a smaller subset of services relating to finding, collecting, migrating, and integrating data.
Here are three of the key findings from our research:
More than two-thirds of organizations expect their spending on data management services to increase; 41% stated they expect spending to increase 5% to 10% in the next 12 months.
It looks like the beginning of a new technology hype for artificial intelligence (AI). The media has started flooding the news with product announcements, acquisitions, and investments. The story is how AI is capturing the attention of tech firm and investor giants such as Google, Microsoft, and IBM. Add to that the release of the movie ‘Her’, about a man falling for his virtual assistant modeled after Apple’s Siri (think they got the idea from The Big Bang Theory, when Raj falls in love with Siri), and you know we have begun the journey of geek-dom going mainstream and cool. The buzz words are great too: cognitive computing, deep learning, AI2.
For those who started their careers in AI and left in disillusionment (Andrew Ng confessed to this, yet jumped back in), as for many data scientists today, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to Apple’s Siri, whose ability to listen and respond to requests is adequate but more often frustrating, or dismiss IBM Watson’s win on Jeopardy as data loading and brute-force programming. From their perspective, the real value lies in the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
First, let’s set aside what you heard about financial puts and takes. Don’t try to decipher the geek speak of what new AI is compared to old AI. Let’s talk about what is on the horizon that will impact your business.
New AI breaks the current rule that machines must simply be better than humans: smarter, faster analysts, or able to manufacture things better and cheaper.
Many of us have spent the past 10 years focusing on business intelligence solutions in order to help our businesses make better fact-based decisions. In fact, BI has been among CIOs’ top 10 priorities for more than a decade. These solutions have, for the most part, been successful — and we continue to improve our BI capabilities as the demand for fact-based decision-making goes deeper, wider, and further into the business.
This whole time, we’ve also been aware of the significant amount of unstructured data that resides within our business, and the fact that we struggle to use it to make better decisions. To begin to get value from this data, we have made our organizations more collaborative and implemented tools and platforms to support that collaboration — with varying degrees of success.
The fact remains that there’s a huge amount of unstructured information and data that we do not get value from. However, a growing number of solutions are beginning to mine elements of this data: product information, software code, legal case files, medical literature, messaging data, and other unstructured business data.
I’ve recently been working with TrustSphere, which is a messaging intelligence provider. TrustSphere has an interesting solution that mines your messaging data to get real insights and information from the mountains of emails and messages that bounce into, out of, and around your organization every day. This is an interesting concept, and TrustSphere has developed a number of use cases for its solution. I’ll be presenting at a webinar hosted by TrustSphere on February 25 — feel free to register here.
During 2014, we’ll pass a key milestone: an installed base of 2 billion smartphones globally. Mobile is becoming not only the new digital hub but also the bridge to the physical world. That’s why mobile will affect more than just your digital operations — it will transform your entire business. 2014 will be the year that companies increase investments to transform their businesses, with mobile as a focal point.
Let’s highlight a few of the mobile trends that we predict for 2014:
Competitive advantage in mobile will shift from experience design to big data and analytics. Mobile is transformative but only if you can engage your consumers in their exact moment of need with the right services, content, or information. Not only do you need to understand their context in that moment but you also need insights gleaned from data over time to know how to best serve them in that moment.
Mobile contextual data will offer deep customer insights — beyond mobile. Mobile is a key driver of big data. Most advanced marketers will get that mobile’s value as a marketing tool will be measured by more than just the effectiveness of marketing to people on mobile websites or apps. They will start evaluating mobile’s impact on other channels.
On January 9, 2014, IBM launched its first new business unit in 19 years to bring Watson, the machine that beat two Jeopardy champions in 2011, to the rest of us. IBM posits that Watson marks the start of a third era in computing: one that began with manual tabulation, progressed to programmable systems, and has now become cognitive. Cognitive computing listens, learns, converses, and makes recommendations based on evidence.
IBM is placing big bets and big money, $1 billion, on transforming computer interaction from tabulation and programming to deep engagement. If they succeed, our interaction with technology will truly be personal through interactions and natural conversations that are suggestive, supportive, and as Terry Jones of Kayak explained, "makes you feel good" about the experience.
There are still hurdles for IBM and organizations, such as expense, complexity, information access, coping with ambiguity and context, the supervision of learning, and the implications of suggestions that are unrecognized today. To work, the ecosystem has to be open and communal. Investment is needed beyond the platform for applications and devices to deliver on Watson value. IBM's commitment and leadership are in place. The question is if IBM and its partners can scale Watson to be something more than a complex custom solution to become a truly transformative approach to businesses and our way of life.
Forrester believes that cognitive computing has the potential to address important problems that are unmet with today’s advanced analytics solutions. Though the road ahead is unmapped, IBM has now elevated its commitment to bring cognitive computing to life through this new business unit and the help of one third of its research organization, an ecosystem of partners, and pioneer companies willing to teach their private Watsons.
December 26th at my house was probably a lot like it was at yours: We ate leftovers; we binge-watched shows we’d missed earlier this year; and we played with toys. Not kids’ toys—tech toys. The one we played with most is also the one I spent the most time researching before I bought it: the 3D printer.
Between printing demo pieces and whistles, I checked out my favorite sites to see if any new stories had been posted over the holiday. One of them appears to have implemented a cookie-based content targeting strategy, as both its tech and design sections were packed with headlines about 3D printing. I was pleased to see this attempt at relevance, but it failed in my case. Why? Because it was too one-dimensional.
By just looking at my recent cookies, an automated system could conclude that I’m interested in 3D printing in the abstract. But in fact, I was just trying to learn everything I could in order to make the most informed purchase. If the targeting strategy had taken into consideration the timing of those cookies (I only ever dug into the topic between Thanksgiving and the second week of December), my affinity data from Facebook and other social networks, and my long-standing content habits, I would probably have ended up with headlines related to smartphones, tablets, and wearables: things I’m more interested in now that my Christmas shopping is done. 3D printing headlines may have seemed more relevant, but they didn't get a single click from me.
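The reasoning above can be made concrete with a small scoring sketch. This is purely illustrative, not a real ad-platform API: the signal weights, the two-week half-life, and the example inputs are all assumptions I'm making to show how decaying a cookie burst over time, while keeping durable signals (long-standing habits, social affinity) undecayed, would demote a finished research topic like 3D printing below a durable interest like smartphones.

```python
from datetime import datetime, timedelta

HALF_LIFE_DAYS = 14  # assumption: burst interest halves every two weeks

def recency_weight(last_seen: datetime, now: datetime) -> float:
    """Exponential decay: cookies from three weeks ago count far less
    than activity from yesterday."""
    age_days = (now - last_seen).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def topic_score(cookie_hits: int, last_seen: datetime,
                habit_score: float, social_affinity: float,
                now: datetime) -> float:
    """Blend a decayed cookie burst with durable signals.
    Weights (0.5 / 0.3 / 0.2) are illustrative assumptions."""
    # Normalize the burst to [0, 1] so raw hit counts can't dominate.
    burst = min(1.0, cookie_hits / 50) * recency_weight(last_seen, now)
    return 0.5 * burst + 0.3 * habit_score + 0.2 * social_affinity

now = datetime(2014, 1, 2)
# "3D printing": heavy cookie burst, but the research ended three weeks ago,
# and there's little habit or social-graph support behind it.
print(topic_score(40, now - timedelta(days=21), 0.1, 0.2, now))
# "smartphones": modest recent activity backed by long-standing habits
# and social affinity.
print(topic_score(8, now - timedelta(days=1), 0.9, 0.8, now))
```

With these numbers, the smartphone topic outscores 3D printing even though the 3D-printing cookie trail is five times heavier, which is exactly the "more than one dimension" point: recency and durable signals change the ranking that a raw cookie count would produce.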
But what are the trends, and what are the best practices?
From pharma stakeholders across the board, we are hearing four stories that are driving the questions being asked of the data:
Pharma needs to get away from its focus on molecules and pivot to a holistic view of disease. As a senior IT manager at a major pharma told me in a meeting last week: "We have to deliver whole solutions, and not just pills."
Pharma needs to better understand prescribing behavior, both in the formulary and in the physician's office, in order to influence it and thus drive sales. As a senior marketing manager put it in a recent meeting: "In the old world, we just sprayed and prayed," meaning that marketing campaigns aimed at physicians did not discriminate as to who the physician was.
Genomic-based drugs are driving changes through the amounts and types of data that the industry must manage.
I’m sitting on my sofa at home (Yes! Home!) on Sunday morning just before Christmas. I’m “shut down” for the holidays now, but of course, I’m watching Twitter and now listening to my brilliant friends Chris Dancy and Troy DuMoulin discussing CMDB (configuration management database) on the Practitioner Radio podcast. It’s a marvelous episode, covering the topic of CMDB with impressive clarity! I highly recommend you listen to their conversation. It’s full of beautiful gems of wisdom from two people who have a lot of experience here – and it's pretty entertaining too!
I agree with everything these guys discussed. In particular, I love the part where they cover systems thinking and context as the key to linking everything conceptually. I only have one nit about this podcast, and the greater community discussion about CMDB, though. Let’s stop calling this “thing” a CMDB!
I coauthored a book with the great Carlos Casanova (his real name!) called The CMDB Imperative, but we both hate this CMDB term. This isn’t hypocritical; in fact, we make this point clear in the book. Like the vendors, we used CMDB to hit a nerve. We actually struggled with this decision, but we realized we needed to hit those exposed nerves if we were going to sell any books. Our goal was never to fund a new Aston Martin with book proceeds (if it had been, we failed miserably!). We just wanted to get the word out to as many people as possible. I hope we've been able to make even a small difference!
The majority of large organizations have either already shifted away from using BI as just another back-office process and toward competing on BI-enabled information or are in the process of doing so. Businesses can no longer compete just on the cost, margins, or quality of their products and services in an increasingly commoditized global economy. Two kinds of companies will ultimately be more successful, prosperous, and profitable: 1) those with richer, more accurate information about their customers and products than their competitors and 2) those that have the same quality of information as their competitors but get it sooner. Forrester's Forrsights Strategy Spotlight: Business Intelligence And Big Data, Q4 2012 (we are currently fielding a 2014 update, stay tuned for the results) survey showed that enterprises that invest more in BI have higher growth.
The software industry recognized this trend decades ago, resulting in a market swarming with startups that appear and (very often) find success faster than large vendors can acquire them. The market is still jam-packed and features multiple dynamics (see more details here):
All ERP and software stack vendors offer leading BI platforms.
. . . but there's also plenty of room for independent BI vendors.
Departmental desktop BI tools aimed at business users are scaling up.
Enterprise BI platform vendors are going after self-service use cases.
Cloud offers options to organizations that would rather not deal with BI stack complexity.
Hadoop is breathing new life into open source BI.
The line between BI software and services is blurring.