Management consultants and business intelligence, analytics, and big data system integrators often use the terms accelerators, blueprints, solutions, frameworks, and products to show off their industry and business domain (sales, marketing, finance, HR, etc.) expertise, experience, and specialization. Unfortunately, they often use these terms synonymously, while in pragmatic reality their meanings vary quite widely. Here’s our pragmatic take on the tangible reality behind the terms (in increasing order of comprehensiveness):
Frameworks. Often little more than a collection of best practices and lessons learned from multiple client engagements. These can sometimes shave 5%-10% off a project’s time and effort, mainly by enabling buyers to learn from the mistakes others have already made rather than repeating them.
Solution Accelerators. Also known as Blueprints, these are usually a collection of deliverables, content, and other artifacts from prior client engagements. Such artifacts could take the form of data connectors, transformation logic, data models, metrics, reports, and dashboards, but they are often little more than existing deliverables that can be copied, pasted, or otherwise leveraged in a new client engagement. Like Frameworks, Solution Accelerators often come with a set of best practices. Solution Accelerators can help you hit the ground running: rather than starting from scratch, you find yourself 10%-20% into a project.
Solutions. A step above Solution Accelerators, Solutions prepackage artifacts from prior client engagements, by cleansing and stripping them of proprietary content and/or irrelevant info. Count on shaving 20% to 30% off the effort.
So you need some work done that you’ve never had done before, or you need to buy something you’ve never bought before. What should you pay? That can be a tough question. What seems reasonable? Sometimes we set arbitrary rules: it’s OK if it’s under $50, or under $100. But that’s just reassurance that you’re not getting ripped off too badly. Certainly the best way to avoid that outcome is to know how much that service or thing is worth, or at least to know what others have paid for the same thing.
Fortunately now, in the age of the customer, that’s easier to find out. Price information for most consumer goods is easier to come by, making the buying process more efficient. But what about governments? We’ve all heard about the $600 toilet seat or the $400 hammer. Stories of government spending excess and mismanagement abound. Some are urban legends or misrepresentations. Others have legs — such as the recent reports of Boeing overcharging the US Army. While these incidents are likely not things of the past, open data initiatives have made significant progress in exposing spending data and improving transparency. Citizens can visit sites such as USAspending.gov for US federal government spending or "Where Does My Money Go?" for details on UK national government spending, and most large cities publish spending as well.
To jump on this R feeding frenzy, most leading BI vendors claim that they “integrate with R” — but what does that claim really mean? Our take: not all BI/R integration is created equal. When evaluating BI platforms for R integration, Forrester recommends considering the following integration capabilities:
Usually, when a product or service shouts about its low pricing, that’s a bad thing. But in Google’s case, there’s unique value in its Sustained-use Discounts program, which just might make it worth your consideration.
A journalist called and asked me today about the market size for wearables. I replied, “That’s not the big story.”
So what is? It's data, and what you can do with it.
First you have to collect the data and have the permission to do so. Most of these relationships are one-to-one. I have these relationships with Nike, Jawbone, Basis, RunKeeper, MyFitnessPal and a few others. I have an app for each on my phone that harvests the data and shows it to me in a way I can understand. Many of these devices have open APIs, so I can import my Fitbit or Jawbone data into MyFitnessPal, for example.
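The cross-device import the paragraph describes boils down to normalizing each vendor's export format onto a common schema and merging the records into one timeline. Here is a minimal sketch of that idea; the field names and sample payloads are hypothetical, not the actual Fitbit or Jawbone API responses:

```python
import json
from collections import defaultdict

# Hypothetical device exports -- real open APIs return richer JSON,
# but the normalize-and-merge pattern is the same.
fitbit_export = json.loads(
    '[{"date": "2014-03-01", "steps": 8200},'
    ' {"date": "2014-03-02", "steps": 10450}]')
jawbone_export = json.loads(
    '[{"day": "2014-03-01", "step_count": 7900}]')

def normalize(records, date_key, steps_key, source):
    """Map one device's field names onto a common schema."""
    return [{"date": r[date_key], "steps": r[steps_key], "source": source}
            for r in records]

timeline = (normalize(fitbit_export, "date", "steps", "fitbit")
            + normalize(jawbone_export, "day", "step_count", "jawbone"))

# One combined reading per day, keeping the highest count
# when two devices report the same date.
daily_steps = defaultdict(int)
for rec in timeline:
    daily_steps[rec["date"]] = max(daily_steps[rec["date"]], rec["steps"])
```

An aggregator app such as MyFitnessPal effectively plays the role of `daily_steps` here: one consolidated view built from many one-to-one device relationships.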
From the story on 9to5mac.com, it is clear that Apple (as with Passbook) is creating a single place for consumers to store a wide range of healthcare and fitness information. From the screenshots, it also appears that one can trend this information over time. The phone is capable of collecting some of this information, and is increasingly doing so with less battery burn thanks to efficiencies in how the sensor data is crunched, so to speak. Wearables — perhaps one from Apple — will collect more information. Other data will certainly come from third-party wearables — such as fitness wearables, patches, bandages, socks, and shirts — and attachments, such as the Smartphone Physical. There will always be tradeoffs between the amount of information you collect and the form factor. While I don’t want to wear a chubby, clunky device 24x7, the tradeoff gets better every day.
IBM recently kicked off its big data market planning for 2014 and released a white paper that discusses how analytics create new business value for end user organizations. The major differences compared with last year’s event:
Organizational change. IBM has assigned a new big data practice leader for China, similar to what it’s done for other new technologies including mobile, social, and cloud. IBM can integrate resources from its infrastructure (IBM STG), software (IBM SWG), and services (IBM GBS/GTS) teams, although those team members do not report directly to the new practice leader.
A new analytics platform powered by Watson technology. The Watson Foundation platform has three new functions. It can be deployed on SoftLayer; it extends IBM’s big data analysis capabilities to social, mobile, and cloud; and it offers enterprises the power and ease of use of Watson analysis.
Measurable benefits from customer insights analysis. Chinese organizations have started to buy into the value of analytics and would like to invest in technology tools to optimize customer insights. AmorePacific, a South Korea-based skin care and cosmetics company, is using IBM’s SPSS predictive analytics solution to craft tailored messages to its customers and has improved its response rate by more than 30%. It primarily analyzes point-of-sale data, demographic information from its loyalty program, and market data such as property values in the neighborhoods where customers live.
It’s been a long wait — about four years, if memory serves me well — since Intel introduced the Xeon E7, a high-end server CPU targeting the highest per-socket x86 performance, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.
So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its architectural successor in the Intel “Tick-Tock” cycle of new process, then new architecture.
What was announced?
The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:
Up to 15 cores per socket
24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
Approximately 4X I/O bandwidth improvement
New RAS features, including low-level memory controller modes optimized for either high-availability or performance mode (BIOS option), enhanced error recovery and soft-error reporting
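The memory figure in the spec list is easy to sanity-check with back-of-the-envelope arithmetic (assuming, as the announcement does, 64 GB DIMMs in all 24 slots per socket):

```python
# Back-of-the-envelope check of the per-socket memory claim above.
dimm_slots_per_socket = 24
dimm_size_gb = 64  # the largest DIMM size quoted in the announcement

per_socket_gb = dimm_slots_per_socket * dimm_size_gb  # 1536 GB, i.e. ~1.5 TB
eight_socket_tb = per_socket_gb * 8 / 1024            # an 8-socket system: 12 TB
```

That 12 TB ceiling for an eight-socket configuration is what makes the E7 V2 relevant for large single-image, in-memory workloads.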
Improving the use of data and analytics is a top strategic priority for many companies. But organizations face major challenges ramping up their information management capabilities — in particular due to the combination of a brutal proliferation of new or enhanced technologies, emerging data sources, and difficulty in finding skilled people with the appropriate experience. As a result, companies are increasingly looking to service providers for help.
Please note that we use the term “data services” to refer to broader engagements (including data delivery, analysis, management, or governance-related services), while “data management services” form a smaller subset of services relating to finding, collecting, migrating, and integrating data.
Here are three of the key findings from our research:
More than two-thirds of organizations expect their spending on data management services to increase; 41% stated they expect spending to increase 5% to 10% in the next 12 months.
It looks like the beginning of a new technology hype cycle for artificial intelligence (AI). The media has started flooding the news with product announcements, acquisitions, and investments. The story is how AI is capturing the attention of tech giants and investors such as Google, Microsoft, and IBM. Add to that the release of the movie ‘Her’, about a man falling for his virtual assistant modeled after Apple’s Siri (they may have gotten the idea from The Big Bang Theory, when Raj falls in love with Siri), and you know we have begun the journey of geek-dom going mainstream and cool. The buzzwords are great too: cognitive computing, deep learning, AI2.
Among those who started their careers in AI and left in disillusionment (Andrew Ng confessed to this, yet jumped back in), as well as many of today’s data scientists, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to Apple’s Siri, whose ability to listen and respond to requests is adequate but more often frustrating, or dismiss IBM Watson’s win on Jeopardy as data loading and brute-force programming. From their perspective, the real value lies in the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
First, let’s set aside what you heard about financial puts and takes. Don’t try to decipher the geek speak of what new AI is compared to old AI. Let’s talk about what is on the horizon that will impact your business.
New AI breaks the current rule that machines must be better than humans: smarter, faster analysts, or able to manufacture things better and cheaper.
Many of us have spent the past 10 years focusing on business intelligence solutions in order to help our businesses make better fact-based decisions. In fact, BI has been among CIOs’ top 10 priorities for more than a decade. These solutions have, for the most part, been successful — and we continue to improve our BI capabilities as the demand for fact-based decision-making goes deeper, wider, and further into the business.
This whole time, we’ve also been aware of the significant amount of unstructured data that resides within our business, and the fact that we struggle to use it to make better decisions. To begin to get value from this data, we have made our organizations more collaborative and implemented tools and platforms to support that collaboration — with varying degrees of success.
The fact remains that there’s a huge amount of unstructured information and data that we do not get value from. However, a growing number of solutions are beginning to mine elements of this data: product information, software code, legal case files, medical literature, messaging data, and other unstructured business data.
I’ve recently been working with TrustSphere, which is a messaging intelligence provider. TrustSphere has an interesting solution that mines your messaging data to get real insights and information from the mountains of emails and messages that bounce into, out of, and around your organization every day. This is an interesting concept, and TrustSphere has developed a number of use cases for its solution. I’ll be presenting at a webinar hosted by TrustSphere on February 25 — feel free to register here.