A journalist called and asked me today about the market size for wearables. I replied, “That’s not the big story.”
So what is? It's data, and what you can do with it.
First you have to collect the data and have the permission to do so. Most of these relationships are one-to-one. I have these relationships with Nike, Jawbone, Basis, RunKeeper, MyFitnessPal and a few others. I have an app for each on my phone that harvests the data and shows it to me in a way I can understand. Many of these devices have open APIs, so I can import my Fitbit or Jawbone data into MyFitnessPal, for example.
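The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration: the payload shapes, field names, and normalizer functions below are invented for the example and do not match the real Fitbit, Jawbone, or MyFitnessPal API schemas, but they show how an app can flatten two vendors' feeds into one common timeline.

```python
import json

# Hypothetical payloads showing how two devices might report the same day
# of activity through their open APIs (field names are invented; the real
# Fitbit and Jawbone APIs define their own schemas).
fitbit_style = json.loads(
    '{"activities-steps": [{"dateTime": "2014-02-10", "value": "9421"}]}')
jawbone_style = json.loads(
    '{"data": {"items": [{"date": 20140210, "details": {"steps": 9105}}]}}')

def normalize_fitbit(payload):
    """Flatten a Fitbit-style steps payload into (date, steps) records."""
    return [(d["dateTime"], int(d["value"])) for d in payload["activities-steps"]]

def normalize_jawbone(payload):
    """Flatten a Jawbone-style payload, converting YYYYMMDD ints to ISO dates."""
    records = []
    for item in payload["data"]["items"]:
        raw = str(item["date"])
        iso = "%s-%s-%s" % (raw[:4], raw[4:6], raw[6:])
        records.append((iso, item["details"]["steps"]))
    return records

# A MyFitnessPal-style aggregator merges both feeds into one timeline.
timeline = sorted(normalize_fitbit(fitbit_style) + normalize_jawbone(jawbone_style))
print(timeline)
```

The real work in such an aggregator is exactly this normalization layer: every vendor reports dates, units, and nesting differently, and the one-to-one relationships only become useful once the data lands in a single schema.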
From the story on 9to5mac.com, it is clear that Apple (as with Passbook) is creating a single place for consumers to store a wide range of healthcare and fitness information. The screenshots also suggest that users can trend this information over time. The phone itself can collect some of this information, and it increasingly does so with less battery burn thanks to efficiencies in how the sensor data is crunched, so to speak. Wearables – perhaps one from Apple – will collect more information. Other data will certainly come from third-party wearables – fitness bands, patches, bandages, socks, and shirts – and attachments such as the Smartphone Physical. There will always be tradeoffs between the amount of information you collect and the form factor. While I don't want to wear a chubby, clunky device 24x7, the hardware gets better every day.
IBM recently kicked off its big data market planning for 2014 and released a white paper that discusses how analytics create new business value for end user organizations. The major differences compared with last year’s event:
Organizational change. IBM has assigned a new big data practice leader for China, similar to what it has done for other new technologies including mobile, social, and cloud. The practice leader can draw on resources from the infrastructure (IBM STG), software (IBM SWG), and services (IBM GBS/GTS) teams, although those team members do not report directly to the practice leader.
A new analytics platform powered by Watson technology. The Watson Foundation platform has three new functions. It can be deployed on SoftLayer; it extends IBM’s big data analysis capabilities to social, mobile, and cloud; and it offers enterprises the power and ease of use of Watson analysis.
Measurable benefits from customer insights analysis. Chinese organizations have started to buy into the value of analytics and would like to invest in technology tools to optimize customer insights. AmorePacific, a South Korea-based skin care and cosmetics company, is using IBM’s SPSS predictive analytics solution to craft tailored messages to its customers and has improved its response rate by more than 30%. It primarily analyzes point-of-sale data, demographic information from its loyalty program, and market data such as property values in the neighborhoods where customers live.
It’s been a long wait, about four years if memory serves, since Intel introduced the Xeon E7, a high-end server CPU targeting the highest-performance per-socket x86 segment, from high-end two-socket servers to eight-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.
So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its 22 nm successor in the Intel “Tick-Tock” cycle of new process, then new architecture.
What was announced?
The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:
Up to 15 cores per socket
24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs
Approximately 4X I/O bandwidth improvement
New RAS features, including low-level memory controller modes optimized for either high availability or performance (selectable via BIOS), enhanced error recovery, and soft-error reporting
Improving the use of data and analytics is a top strategic priority for many companies. But organizations face major challenges ramping up their information management capabilities — in particular due to the combination of a brutal proliferation of new or enhanced technologies, emerging data sources, and difficulty in finding skilled people with the appropriate experience. As a result, companies are increasingly looking to service providers for help.
Please note that we use the term “data services” to refer to broader engagements (including data delivery, analysis, management, or governance-related services), while “data management services” form a smaller subset of services relating to finding, collecting, migrating, and integrating data.
Here are three of the key findings from our research:
More than two-thirds of organizations expect their spending on data management services to increase; 41% stated they expect spending to increase 5% to 10% in the next 12 months.
It looks like the beginning of a new technology hype cycle for artificial intelligence (AI). The media has started flooding the news with product announcements, acquisitions, and investments. The story is how AI is capturing the attention of tech and investor giants such as Google, Microsoft, and IBM. Add to that the release of the movie ‘Her’, about a man falling for his virtual assistant modeled after Apple’s Siri (one wonders if they got the idea from The Big Bang Theory, when Raj falls in love with Siri), and you know we have begun the journey of geek-dom going mainstream and cool. The buzzwords are great too: cognitive computing, deep learning, AI2.
Among those who started their careers in AI and left in disillusionment (Andrew Ng has confessed to this, yet jumped back in), and among today’s data scientists, the consensus is often that artificial intelligence is just a fancy new marketing term for good old predictive analytics. They point to the reality of Apple’s Siri, which listens and responds to requests adequately but is more often frustrating, or dismiss IBM Watson’s win on Jeopardy as data loading and brute-force programming. In their view, the real value lies in the pragmatic logic of the predictive analytics we already have.
But, is this fair? No.
First, let’s set aside what you heard about financial puts and takes. Don’t try to decipher the geek speak of what new AI is compared to old AI. Let’s talk about what is on the horizon that will impact your business.
New AI breaks the current rule that machines must be better than humans: that they must be smarter, faster analysts, or manufacture things better and cheaper.
Many of us have spent the past 10 years focusing on business intelligence solutions in order to help our businesses make better fact-based decisions. In fact, BI has been among CIOs’ top 10 priorities for more than a decade. These solutions have, for the most part, been successful — and we continue to improve our BI capabilities as the demand for fact-based decision-making goes deeper, wider, and further into the business.
This whole time, we’ve also been aware of the significant amount of unstructured data that resides within our business, and the fact that we struggle to use it to make better decisions. To begin to get value from this data, we have made our organizations more collaborative and implemented tools and platforms to support that collaboration — with varying degrees of success.
The fact remains that there’s a huge amount of unstructured information and data that we do not get value from. However, a growing number of solutions are beginning to mine elements of this data: product information, software code, legal case files, medical literature, messaging data, and other unstructured business data.
I’ve recently been working with TrustSphere, which is a messaging intelligence provider. TrustSphere has an interesting solution that mines your messaging data to get real insights and information from the mountains of emails and messages that bounce into, out of, and around your organization every day. This is an interesting concept, and TrustSphere has developed a number of use cases for its solution. I’ll be presenting at a webinar hosted by TrustSphere on February 25 — feel free to register here.
During 2014, we’ll pass a key milestone: an installed base of 2 billion smartphones globally. Mobile is becoming not only the new digital hub but also the bridge to the physical world. That’s why mobile will affect more than just your digital operations — it will transform your entire business. 2014 will be the year that companies increase investments to transform their businesses, with mobile as a focal point.
Let’s highlight a few of the mobile trends that we predict for 2014:
Competitive advantage in mobile will shift from experience design to big data and analytics. Mobile is transformative but only if you can engage your consumers in their exact moment of need with the right services, content, or information. Not only do you need to understand their context in that moment but you also need insights gleaned from data over time to know how to best serve them in that moment.
Mobile contextual data will offer deep customer insights — beyond mobile. Mobile is a key driver of big data. The most advanced marketers will recognize that mobile’s value as a marketing tool will be measured by more than just the effectiveness of marketing to people on mobile websites or apps. They will start evaluating mobile’s impact on other channels.
On January 9, 2014, IBM launched its first new business unit in 19 years to bring Watson, the machine that beat two Jeopardy champions in 2011, to the rest of us. IBM posits that Watson is the start of a third era in computing: one that began with manual tabulation, progressed to programmable systems, and has now become cognitive. Cognitive computing listens, learns, converses, and makes recommendations based on evidence.
IBM is placing big bets and big money, $1 billion, on transforming computer interaction from tabulation and programming to deep engagement. If it succeeds, our interaction with technology will become truly personal: natural conversations that are suggestive, supportive, and, as Terry Jones of Kayak explained, "make you feel good" about the experience.
There are still hurdles for IBM and for adopting organizations: expense, complexity, information access, coping with ambiguity and context, the supervision of learning, and implications of Watson's suggestions that we cannot yet foresee. To work, the ecosystem has to be open and communal, and investment is needed beyond the platform itself, in the applications and devices that deliver Watson's value. IBM's commitment and leadership are in place. The question is whether IBM and its partners can scale Watson beyond a complex custom solution into a truly transformative approach to business and our way of life.
Forrester believes that cognitive computing has the potential to address important problems that are unmet with today’s advanced analytics solutions. Though the road ahead is unmapped, IBM has now elevated its commitment to bring cognitive computing to life through this new business unit and the help of one third of its research organization, an ecosystem of partners, and pioneer companies willing to teach their private Watsons.
But what are the trends, and what are the best practices?
We are hearing four stories from all the pharma stakeholders, and these stories are driving the questions being asked of the data:
Pharma needs to get away from its focus on molecules and pivot to a holistic view of disease. As a senior IT manager at a major pharma told me in a meeting last week: "We have to deliver whole solutions, and not just pills."
Pharma needs a better understanding of prescribing behavior, both in the formulary and in the physician's office, in order to influence it and thus drive sales. As a senior marketing manager put it in a recent meeting: "In the old world, we just sprayed and prayed," meaning that marketing campaigns aimed at physicians did not discriminate as to who the physician was.
Genomic-based drugs are driving changes through the amounts and types of data that the industry must manage.
I’m sitting on my sofa at home (Yes! Home!) on Sunday morning just before Christmas. I’m “shut down” for the holidays now, but of course, I’m watching Twitter and now listening to my brilliant friends Chris Dancy and Troy DuMoulin discussing CMDB (configuration management database) on the Practitioner Radio podcast. It’s a marvelous episode, covering the topic of CMDB with impressive clarity! I highly recommend you listen to their conversation. It’s full of beautiful gems of wisdom from two people who have a lot of experience here – and it's pretty entertaining too!
I agree with everything these guys discussed. In particular, I love the part where they cover systems thinking and context as the key to linking everything conceptually. I only have one nit about this podcast, and the greater community discussion about CMDB, though. Let’s stop calling this “thing” a CMDB!
I coauthored a book with the great Carlos Casanova (his real name!) called The CMDB Imperative, but we both hate this CMDB term. This isn’t hypocritical; in fact, we make this point clear in the book. Like the vendors, we used CMDB to hit a nerve. We actually struggled with this decision, but we realized we needed to hit those exposed nerves if we were going to sell any books. Our goal was never to fund a new Aston Martin with book proceeds (if it was, we failed miserably!). We just wanted to get the word out to as many people as possible. I hope we've been able to make even a small difference!