Nowadays, there are two topics that I’m very passionate about. The first is the fact that spring is finally here and it’s time to dust off my clubs and take in my first few rounds of golf. The second topic that I’m currently passionate about is the research I’ve been doing around the connection between big data and big process.
While most enterprise architects are familiar with the promise — and, unfortunately, the hype — of big data, very few are familiar with the newer concept of “big process.” Forrester first coined this term back in August of 2011 to describe the shift we see in organizations moving from siloed approaches to BPM and process improvement to more holistic approaches that stitch all the pieces together to drive business transformation.
Our working definition for big process is:
“Methods and techniques that provide a more holistic approach to process improvement and process transformation initiatives.”
The US government will start tracking hospital readmission rates. Why? Because we spend some $15B each year treating returning patients. Many of these would not need to return if they followed instructions — which involve meds, follow-up outpatient visits, diet, and you get the picture. To be fair, it's sometimes not the patient's fault. They often do not get a proper discharge summary, and in some cases they are just not together enough to comply. They may lack transportation, communication skills, or the ability to follow instructions. Doesn't it make sense to identify those at-risk patients and do something a little extra? It does. No question. And it translates to real money and better care, and this is where big data comes in — and it's nice to see some real use cases that do not involve monitoring our behavior to sell something. Turns out — no surprise here — the structured EMR patient record, if one exists, is full of holes and gaps — including missing treatments from other providers, billing history, or indicators of personal behavior — that may provide a clue to readmission potential. The larger picture of information — mostly unstructured — can now be accessed and analyzed, and high-risk patients can have mini workflows or case management apps to be sure they are following instructions. IBM is doing some great work in this area with the analytics engine Watson and partners such as Seton. Take a few minutes to read this article.
Join us at Forrester’s CIO Forum in Las Vegas on May 3 and 4 for “The New Age Of Business Intelligence.”
The amount of data is growing at tremendous speed — inside and outside of companies’ firewalls. Last year we hit approximately 1 zettabyte (1 trillion gigabytes) of data on the public Web, and the speed at which new data is created continues to accelerate, including unstructured data in the form of text, semistructured data from M2M communication, and structured data in transactional business applications.
Fortunately, our technical capabilities to collect, store, analyze, and distribute data have also been growing at a tremendous speed. Reports that used to run for many hours now complete within seconds using new solutions like SAP’s HANA or other tailored appliances. Suddenly, a whole new world of data has become available to the CIO and his business peers, and the question is no longer if companies should expand their data/information management footprint and capabilities but rather how and where to start. Forrester’s recent Strategic Planning Forrsights For CIOs data shows that 42% of all companies are planning an information/data project in 2012, more than for any other application segment — including collaboration tools, CRM, or ERP.
Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, available immediately, as well as commitments for additional hardware and a major enhancement of its UCS Manager software later in 2012. Highlights include:
New servers – No surprise here, Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
Fabric improvements – Because Cisco has a relatively unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One is on the motherboard, and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
Now, I wasn’t born in Texas, but I got here as soon as I could. I’ve lived in Dallas, TX for 30 years, so I consider myself an adopted native Texan. I’ll be at South by Southwest Interactive this weekend, so I thought I’d share some tips for all my current and future friends. For those of you from out of state – known as furriners – I hope you’ll find this advice helpful.
It seems everyone’s obsessed with Facebook’s IPO right now. And while CMOs are beginning to understand the possibilities of Facebook and other social technologies to connect and engage with customers, many CIOs remain unclear on the value of Facebook.
A question many business executives ask is this: “What’s the value of having someone like your page?”
On its own, maybe not much. But the true potential lies in the ability to collect insights about the people who like brands, products, or services – be they your own or someone else’s.
For example, the chart below shows the percentage of consumers by age group who have “liked” Pepsi or Coca-Cola. These data suggest Coca-Cola is significantly more popular with 17- to 28-year-olds than Pepsi, while Pepsi appears more popular with the 36-70 crowd. I pulled these data points directly from the Facebook likes of each of the brand pages using a free consumer tool from MicroStrategy called Wisdom. Using this tool, I can even tell that Coca-Cola fans are likely to also enjoy the odd Oreo cookie and bag of Pringles.
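The aggregation behind a chart like that is simple once you have the like data. Here is a minimal sketch, with invented sample records rather than actual Wisdom output: given fan records of (age, brand liked), compute each brand's share of fans within each age bracket.

```python
# A toy version of the likes-by-age-group aggregation. The age brackets
# and fan records are illustrative, not real Facebook data.
from collections import Counter

def share_by_age_group(fans, bins=((17, 28), (29, 35), (36, 70))):
    """fans: iterable of (age, brand). Returns {age_range: {brand: percent}}."""
    counts = {b: Counter() for b in bins}
    for age, brand in fans:
        for lo, hi in bins:
            if lo <= age <= hi:
                counts[(lo, hi)][brand] += 1
    result = {}
    for bracket, brand_counts in counts.items():
        total = sum(brand_counts.values())
        # Percent of fans in this bracket who liked each brand.
        result[bracket] = ({brand: round(100 * n / total, 1)
                            for brand, n in brand_counts.items()}
                           if total else {})
    return result
```

In practice the interesting part is what L16 hints at: once likes are joined across pages, the same aggregation reveals affinities between brands, not just demographics.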
Reading the recent Harvard Business Review article from Tom Davenport et al., it occurred to me that next best offer (NBO) is actually a subset of what my colleague Jim Kobielus calls “next best action” (NBA). And when you couple that predictive thinking with advances in process mining (see Wil van der Aalst’s post and the Process Mining Manifesto), it clearly becomes possible to optimize operations on the fly. First of all, the organization could mine the existing system (the transaction logs of traditional systems or a newly implemented BPM/CRM system) to identify what happens today. This then enables you to identify the outcomes that are most interesting (or those you want to achieve) and then optimize the NBA accordingly.
We take for granted a process definition where the next action is predetermined by the arc of the process definition. But if we can do NBO in 200 milliseconds, we can also do NBA in a similar time frame. Directed arcs in process models and the business rules that go with them start to become a little redundant. This sort of combination (mining and NBA) enables wide-open goal-oriented optimization for all sorts of processes, not just those related to marketing and cross-sell/upsell ideas.
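The combination described above can be sketched in a few lines: mine historical traces from the transaction log, and for each process state recommend the next step whose past occurrences most often led to the goal outcome, rather than following a predetermined arc. The traces and event names below are invented, and real process-mining techniques (per van der Aalst) are far richer than this frequency count.

```python
# A toy goal-oriented next-best-action learner built from mined event traces.
# Trace format and event names are hypothetical.
from collections import defaultdict

def learn_next_best_action(traces, goal):
    """traces: list of (events, outcome) pairs, where events is an ordered
    list of activity names. Returns {state: best next action}, choosing the
    action whose historical traces most often reached the goal outcome."""
    # state -> action -> [times the trace ended in the goal, total times seen]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for events, outcome in traces:
        for prev, nxt in zip(events, events[1:]):
            stats[prev][nxt][1] += 1
            if outcome == goal:
                stats[prev][nxt][0] += 1
    return {state: max(actions, key=lambda a: actions[a][0] / actions[a][1])
            for state, actions in stats.items()}
```

With enough log volume, the same lookup that serves an offer in 200 milliseconds can serve any recommended action, which is exactly why the directed arcs start to look redundant.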
Demands by users of business intelligence (BI) applications to "just get it done" are turning typical BI relationships, such as business/IT alignment and the roles that traditional and next-generation BI technologies play, upside down. As business users demand more control over BI applications, IT is losing its once-exclusive control over BI platforms, tools, and applications. It's no longer business as usual: For example, organizations are supplementing previously unshakable pillars of BI, such as tightly controlled relational databases, with alternative platforms. Forrester recommends that business and IT professionals responsible for BI understand and start embracing some of the latest BI trends — or risk falling behind.
Traditional BI approaches often fall short for the two following reasons (among many others):
BI hasn't fully empowered information workers, who still largely depend on IT.
BI platforms, tools, and applications aren't agile enough.
Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily not believe them, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out, sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling just one major systems partner – but it just happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.
At its core (unintended but not bad pun), the HP Hyperscale business unit’s Project Moonshot and Calxeda’s server technology are about improving the efficiency of web and cloud workloads, and they promise improvements in excess of 90% in power efficiency and similar improvements in physical density compared with current x86 solutions. As I noted in my first post on ARM servers and other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And workloads (such as memcache, Hadoop, static web servers) will be selected for their fit to this new platform, so the workloads that run on these new platforms will potentially come close to the cases quoted by HP and Calxeda.
In the good old days, computer industry trade shows were larger-than-life events – booths with barkers and actors, ice cream and espresso bars and games in the booth, magic acts and surging crowds gawking at technology. In recent years, they have for the most part become sad shadows of their former selves. The great SHOWS are gone, replaced with button-down vertical and regional events where you are lucky to get a pen or a miniature candy bar for your troubles.
Enter Oracle OpenWorld. Mix 45,000 people, hundreds of exhibitors, and one of the world’s largest software and systems companies looking to make an impression, and you have the new generation of technology extravaganza. The scale is extravagant: taking up the entire Moscone Center complex (N, S, and W) along with a couple of hotel venues, closing off a block of a major San Francisco street for a week, and throwing a little evening party for 20 or 30 thousand people.
But mixed with the hoopla, which included wheel-of-fortune giveaways that had hundreds of people snaking around the already crowded exhibition floor in serpentine lines, mini golf and whack-a-mole games in the exhibit booths along with the aforementioned espresso and ice cream stands, there was genuine content and the public face of some significant trends. So far, after 24 hours, some major messages come through loud and clear: