IBM's Watson And Its Implications For Smart Computing

Andrew Bartels

Like many people connected with IBM as employees, customers, or analysts, I watched IBM's Watson beat two smart humans in three games of Jeopardy. However, I was able to do so under more privileged conditions than sitting on my couch. Along with my colleague John Rymer, I attended an IBM event in San Francisco at which two of the IBM scientists who had developed Watson provided background on Watson before, during the commercial breaks in, and after the broadcast of the third and final Jeopardy game. We learned a lot about the time, effort, and approaches that went into making Watson competitive at Jeopardy (including, in answer to John's question, that its code base was a combination of Java and C++). This background information made clear how impressive Watson is as a milestone in the development of artificial intelligence. But it also made clear how much work still needs to be done to take the Watson technology and deploy it against the IBM-identified business problems in healthcare, customer service and call centers, or security.

The IBM scientists showed a scattergram of the percentage of Jeopardy questions that winning human contestants got right versus the percentage of questions they chose to answer, which showed that these winners generally got 80% or more of their answers right while attempting 60% to 70% of the questions. They then showed line charts of how Watson performed on the same two variables over time: Watson started well below this zone, then moved higher month by month until, by the time of the contest, it was winning over two-thirds of its test matches against past Jeopardy winners. What I noted, though, was how long the training process took before Watson became competitive -- not to mention the amount of computing and human resources IBM put behind the project.
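
The tradeoff in that scattergram is essentially precision versus coverage: a question-answering system only buzzes in when its confidence clears a threshold, and raising the threshold means attempting fewer questions but getting more of them right. The sketch below illustrates that relationship; the confidence scores and answers are invented for the example, not Watson's actual data.

```python
# Illustrative only: how a confidence threshold trades coverage
# (% of questions attempted) against precision (% of attempts correct).
# The (confidence, was_correct) pairs below are invented, not Watson's data.
answers = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, False),
    (0.72, True), (0.65, False), (0.60, True), (0.40, False),
    (0.30, False), (0.20, True),
]

def coverage_and_precision(scored_answers, threshold):
    """Return (fraction attempted, fraction of attempts correct) at a threshold."""
    attempted = [correct for conf, correct in scored_answers if conf >= threshold]
    if not attempted:
        return 0.0, 0.0
    return len(attempted) / len(scored_answers), sum(attempted) / len(attempted)

for threshold in (0.3, 0.5, 0.7, 0.9):
    cov, prec = coverage_and_precision(answers, threshold)
    print(f"threshold {threshold:.1f}: attempted {cov:.0%}, precision {prec:.0%}")
```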

Read more

Don’t Underestimate The Value Of Information, Documentation, And Expertise!

Andre Kindness

With all the articles written about IPv4 addresses running out, Forrester’s phone lines are lit up like a Christmas tree. Clients are asking what they should do, who they should engage, and when they should start embracing IPv6. As the old adage goes, “it takes a village to raise a child,” and Forrester is only one part of that village; so I started to compile a list of vendors and links to tactical documentation that would help customers transition to IPv6. As I combed through multiple sites, the knowledge and documentation chasm between vendors became apparent. If a vendor doesn’t understand your business goals or have the knowledge to solve your business issues, is it a good partner? Are acquisition and warranty costs the only, or even the largest, considerations when moving to a new vendor? I would say no.

Support documentation and access to knowledge are especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business, connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to managing tens to hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center network are prime examples of today’s network complexity. In response to this complexity, architects and practitioners turn to books, training materials, blogs, and repositories so that they can:

  • Set up an infrastructure more quickly or with a minimal number of issues, since there is a design guide or blueprint.
Read more

Want Better Quality? Fire Your QA Team.

Mike Gualtieri

Seriously. I recently spoke with a client who swears that software quality improved once they got rid of the QA team. Instead of making QA responsible for quality, they put the responsibility squarely on the backs of the developers producing the software. This seems to go against conventional wisdom about quality software and developers: Don't trust developers. Or, borrowing from Ronald Reagan, trust but verify.

This client is no slouch, either. The applications provide real-time market data for financial markets, and the client does more than 40 software releases per year. If the market data produced by an application were unavailable or inaccurate, then the financial market it serves would crumble. Availability and accuracy of information are absolute. This app can't go down, and it can't be wrong.

Why Does This Work?

The client said that this works because the developers know that they have 100% responsibility for the application. If it doesn't work, the developers can't say that "QA didn't catch the problem." There is no QA team to blame. The buck stops with the application development team. They better get it right, or heads will roll.

As British author Samuel Johnson famously observed, the prospect of being hanged "concentrates the mind wonderfully."

Can This Work For You?

Read more

Watson Beats Jeopardy Champions: How Can You Capitalize On This In Risk And Fraud Management?

Andras Cser

IBM's Watson (natural language processing, deduction, AI, inference, and statistical modeling, all served by a massively parallel array of POWER7 computers with a total of 2,880 processor cores and 15TB of RAM) beat the greatest Jeopardy players in three rounds over the past three days — and the matches weren't even close. Watson has shocked us, and now it's time to think: What's in it for the security professional?

The connection is easy to see: the complexity, the volume of unstructured background information, and the need to make decisions in real time are common to both.

Forrester predicts that the same level of sophistication Watson displays will appear in pattern recognition for fraud management and data protection. If Watson can answer a Jeopardy riddle in real time, it will certainly be able to find patterns of data loss, cluster security incidents and events, and identify their root causes. Mitigating and/or removing those root causes will be easy compared with identifying them . . .
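
To make the analogy a bit more concrete, here is a minimal sketch (not Watson's technology) of the kind of event clustering the paragraph envisions: grouping similar security events so an analyst can look for a common root cause. The features, values, and DBSCAN parameters are assumptions chosen purely for illustration; real event data would need feature engineering and scaling.

```python
# Minimal, illustrative sketch (not Watson): cluster security events so that
# similar incidents can be examined together for a common root cause.
# Feature choices and parameters are invented for this example.
import numpy as np
from sklearn.cluster import DBSCAN

# Each event: [failed_logins_per_min, MB_sent_outbound, distinct_hosts_touched]
events = np.array([
    [2,   0.1, 1],
    [3,   0.2, 1],
    [80,  0.0, 1],   # burst of failed logins
    [85,  0.1, 2],
    [90,  0.2, 1],
    [1, 500.0, 1],   # large outbound transfer
    [2, 480.0, 1],
])

# DBSCAN groups dense neighborhoods of similar events; a label of -1 means
# the event didn't fit any cluster and deserves individual attention.
labels = DBSCAN(eps=25.0, min_samples=2).fit_predict(events)

for cluster in sorted(set(labels)):
    members = np.where(labels == cluster)[0].tolist()
    name = "unclustered" if cluster == -1 else f"cluster {cluster}"
    print(f"{name}: events {members}")
```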

Social Networks: Good Or Evil?

Nigel Fenwick

As we witness truly historic events in the Middle East brought about in part by citizens empowered by social networks, we are also seeing disturbing trends that may yet result in social networks becoming a force for evil. 

A client recently pointed out the timeliness of this sentence from my recent report on social innovation networks:

“Even state and local government services are not immune as disgruntled citizens quickly assemble and make their voices heard, potentially to the point of toppling unpopular leaders.”

Read more

Tackling Data Leak Prevention At Forrester's Security Forum EMEA 2011

Stephanie Balaouras

For the second year in a row, I have the honor of hosting our Security Forum EMEA in London, March 17th-18th. This is Forrester's 5th annual Security Forum in Europe, and each year brings a larger, more influential audience and more exciting Forrester and industry keynotes. The theme of this year's event builds on our fall event in Boston: Building The High-Performance Security Organization. It would have been easy to focus the event on one of the myriad threats and challenges facing security and risk (S&R) professionals today — from the emergence of advanced persistent threats to the security and risk implications of cloud services, social technologies, and consumer devices in the workplace — but the real challenge for S&R professionals is not the specific response to today's threats. It's building the oversight and governance capabilities, repeatable processes, and resilient architectures that deal with today's threats but can also reliably predict, analyze, mitigate, and respond to tomorrow's threats and new business demands. Many of us in security are mired in day-to-day operational responsibilities — or, as some of us like to call it, the Hamster Wheel Of Hell.

Read more

SlideShare Brings Another Collaboration Tool To The Consumerization Of IT Party

TJ Keitt

Today, the popular online content-sharing site SlideShare released an audio/video/web conferencing solution called Zipcast. At face value, this is yet another entry into an already crowded web conferencing market. What makes this one different is that SlideShare is home to the sales and marketing presentations of 45 million users, which makes Zipcast a natural extension of that content store, allowing SlideShare clients to hold inexpensive webinars for prospects. SlideShare's offering is compelling:

  • It has a good set of features. Zipcast provides many of the presentation tools sales and marketing pros expect when hosting a webinar. There's streaming audio and streaming video of the presenter. Slides can be pushed to the attendees, and -- in a nice twist that stays true to SlideShare's roots -- those attendees can advance slides independently of the presenter.
  • It's inexpensively priced. Zipcast is available to SlideShare Basic (free) and SlideShare Pro customers at no extra cost. Pro customers get added benefits, such as an option to host password-protected meetings and use an audio bridge. Considering that Pro licenses start at $19/month, this severely undercuts WebEx and GoToMeeting pricing.
  • It's optimized for the Splinternet. If you've been following the work of my colleague Josh Bernoff, you know that when we refer to the "Splinternet," we're talking about the Internet's fragmentation thanks to mobile devices, social networks, and password protection. To deal with this, Zipcast is an HTML5 application that falls back to Flash for browsers that don't yet support that standard. And to allow quick access to meetings, people can join through a SlideShare profile or with Facebook Connect.
Read more

Quest Acquires e-DMZ: Get Ready For Consolidation In The PIM Space

Andras Cser

Quest is making aggressive moves to extend into the heterogeneous, non-Microsoft-centric land of identity and access management. After acquiring Voelcker Informatik for provisioning, Quest just announced the acquisition of e-DMZ, an enterprise-class, high-performance PIM appliance vendor. Novell (now Attachmate) acquired host access control specialist Fortefi, Oracle bought Passlogix (vGO-SAM), CA extended Access Control, and IBM integrated Encentuate's eSSO solution with ITIM as a service offering to manage privileged access. The remaining major PIM players like Cyber-Ark, Lieberman, and BeyondTrust will now face added client RFP scrutiny and price pressure from the competition. Forrester expects that new IAM entrants like Symantec/VeriSign, NetIQ (to compete with arch-rival Quest), or MSSPs will look at acquiring the remaining vendors listed above.

AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since the introduction of its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several AMD-based system products have also been cited to us by their manufacturers as enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with their attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext that the anticipated 16-core Interlagos products, based on the new Bulldozer core, appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and that AMD is still certifying them as compatible with the current sockets used for the 12-core 6000-series CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel's Xeon 7500 CPUs, these new CPUs, each with up to a 50% performance advantage per core over the current AMD parts, may deliver some impressive benchmarks in large throughput-oriented environments with high numbers of processes (a description that fits a large number of web and middleware environments). That should keep the competition in the server space at a boil, which in the end is always helpful to customers.
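
As a rough back-of-the-envelope illustration of why the throughput argument matters (the 50% per-core figure is simply the "up to" number cited above, not a benchmark result), the socket-level arithmetic looks like this:

```python
# Back-of-the-envelope only: relative socket throughput if a 16-core part
# really delivered the "up to 50%" per-core advantage cited above.
current_cores, new_cores = 12, 16
per_core_uplift = 1.5  # hypothetical "up to 50%" better per core

socket_throughput_ratio = (new_cores / current_cores) * per_core_uplift
print(f"Relative socket throughput: ~{socket_throughput_ratio:.1f}x")  # ~2.0x
```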

The Nastiest Performance Bottleneck Is Often The Database

Mike Gualtieri

Some of the most joyful technical challenges I experienced as a developer were solving application performance problems. Isn't it fun? You are Sherlock Holmes, examining the architecture, diving into the code for clues, and scouring log files to find the bottlenecks responsible for the snail's pace. However, the job is a lot harder than it is for Sherlock Holmes or CSI. It is more like being Dr. Gregory House, because you are racing against the clock: for every minute of sluggish performance, you could be losing eyeballs and therefore revenue. Worst case: the patient, i.e., your website, dies.
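
Before any clever detective work, a crude first measurement often settles whether the database really is the bottleneck: time how much of each request is spent waiting on queries versus everything else. Here is a minimal, framework-agnostic sketch; the function names are placeholders, not any particular product's API.

```python
# Minimal, framework-agnostic sketch: measure how much of a request is spent
# in the database versus the rest of the application. Names are placeholders.
import time
from contextlib import contextmanager

timings = {"db": 0.0, "total": 0.0}

@contextmanager
def timed(bucket):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[bucket] += time.perf_counter() - start

def run_queries():
    time.sleep(0.18)        # stand-in for slow SQL calls

def render_response():
    time.sleep(0.02)        # stand-in for application and rendering time

def handle_request():
    with timed("total"):
        with timed("db"):
            run_queries()
        render_response()

handle_request()
db_share = timings["db"] / timings["total"]
print(f"Database time: {db_share:.0%} of the request")  # ~90% -> suspect the queries
```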

Performance Problems Are Usually Elevated Because Of A Crisis

Your business just launched a Super Bowl commercial that confidently directed people to your website - #fail. More likely, a new release of software performs like a dog (with apologies to Greyhounds) because of lame coding and nonexistent performance testing.

You Need A Clever Solution, Fast

Read more