It’s been abundantly clear for a while that in 2017, artificial intelligence (AI) is going to be front and center of vendor marketing as well as enterprise interest. Not that AI is new – it’s been around for decades as a computer science discipline. What’s different now is that advances in technology have made it possible for companies ranging from search engine providers to camera and smartphone manufacturers to deliver AI-enabled products and services, many of which have become an integral part of daily life. More than that, those same AI techniques and building blocks are increasingly available for enterprises to leverage in their own products and services without needing to bring on board AI experts, a breed that’s both rare and expensive.

Sentient systems capable of true cognition remain a dream for the future. But AI today can help organizations transform everything from operations to the customer experience. The winners will be those who not only understand the true potential of AI but are also keenly aware of what’s needed to deploy a performant AI-based system that minimizes rather than creates risk and doesn’t result in unflattering headlines.

These are the three key challenges all AI projects must tackle:

  • Underestimating the time and effort it takes to get an AI-powered system up and running. Even if the components are available out of the box, systems still need to be trained and fine-tuned. Depending on the exact use case and the accuracy required, getting a new system up and running can take anything from a few hours to a couple of years. And that assumes you have a well-curated data set available; if you don’t, building one is a challenge in its own right.
  • AI systems are only as good as the people who program them and the data those people feed them. It's also people who decide to what degree to rely on the AI system and when to apply human expertise. Ignoring this principle will have unintended, likely negative consequences and could even make the difference between life and death. These are not idle warnings: We’ve already seen a number of well-publicized cases where training bias ended up discriminating against entire population groups, or where image recognition software turned out to be racist; and yes, lives have already been put at risk by badly trained AI programs (the first sketch after this list shows one crude way to screen training data for exactly this kind of group-level bias). Lastly, there’s the law of unintended consequences: people developing AI systems tend to focus on how they want the system to work, not on how somebody with criminal or mischievous intent could subvert it.
  • Ignore legal, regulatory and ethical implications at your peril. For example, you're at risk of breaking the law if the models you run take into account factors that mustn't be used as the basis for certain decisions (e.g., race or sex). Or you could find yourself with a compliance breach if you’re obliged to provide an exact audit trail of how a decision was arrived at, yet neither the software nor its developers can explain how the result came about (the second sketch after this list illustrates one way to keep decisions both free of protected attributes and auditable). A lot of grey areas surround the use of predictions when making decisions about individuals; these require executive-level discussion and decisions, as does the thorny issue of dual use.
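
To make the training-bias point concrete, here is a minimal sketch in plain Python of screening historical decision data for group-level bias before it is used to train a model. The records, group names, and the 80% ("four-fifths rule") threshold are all hypothetical illustrations, not figures from any real system.

```python
# Minimal sketch: screen historical training data for group-level bias
# before a model bakes it in. All data and thresholds are hypothetical.
from collections import defaultdict

# Hypothetical historical decisions: (group, approved) pairs drawn from
# the data a model would be trained on.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_b", False), ("group_b", False),
    ("group_b", True), ("group_b", False),
]

# Tally decisions per group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates:", {g: f"{r:.0%}" for g, r in rates.items()})

# A common (if crude) screen is the "four-fifths rule": flag the data
# if any group's rate falls below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Warning: {group} approval rate {rate:.0%} is below "
              f"80% of the highest group rate {best:.0%}")
```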
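
And to illustrate the audit-trail point, here is a minimal sketch of a deliberately simple, explainable scoring function: it rejects inputs that contain protected attributes and records each factor's contribution so the decision can be reconstructed later. The feature names, weights, and approval threshold are hypothetical, and a real system would also have to watch for proxy variables that correlate with protected attributes.

```python
# Minimal sketch of an auditable decision: a simple linear score whose
# per-factor contributions are logged, so each outcome can be explained
# after the fact. Names, weights, and the threshold are hypothetical.

PROTECTED = {"race", "sex"}  # attributes the model must not use
WEIGHTS = {"income": 0.5, "tenure_years": 0.3, "missed_payments": -0.8}

def score_applicant(applicant: dict) -> tuple[bool, list[str]]:
    # Refuse outright if a protected attribute sneaks into the inputs.
    leaked = PROTECTED & applicant.keys()
    if leaked:
        raise ValueError(f"protected attributes in input: {leaked}")

    audit = []   # human-readable decision trail
    total = 0.0
    for feature, weight in WEIGHTS.items():
        value = applicant.get(feature, 0.0)
        contribution = weight * value
        total += contribution
        audit.append(f"{feature}={value} * weight {weight} "
                     f"-> {contribution:+.2f}")
    approved = total >= 1.0  # hypothetical approval threshold
    audit.append(f"total score {total:.2f} -> "
                 f"{'approved' if approved else 'declined'}")
    return approved, audit

approved, trail = score_applicant(
    {"income": 3.0, "tenure_years": 2.0, "missed_payments": 1.0})
for line in trail:
    print(line)
```

The design choice here is deliberate: a transparent linear score trades some predictive power for the ability to hand a regulator a line-by-line account of every decision.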

Interested in a deeper dive? You can find more detail on all of these points, as well as guidance on how to get your organization’s AI strategy off the ground, in our report “Artificial Intelligence: A CIO's Guide To AI's Promises And Perils.”