It seems that almost every week these days, there is a news item about the dangerous potential for artificial intelligence to discriminate against groups of people.  Earlier this week at the Artificial Intelligence, Ethics, and Society (AIES) conference, a presentation on an AI system trained to help the LAPD classify crimes as gang-related had attendees up in arms, with some reportedly storming out.  In this case, as in many of its well-known precursors (Google Photos, Microsoft Tay…), the intention was not to build an unethical model; it was to solve a complex problem with cutting-edge technology.  But as they say, the road to hell is paved with good intentions… and bad data.

My recently published report, The Ethics Of AI: How To Avoid Harmful Bias And Discrimination, identifies the different ways harmful bias can pollute the machine learning models that act as the brain of most AI systems.  More importantly, it prescribes ways companies can avoid these types of bias, both from a technical perspective and a broader organizational one.  Make no mistake: there is no easy fix.  Machine learning models are inherently discriminators – that is, they identify useful patterns and signals in data to segment a population.  But in a world where GDPR looms large and values-based consumers shift loyalty based on a brand’s ethical scorecard, firms need to make sure these models aren’t discriminating against customers based on gender, race, ethnicity, age, sexual preference, or religion, or in other similarly harmful ways.
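To make the risk concrete, here is a minimal sketch (my own illustration, not a method prescribed in the report) of one widely used fairness check: a disparate impact ratio, which compares a model’s rate of favorable outcomes for one group against a reference group.  Under the “four-fifths rule” from US employment guidelines, a ratio below 0.8 is a common red flag.  All data, group labels, and function names below are hypothetical.

```python
# A minimal, hypothetical sketch of a disparate impact check.
# 1 = favorable model outcome (e.g., loan approved), 0 = unfavorable.

def disparate_impact_ratio(predictions, groups, protected, reference):
    """Positive-outcome rate of the protected group divided by that of the reference group."""
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(protected) / positive_rate(reference)

# Hypothetical predictions for ten customers in two groups, "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(preds, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67 -- below the 0.8 threshold
```

A single ratio is only a starting point, of course; checks like this need to be paired with the broader technical and organizational practices the report describes.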

We are at a pivotal moment as a species.  We can either use AI for good or allow it to cement and reinforce past inequity.  If we are lazy, it will do just that.  But if we are thoughtful and vigilant, AI can have a positive impact on all people.  At least, that is my hope.

On a personal note, writing this report is one of the reasons I feel so grateful to be a Forrester analyst.  I got to interview forward-thinking folks at the Institute of Electrical and Electronics Engineers (IEEE) who are working on developing standards to address algorithmic bias.  I also spoke with Shannon Vallor, a professor at Santa Clara University whose work focuses on the ethical implications of emerging technology, and Solon Barocas, a professor at Cornell University who cowrote Big Data’s Disparate Impact, a seminal paper on algorithmic bias.  These are only a few of the incredibly bright people who were kind enough to speak with me about harmful bias in AI.

This was not just a passion project for me – it was also one for the many brilliant colleagues who were involved in the ideation, interviews, writing, and editing of the report.  Thank you for challenging my thinking and making this document better.

So, please enjoy the report.  And let’s strive to be FAIR.