I had an AI aha moment at a Lexington backyard barbecue last summer. I ran into Carol Rose, the executive director of the ACLU of Massachusetts. She told me that Microsoft’s Eric Horvitz had asked her to join the board of the Partnership on AI, the group formed in 2016 by Amazon, Google, IBM, Microsoft, and others to tackle the broader societal issues raised by AI.

Carol was thinking about joining the board (she eventually did) and wondered aloud what role she could best play. (She wasn’t an AI expert at the time.) Why would this group want an experienced civil rights lawyer on the board? That’s when it struck me:

What if every AI had to be placed on the witness stand and forced to explain itself? 

At that moment, it seemed very simple: If we’re going to trust AIs (teachable algorithmic models) to make decisions for us, then why shouldn’t we demand that the AI be able to explain itself just as a person would have to? Consider these scenarios:

  • An AI prescribes a cancer treatment that consistently fails. Shouldn’t the AI be hauled up in front of the medical board and perhaps a court? Shouldn’t it be sued by the patient’s family?
  • An AI swerves onto the shoulder to avoid being hit by an aggressive driver and kills a highway maintenance worker instead. Shouldn’t the AI be handcuffed and prosecuted for manslaughter?
  • An AI places trades in stock futures to trigger volatility and causes a market collapse. Shouldn’t the SEC interrogate the AI to see whether it violated trading laws?
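
What would it even mean for an AI to "explain itself"? Here is a minimal, purely hypothetical sketch in Python (the loan-approval scenario, feature names, and data are all invented for illustration): a simple, inspectable model, a shallow decision tree, that can walk you through the exact rules it applied to one decision, much as a witness walks a jury through their reasoning.

```python
# Hypothetical sketch only: a shallow decision tree "testifying" about one decision.
# The loan-approval scenario, feature names, and data are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Made-up applicant data: income (in $k), debt ratio, years employed.
feature_names = ["income_k", "debt_ratio", "years_employed"]
X = rng.normal(loc=[60, 0.35, 5], scale=[20, 0.15, 3], size=(500, 3))
y = ((X[:, 0] > 55) & (X[:, 1] < 0.4)).astype(int)  # toy approval rule

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The "witness stand" question: why did you decide this way for this applicant?
applicant = np.array([[48.0, 0.45, 2.0]])
decision = "approve" if model.predict(applicant)[0] == 1 else "deny"
print(f"Decision: {decision}")

# Walk the path the tree actually took and report each rule it applied.
tree = model.tree_
path = model.decision_path(applicant).indices
leaf = model.apply(applicant)[0]
for node_id in path:
    if node_id == leaf:
        continue  # the leaf holds the verdict, not a rule
    name = feature_names[tree.feature[node_id]]
    threshold = tree.threshold[node_id]
    value = applicant[0, tree.feature[node_id]]
    op = "<=" if value <= threshold else ">"
    print(f"  because {name} = {value:.2f} {op} {threshold:.2f}")
```

A shallow tree is deliberately the easy case; it is inspectable by construction. The sketch is only meant to show what "testimony" from a model could look like, not to suggest that every AI can produce it this readily.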

When you start thinking of AIs this way, the responsibilities of the executives at the hospital, the car manufacturer, and the trading firm become clear: Until they can trust an AI the way they trust an employee, that AI is not yet ready to use:

  • Until the trading company CEO can explain how the AI ran amok in futures trading, he should keep it off the trading floor.
  • Until the car maker CEO can explain how the AI makes driving decisions in dire circumstances, she should keep it in the lab.
  • Until the hospital president can explain what expertise the AI has in recommending the cancer treatment, he should keep an oncologist between the AI and the patient. (In fact, this is what hospitals are doing with IBM Watson; the medical industry is far ahead in understanding its responsibilities with AI. They call it “transparency.”)

My colleague Brandon Purcell won Forrester’s Research Excellence Award this quarter for his report (paid content), The Ethics Of AI: How To Avoid Harmful Bias And Discrimination. It is excellent. Please read his post.

Here’s a plot line: Put an AI on the stand and interrogate it so we can see if we believe it and see just how human, or inhuman, it can be.