Recently, I interviewed Carol Smith, a senior research scientist at Carnegie Mellon’s Software Engineering Institute, about AI and ethics. She told me: “You’re bringing yourself to the projects you do at work, and we’re all biased and flawed. We must accept that building a fancy system doesn’t change that. We’re going to make mistakes, and there will be issues with what we make. The more imaginative we can be early on, the more prepared we can be for failure.”

I asked her about her paper, “Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development,” because Forrester has written extensively about AI and ethics in reports and blog posts and has highlighted the importance of diverse teams in reports like “Data-Fueled Products: How To Thrive On The Design And Data Science Collision.” Here are the highlights that stood out to me from Carol’s paper and our conversation:

  • For the public-sector AI conference audience the paper was originally intended for, she groups user experience, product design, interaction design, and many other disciplines together as “curiosity experts.” This simplification improves the conversation with AI experts, focusing them on a critical part of a designer’s skill set rather than creating title confusion among those unfamiliar with human-centered design.
  • Her human-machine teaming checklist provides a guide for early conversations about potential problems. Carol advises that team members work through the checklist individually and then come together to discuss their concerns.
  • Carol expands on the causes of bias and bad user experiences. She says these problems are rooted in: 1) the limited time teams spend imagining what could go wrong and 2) data science and AI education that overlooks potential negative effects on individuals. She highlights “Black Mirror ideation” as a way to surface potential issues much earlier. Named for the popular TV show, this method asks participants to imagine how the system they’re building could be used maliciously or cause unintended consequences. She cites the article “Black Mirror, Light Mirror: Teaching Technology Ethics Through Speculation” and a discussion on Twitter as inspiration. Another version of Black Mirror ideation includes the creation of plot points, fictitious quotes, and a movie poster.

Like Forrester, Carol advises designers to remain undeterred in the face of pushback from software engineers and data scientists who claim that algorithms are “black boxes.” She points out that the statement is often used as “a way to dismiss questions and hand-wave potential concerns away.” Instead, design teams should persist in asking where the data comes from and how it might be biased. Forrester highlights more ways to have that conversation in our data-fueled products research.
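
Neither Carol’s paper nor our conversation prescribes specific tooling for those questions, but even a small script can make “how might the data be biased?” concrete. Below is a minimal Python sketch that compares each group’s share of a dataset against a reference share (census proportions, say); the records, field name, and reference numbers are all hypothetical, and a real audit would go much further.

```python
from collections import Counter

def representation_gaps(records, attribute, reference_shares):
    """Compare each group's observed share of the dataset
    against an expected reference share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected  # negative => under-represented
    return gaps

# Hypothetical applicant records and census-style reference shares.
applicants = [
    {"age_band": "18-25"}, {"age_band": "18-25"},
    {"age_band": "26-40"}, {"age_band": "26-40"},
    {"age_band": "26-40"}, {"age_band": "41-65"},
]
census = {"18-25": 0.20, "26-40": 0.35, "41-65": 0.30, "65+": 0.15}

for group, gap in representation_gaps(applicants, "age_band", census).items():
    print(f"{group}: {gap:+.2f}")  # e.g., "65+: -0.15" flags an absent group
```

Even a crude gap report like this gives a design team something specific to bring to the “black box” conversation.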

As always, if you’re working on these topics, get in touch with me at ahogan@forrester.com or at ahhogan on LinkedIn.