AI Can Be Both Accurate and Transparent

In 2019, Apple’s credit card business came under fire for offering a woman one-twentieth the credit limit it offered her husband. When she complained, Apple representatives reportedly told her, “I don’t know why, but I swear we’re not discriminating. It’s just the algorithm.”

Today, more and more decisions are made by opaque, unexplainable algorithms like this one, often with similarly problematic results. From credit approvals and customized product or promotion recommendations to résumé screening and fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are often acted upon with little to no insight into how they are made.

This approach creates real risk. Research has shown that a lack of explainability is one of executives’ most common concerns about AI and that it substantially undermines users’ trust in and willingness to use AI products, not to…


This article was written by François Candelon and originally published on hbr.org