This article addresses concerns about the ethical use of AI algorithms given their prominence in so many facets of daily life. The author argues that distinguishing the trustworthiness of claims made about an algorithm from claims made by an algorithm can improve how we evaluate individual algorithms and their uses, and can promote ‘intelligent transparency.’ He proposes a four-part framework, inspired by pharmaceutical development, for evaluating the trustworthiness of algorithms.
This report from an explainable AI competition raises the question of whether model developers need to rely on “black box” machine learning techniques at all, or whether they can instead meet their needs using more interpretable forms of machine learning.