Despite initial optimism that AI and machine learning systems could aid many aspects of the Covid-19 response, many did not perform as well as anticipated. This article highlights potential reasons for the underperformance of these systems, particularly reasons related to data. For example, machine learning tools developed for outbreak detection and for evaluating responses to available Covid remedies fared poorly, failing both to diagnose Covid reliably from models trained on varied datasets and to predict outbreaks. Looking ahead, the authors focus on addressing these challenges by merging datasets from multiple sources and clarifying international rules for data sharing.
This article analyzes AI fairness both as essential in itself and as a way to address the problem of trust in AI systems. The author advocates an interdisciplinary approach in which computer science and the social sciences work together. Three recommendations are outlined: (1) train managers to act as “devil’s advocates” who evaluate algorithmic decision-making using common sense and intuitive notions of right and wrong; (2) require leaders to articulate their companies’ values and moral norms to help inform trade-offs between utility and human values in AI deployment; (3) hold data scientists and organizational leaders jointly responsible for evaluating the fairness of AI models against both technical definitions and broader company values.