The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect the data used in AI and machine learning systems in two ways: by undermining the accuracy of the data itself and by producing real differences in the quality an algorithm tries to measure, such as creditworthiness. Hellman argues that we must address both of these consequences of injustice, not just the first, in order to achieve algorithmic fairness.
This article examines alternative fairness metrics from conceptual and normative perspectives, with particular attention to predictive parity and error rate ratios. The article also questions the common view that anti-discrimination law prevents model developers from using race, gender, or other protected characteristics to improve the fairness and accuracy of the algorithms they design.
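For concreteness, predictive parity is commonly defined as equal positive predictive value (PPV) across protected groups, while error rate ratios compare false positive and false negative rates between groups. The sketch below is a hypothetical illustration of how these quantities might be computed for two groups; the function name, variable names, and synthetic data are assumptions for exposition, not drawn from the article.

```python
import numpy as np

def group_metrics(y_true, y_pred):
    """Compute PPV, false positive rate, and false negative rate
    for one group's binary outcomes and predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    ppv = tp / (tp + fp)  # predictive parity compares PPV across groups
    fpr = fp / (fp + tn)  # error rate ratios compare FPR and FNR
    fnr = fn / (fn + tp)
    return ppv, fpr, fnr

# Hypothetical data: y_true = actual outcomes, y_pred = model predictions,
# a = protected attribute splitting the population into two groups.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

ppv0, fpr0, fnr0 = group_metrics(y_true[a == 0], y_pred[a == 0])
ppv1, fpr1, fnr1 = group_metrics(y_true[a == 1], y_pred[a == 1])

# Predictive parity holds when PPVs are (approximately) equal;
# error rate ratios near 1 indicate balanced FPR and FNR across groups.
print(f"PPV ratio: {ppv1 / ppv0:.2f}")
print(f"FPR ratio: {fpr1 / fpr0:.2f}")
print(f"FNR ratio: {fnr1 / fnr0:.2f}")
```

As the article's discussion suggests, these criteria can conflict: when base rates differ across groups, a model generally cannot equalize PPV and both error rates simultaneously, which is why the choice among metrics is a normative question rather than a purely technical one.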