The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect the data used in AI and machine learning systems in two ways: by undermining the accuracy of the data itself, and by producing real differences in the quality an algorithm tries to measure, such as creditworthiness. Hellman argues that achieving algorithmic fairness requires addressing both of these consequences of injustice, not just the first.