This article examines alternative fairness metrics from conceptual and normative perspectives, with particular attention to predictive parity and error-rate ratios. It also questions the common view that anti-discrimination law prevents model developers from using race, gender, or other protected characteristics to improve the fairness and accuracy of the algorithms they design.
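For reference, these two metrics are commonly formalized as follows; this is a standard formulation in the fairness literature, and the article's own notation may differ. Predictive parity requires that the positive predictive value of a classifier $\hat{Y}$ for an outcome $Y$ be equal across groups defined by a protected attribute $A$:

$$P(Y = 1 \mid \hat{Y} = 1, A = a) = P(Y = 1 \mid \hat{Y} = 1, A = b),$$

while an error-rate ratio compares group-specific error rates, for example the ratio of false positive rates:

$$\frac{P(\hat{Y} = 1 \mid Y = 0, A = a)}{P(\hat{Y} = 1 \mid Y = 0, A = b)}.$$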