The authors examine how modern antidiscrimination law applies to algorithmic fairness techniques and find those techniques incompatible with equal protection jurisprudence, which demands “individualized consideration” and bars formal, quantitative weights for race regardless of purpose. They look to government-contracting cases as an alternative grounding for algorithmic fairness, because those cases permit explicit, quantitative race-based remedies when the actor bears responsibility for historical discrimination. This doctrinal path is limited, however: it requires that any adjustment be calibrated to the entity’s own responsibility for the historical discrimination causing present-day disparities. The authors argue that these cases nonetheless offer a legally viable route for algorithmic fairness under current constitutional doctrine, and they call for more research at the intersection of algorithmic fairness and causal inference so that bias mitigation can be tailored to the specific causes and mechanisms of bias.