Talia B. Gillis

This paper critiques traditional approaches to fair lending that restrict certain inputs, such as protected-class information (race, gender, etc.), or that require identifying the inputs that cause disparities. Based on a simulation of algorithmic lending using mortgage lending data, the author argues that focusing on inputs fails to address core discrimination concerns. She proposes an alternative fair lending framework that addresses the needs of algorithmic lenders and recognizes the potential limitations of explaining complex models.
This essay examines the legal requirements for pricing credit alongside the architecture of machine learning and intelligent algorithms, providing an overview of legislative gaps, legal solutions, and a framework for testing discrimination that evaluates algorithmic pricing rules. Using real-world mortgage data, the authors find that restricting the data characteristics available to the algorithm can increase pricing gaps while having only a limited impact on disparities.