Recommended Reads

Latest Recommended Reads

This article identifies and explores a gap between commonly used statistical measures of fairness and the rulings and evidentiary standards of the European Court of Justice. The authors suggest that current standards for bringing discrimination claims limit the potential for a standardized system of addressing algorithmic discrimination in the EU because they are too contextual and open to interpretation. Additionally, the authors argue that the law provides little guidance for cases in which algorithms, not humans, are the discriminators. The authors propose conditional demographic disparity as an appropriate statistical measure of fairness to harmonize legal and industry perspectives.
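As a rough sketch of how conditional demographic disparity might be computed in practice, the following Python fragment averages per-stratum disparities weighted by stratum size. The DataFrame layout, column names, and outcome coding (1 = favorable decision) are assumptions made for illustration, not a specification taken from the article.

```python
import pandas as pd

def demographic_disparity(df: pd.DataFrame, group_col: str,
                          outcome_col: str, group) -> float:
    """Share of `group` among rejected applicants minus its share
    among accepted applicants (outcome coded 1 = accepted)."""
    rejected = df.loc[df[outcome_col] == 0, group_col]
    accepted = df.loc[df[outcome_col] == 1, group_col]
    if rejected.empty or accepted.empty:
        return 0.0  # degenerate pool: no disparity measurable
    return rejected.eq(group).mean() - accepted.eq(group).mean()

def conditional_demographic_disparity(df, group_col, outcome_col,
                                      strata_col, group) -> float:
    """Weighted average of per-stratum disparities, with strata defined
    by a legitimate conditioning attribute (e.g., qualification level)."""
    weights = df.groupby(strata_col).size() / len(df)
    per_stratum = df.groupby(strata_col).apply(
        lambda s: demographic_disparity(s, group_col, outcome_col, group))
    return float((weights * per_stratum).sum())
```

A positive value indicates that the group remains over-represented among rejections relative to acceptances even after conditioning, which is the pattern the measure is designed to surface.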
This article reports on an investigation by a team of journalists who analyzed nationwide 2019 data and found that applicants of color were significantly more likely than White applicants to be denied home loans. In an analysis of more than 2 million loan applications, the disparity ranged from Latino applicants being 40% more likely to be denied to Black applicants being 80% more likely, despite comparable financial metrics and credit scores. Given these findings, the article questions the use of traditional credit scoring models and automated underwriting systems.
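To make figures like "80% more likely to be denied" concrete, the toy calculation below expresses them as a ratio of denial rates. The counts are invented for illustration and do not come from the investigation, which also held applicants' financial characteristics constant rather than comparing raw rates.

```python
def relative_denial_rate(denied_a: int, total_a: int,
                         denied_b: int, total_b: int) -> float:
    """Ratio of group A's denial rate to group B's; a value of 1.8
    reads as 'A is 80% more likely to be denied than B'."""
    return (denied_a / total_a) / (denied_b / total_b)

# Hypothetical counts, purely for illustration:
ratio = relative_denial_rate(denied_a=180, total_a=1000,
                             denied_b=100, total_b=1000)
print(f"{ratio - 1:.0%} more likely to be denied")  # prints: 80% more likely
```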
This article focuses on the role of open-source software (OSS) in the adoption and use of AI and machine learning and argues that this critical infrastructure is not subject to adequate oversight. The significance of OSS is clear: it speeds AI adoption, can reduce AI bias through means such as open-source explainable AI tools, and can improve tech sector competitiveness. However, OSS tools also carry risks: they can instead reduce competition by giving a small number of technology companies an outsized role in determining AI standards. This paper contends that increasing oversight of OSS tools is a critical step in emerging efforts to define fair and responsible use of AI.
This paper surveys existing and emerging frameworks for AI governance and points to common emphases with respect to reliability, transparency, accountability, and fairness. The authors argue that use of AI has intensified concerns about fairness and call for development of more specific and comprehensive standards by national, regional, and global standard-setting bodies to define fairness and to clarify the role of human intervention in the development and use of AI models in the financial system.
This report incorporates analysis of existing literature and interviews with experts and various stakeholders to determine how automated systems can best support and include traditionally marginalized populations. The report focuses on the problem of algorithmic bias embedded in data and systems. The report proposes a “Digital Bill of Rights” that articulates seven core rights designed to ensure that systems meet expectations for fairness, accountability, and transparency.
This publication considers common types of bias in AI systems that can lead to public distrust in applications across all sectors of the economy and proposes a three-stage framework for reducing such biases. The National Institute of Standards and Technology (NIST) intentionally focuses on how AI systems are designed, developed, and used, and on the societal context in which these systems operate, rather than on specific solutions for bias. As a result, its framework aims to enable users of AI systems to identify and mitigate bias more effectively through engagement across diverse disciplines and stakeholders, including those most directly affected by biased models. This proposal represents a step by NIST toward the development of standards for trustworthy and responsible AI. NIST is accepting comments on the framework until August 5, 2021.
This paper finds that the benefits of mortgage refinancing at lower interest rates during the pandemic have not been shared equally among racial and ethnic groups. Based on a sample of 5 million mortgages, the authors estimate that only 6% of Black borrowers and 9% of Hispanic borrowers refinanced between January and October 2020, compared with almost 12% of White borrowers. Among borrowers who experienced distress during the peak months of May and June 2020, the share still behind on their mortgage payments as of February 2021 was 9 percentage points higher among Black borrowers and 2.2 percentage points higher among Hispanic borrowers than among White borrowers.
The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect data used in AI and machine learning systems in two ways: by undermining the accuracy of the data itself and by producing real differences in the quality an algorithm tries to measure, such as creditworthiness. Hellman argues that we must address both of these consequences of injustice, not just the first, in order to achieve algorithmic fairness.
This post examines consumers who remained in forbearance one year after the pandemic lockdowns started. The authors found that 13% of all mortgage borrowers were in forbearance for at least one month during the past year and that 35% of those borrowers were still in forbearance as of March 2021. More than 70% of consumers still in forbearance were not making any payments in March, suggesting that they are relatively vulnerable to serious delinquency as forbearance programs end.
This paper critiques traditional approaches to fair lending that restrict certain inputs, such as protected class information (race, gender, etc.), or that require identifying the inputs that cause disparities. Based on a simulation of algorithmic lending using mortgage data, the author argues that focusing on inputs fails to address core discrimination concerns. She also proposes an alternative fair lending framework that addresses the needs of algorithmic lenders and recognizes the potential limitations of explaining complex models.
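As a toy illustration of the input-focused critique, the sketch below drops the protected attribute from a model ("fairness through unawareness") but keeps a correlated proxy, and the outcome gap persists. All data here is synthetic and the setup is an assumption made for illustration; it is not the author's simulation or her mortgage data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic data: a protected attribute, a correlated proxy (think of a
# ZIP-code feature), a legitimate feature, and an outcome shaped in part
# by the protected attribute (e.g., through historical disadvantage).
protected = rng.integers(0, 2, n)               # 0/1 group label
proxy = protected + rng.normal(0, 0.5, n)       # feature correlated with group
income = rng.normal(0, 1, n)                    # legitimate feature
y = (income + 0.8 * protected + rng.normal(0, 1, n) > 0.4).astype(int)

# "Fairness through unawareness": the protected column is never an input.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, y)
approved = model.predict(X)

for g in (0, 1):
    rate = approved[protected == g].mean()
    print(f"group {g}: approval rate {rate:.1%}")
```

Because the proxy still carries group information, the model reconstructs the disparity even though the protected attribute was excluded, which is the dynamic that makes purely input-based restrictions an incomplete safeguard.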