Recommended Reads

This report surveys widely discussed mechanisms for promoting fairness, accountability, and transparency (FAT) in algorithmic systems and assesses the options available to governments, internet platforms, and other stakeholders for advancing these characteristics. The authors call for greater focus on understanding how the various mechanisms can be used in concert and for developing comprehensive FAT policies and standards.
This article identifies and explores a gap between commonly used statistical measures of fairness and the rulings and evidentiary standards of the European Court of Justice. The authors suggest that the current standards for bringing discrimination claims limit the potential for a standardized approach to algorithmic discrimination in the EU because they are highly contextual and open to interpretation. They further argue that the law offers little guidance for cases in which algorithms, not humans, are the discriminators. The authors propose conditional demographic disparity as an appropriate statistical measure of fairness to harmonize legal and industry perspectives, as sketched below.
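To make the proposed measure concrete, here is a minimal sketch of conditional demographic disparity (CDD) for a binary accept/reject decision, assuming the metric is a stratum-size-weighted average of within-stratum demographic disparity; the column names and the choice of a credit-score band as the "legitimate" conditioning attribute are illustrative assumptions, not details from the article.

    import pandas as pd

    def demographic_disparity(df, group_col, outcome_col, disadvantaged):
        # DD: share of the disadvantaged group among rejected applicants
        # minus its share among accepted applicants (positive = disparity).
        rejected = df[df[outcome_col] == 0]
        accepted = df[df[outcome_col] == 1]
        return ((rejected[group_col] == disadvantaged).mean()
                - (accepted[group_col] == disadvantaged).mean())

    def conditional_demographic_disparity(df, group_col, outcome_col,
                                          disadvantaged, stratum_col):
        # CDD: within-stratum DD averaged over strata of the conditioning
        # attribute (e.g., credit-score band), weighted by stratum size.
        total = 0.0
        for _, stratum in df.groupby(stratum_col):
            dd = demographic_disparity(stratum, group_col, outcome_col,
                                       disadvantaged)
            total += len(stratum) / len(df) * dd
        return total

A positive CDD after conditioning would indicate residual disparity that the legitimate attribute does not explain; strata with no accepted or no rejected applicants would need explicit handling in real data.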
The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency released a guide for community banks on conducting due diligence on financial technology companies. Drawing on existing guidance that articulates risk management expectations for third-party relationships, the guide highlights areas where due diligence processes can be adapted to reflect the constraints of doing business with early- and expansion-stage companies.
This article reports on an investigation by a team of journalists which, analyzing more than 2 million loan applications from nationwide 2019 data, found that applicants of color were significantly more likely than White applicants to be denied home loans. The disparity ranged from 40% more likely for Latino applicants to 80% more likely for Black applicants, despite comparable financial metrics and credit scores. Given these findings, the article questions the use of traditional credit scoring models and automated underwriting systems.
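As a reading aid for figures like "80% more likely," here is a hedged sketch of the underlying relative-rate calculation on hypothetical loan-level data; the investigation itself compared applicants with comparable financial profiles, which a raw ratio like this does not do, and the column names are illustrative.

    import pandas as pd

    def relative_denial_rate(applications: pd.DataFrame, group: str,
                             reference: str = "White") -> float:
        # Hypothetical columns: 'race', and 'denied' (1 = application denied).
        # A result of 1.8 reads as "denied 80% more often than the reference".
        rates = applications.groupby("race")["denied"].mean()
        return float(rates[group] / rates[reference])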
This article focuses on the role of open-source software (OSS) in the adoption and use of AI and machine learning and argues that this critical infrastructure is not subject to adequate oversight. The significance of OSS is clear: it speeds AI adoption, can reduce AI bias through tools such as open-source explainable AI, and can improve tech-sector competitiveness. However, OSS also carries risks, namely that concentrating development in a small number of technology companies can undermine competition and give those companies an outsized role in determining AI standards. The paper contends that increasing oversight of OSS tools is a critical step in emerging efforts to define fair and responsible use of AI.
This paper surveys existing and emerging frameworks for AI governance and identifies common emphases on reliability, transparency, accountability, and fairness. The authors argue that the use of AI has intensified concerns about fairness and call on national, regional, and global standard-setting bodies to develop more specific and comprehensive standards that define fairness and clarify the role of human intervention in the development and use of AI models in the financial system.
This report draws on analysis of the existing literature and interviews with experts and other stakeholders to determine how automated systems can best support and include traditionally marginalized populations. Focusing on the problem of algorithmic bias embedded in data and systems, it proposes a “Digital Bill of Rights” that articulates seven core rights designed to ensure that systems meet expectations for fairness, accountability, and transparency.
This paper finds that the benefits of refinancing mortgages at the lower interest rates available during the pandemic have not been shared equally across racial and ethnic groups. Based on a sample of 5 million mortgages, the authors estimate that only 6% of Black borrowers and 9% of Hispanic borrowers refinanced between January and October 2020, compared with almost 12% of White borrowers. Among borrowers who experienced distress during the peak months of May and June 2020, the share still behind on mortgage payments as of February 2021 was 9 percentage points higher among Black borrowers and 2.2 percentage points higher among Hispanic borrowers than among White borrowers.
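One small clarification for reading these numbers: the refinancing figures are shares of each group, and the distress comparisons are differences in percentage points rather than relative differences. A quick sketch using the paper's headline refinancing figures, with the percentages rounded as reported:

    # Approximate refinancing shares reported for Jan-Oct 2020.
    refi = {"Black": 0.06, "Hispanic": 0.09, "White": 0.12}

    # Percentage-point gap versus White borrowers ...
    pp_gap = {g: round((refi["White"] - r) * 100, 1) for g, r in refi.items()}

    # ... versus the relative gap, which reads very differently: Black
    # borrowers refinanced at half the White rate (6% vs. almost 12%).
    rel_gap = {g: round(1 - r / refi["White"], 2) for g, r in refi.items()}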
This publication considers common types of bias in AI systems that can erode public trust in applications across all sectors of the economy and proposes a three-stage framework for reducing such biases. The National Institute of Standards and Technology intentionally focuses on how AI systems are designed, developed, and used, and on the societal context in which they operate, rather than on specific technical fixes for bias. Its framework accordingly aims to enable users of AI systems to identify and mitigate bias more effectively through engagement across diverse disciplines and stakeholders, including those most directly affected by biased models. The proposal represents a step by NIST toward the development of standards for trustworthy and responsible AI. NIST is accepting comments on the framework until August 5, 2021.
The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect the data used in AI and machine learning systems in two ways: by undermining the accuracy of the data itself and by producing real differences in the quality an algorithm tries to measure, such as creditworthiness. Hellman argues that we must address both of these consequences of injustice, not just the first, in order to achieve algorithmic fairness.
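To make the two channels concrete, here is a toy simulation, not from the paper, in which past injustice both corrupts the recorded data (the first channel) and depresses the true quality being measured (the second); correcting the data alone removes the first gap but leaves the second intact, which is the author's point.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    disadvantaged = rng.integers(0, 2, n).astype(bool)

    # Channel 2: injustice produces a real gap in the quality itself
    # (e.g., creditworthiness shaped by unequal access to wealth).
    true_quality = rng.normal(0.0, 1.0, n) - 0.5 * disadvantaged

    # Channel 1: injustice also corrupts the data about that quality
    # (the recorded proxy is biased further downward for the group).
    recorded = true_quality + rng.normal(0.0, 0.3, n) - 0.4 * disadvantaged

    # "Fixing the data" removes the measurement bias of channel 1 ...
    debiased = recorded + 0.4 * disadvantaged

    # ... but the channel-2 gap in true_quality remains (about -0.5).
    gap_true = true_quality[disadvantaged].mean() - true_quality[~disadvantaged].mean()
    gap_fixed = debiased[disadvantaged].mean() - debiased[~disadvantaged].mean()
    print(f"true gap: {gap_true:+.2f}; gap after de-biasing: {gap_fixed:+.2f}")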