Recommended Reads


This paper critiques traditional approaches to fair lending that restrict certain inputs, such as protected class information (race, gender, etc.), or that require identifying the inputs that cause disparities. Based on a simulation of algorithmic lending using mortgage lending data, the author argues that focusing on inputs fails to address core discrimination concerns. She also proposes an alternative fair lending framework designed to address the needs of algorithmic lenders and to recognize the potential limitations of explaining complex models.
This paper explores how model design choices can cause or exacerbate algorithmic biases, notwithstanding the common view that data are the predominant source of bias in machine learning systems. The author cites two important factors that constrain our ability to curb bias solely by improving the quality or scope of training data: the inherent messiness of real-world data and the limits on accurately anticipating which features in a model can cause bias. Model designers should therefore consider how their choices, such as the length of model training or the use of differential privacy techniques, can affect model accuracy for groups underrepresented in the data.
This article outlines the ways in which AI is being adopted by banks and describes the growing competitive pressure on these institutions to adopt AI technologies. Relevant use cases for AI include well-established applications like fraud detection as well as emerging uses like lending, where AI has potential to improve the accuracy and fairness of models, but poses more significant risks to consumers, firms, and investors.
This paper defines and differentiates between the concepts of explainability and interpretability for AI/ML systems. The author uses explainability to refer to the ability to describe the process that leads to an AI/ML algorithm’s output, and argues that it is of greater use to model developers and data scientists than interpretability. Interpretability refers to the ability to contextualize the model’s output based on its use case(s), value to the user, and other real-world factors, and is important to the users and regulators of AI/ML systems. The author argues that the recent proliferation of explainability technologies has resulted in comparatively little attention being paid to interpretability, which will be critical for emerging debates on how to regulate AI/ML systems.
This paper addresses the importance of situating explainable AI approaches within human social interactions to improve model transparency. The paper focuses on the concept of “social transparency,” which incorporates the context of those social interactions into explanations of AI systems. Drawing on interviews with AI users and practitioners, the paper offers a conceptual framework for identifying and measuring social transparency in order to improve AI decision making, increase trust in AI, and nurture broader values of AI explainability.
This paper analyzes the effect of forbearance programs and related credit reporting practices on consumers’ credit scores during the early stages of the pandemic, using data from March through September 2020. Focusing mainly on mortgage forbearances, it finds evidence of a positive effect on consumers’ credit scores but concludes that broader improvements in credit card utilization rates, particularly among consumers with low credit scores, contributed more to general credit score improvements during the downturn.
This study finds that nearly 30% of total debt relief in response to the COVID-19 pandemic was provided by the private sector, with the balance provided pursuant to government mandates focused on mortgage and student loans. Households with lower incomes and lower creditworthiness were more likely to obtain forbearance relief, as were households in areas with larger Black or Hispanic populations, higher infection rates, and more severe economic deterioration. The authors caution that, given the amount of accumulated postponed repayments, the winding down of forbearance measures and the subsequent structuring of debt repayments may have a significant impact on household debt distress and the aggregate economy.
Historian Jill Lepore tells the story of the Simulmatics Corporation as a case study in the Cold War origins of data science and of the technological, market, and political debates that shape our “data-mad” times. The company’s efforts throughout the 1960s to build a business on the power of prediction raise important questions about how its work affected democratic institutions, personal behavior, and conceptions of privacy.