Category: AI/Machine Learning

Accuracy of Explanations of Machine Learning Models for Credit Decisions

Read Paper

This white paper proposes a framework for using synthetic data sets to assess the accuracy of interpretability techniques applied to machine learning models in finance. The authors controlled true feature importance using a synthetic data set and then compared the outputs of two popular interpretability techniques, finding that the techniques varied in how well they identified the relevant features.

Andrés Alonso and José Manuel Carbó, Banco de España

Reducing the Black-White Homeownership Gap through Underwriting Innovations

Read Report

This study reviews recent mortgage market developments in the use of cash-flow information from bank accounts and utility, telecommunications, and rental payment histories. The report highlights issues concerning data collection, standardization, and consumer protection regulation when using non-traditional financial data sources, as well as the role of pricing, servicing, and regulation in determining whether the use of such data sources enhances racial equity.

Jung Hyun Choi et al., Urban Institute

Designing Inherently Interpretable Machine Learning Models

Read Paper

The authors argue that machine learning models deployed in highly sensitive use cases or highly regulated sectors require inherent interpretability. The paper provides an approach for qualitatively assessing the interpretability of models based on feature effects and model architecture constraints.

Agus Sudjianto and Aijun Zhang; Cornell University

Model Multiplicity: Opportunities, Concerns, and Solutions

Read Paper

The authors explore the implications of model multiplicity – the phenomenon in the development of machine learning models that produces several model specifications for a given task that differ in various ways but deliver equal accuracy.

Emily Black, Manish Raghavan, and Solon Barocas; FAccT ’22

Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending

Read Paper

The authors use a simulation to evaluate the impact of three different types of antidiscrimination laws in the context of non-mortgage fintech lending: laws that allow for the collection and use of credit applicants’ gender in AI underwriting models; laws that allow for the collection of gender but bar using that information as a feature in models used to extend consumer credit; and laws that prohibit the collection of such information.

Stephanie Kelley, Anton Ovchinnikov, David R. Hardoon; Manufacturing & Service Operations Management

Machine-Learning the Skill of Mutual Fund Managers

Read Report

This paper evaluates the reliability of neural networks in actively managed mutual fund applications. The authors conclude that neural networks identify important interaction effects that are not apparent to linear models and offer predictability that is “real-time, out-of-sample, long-lived, and economically meaningful.”

Ron Kaniel, Zihan Lin, Markus Pelger, and Stijn Van Nieuwerburgh, NBER

The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective

Read Paper

This paper explores whether, and to what degree, different post hoc explainability tools provide consistent information about model behavior. It seeks to identify, in specific scenarios, the reasons that drive disagreement in the outputs of these tools and potential ways to resolve such disagreements. The evaluation includes empirical analysis and a survey of how users of these tools contend with inconsistent outputs. The authors conclude that when explainability tools produce inconsistent information about model behavior, there are no established or consistent methods for resolving these disagreements, and they call for the development of principled evaluation metrics to more reliably identify when such disagreements occur and why.

Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Zhiwei Steven Wu, and Himabindu Lakkaraju, arXiv

Why AI Failed to Live Up to Its Potential During the Pandemic

Read Article

Despite initial optimism that AI and machine learning systems could aid various aspects of the response to Covid-19, many did not work as well as anticipated. This article highlights potential reasons for the underperformance of those systems, particularly reasons related to data. For example, machine learning tools intended for outbreak detection and diagnosis, trained on a variety of datasets, performed poorly at diagnosing Covid and predicting outbreaks. Looking ahead, the author focuses on solving these challenges by merging datasets from multiple sources and clarifying international rules for data sharing.

Bhaskar Chakravorti, Harvard Business Review

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

Read Report

As a step toward improving the ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases — beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed. The recommendation is a core message of this revised NIST publication, which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.

NIST

Consumer Financial Protection Bureau Outlines Options To Prevent Algorithmic Bias In Home Valuations

Read Outline

The CFPB recently released potential options for regulating automated valuation models used in home appraisals. Dodd-Frank amended the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) to require that automated valuation models meet certain quality control standards. The proposed rules are intended to implement these standards and, among other things, ensure that automated valuation models do not perpetuate existing biases in appraisal processes. The rulemaking options outlined by the CFPB are now being examined for their potential impact on small businesses as part of a required review process.

CFPB
