Recommended Reads

Model Multiplicity: Opportunities, Concerns, and Solutions

Read Paper

The authors explore the implications of model multiplicity, the phenomenon in machine learning development whereby a single task yields several model specifications that differ in meaningful ways yet deliver equal accuracy. A brief illustrative sketch of the phenomenon follows the citation below.

Emily Black, Manish Raghavan, and Solon Barocas; FAccT ’22
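
As a rough, hypothetical illustration of model multiplicity (not drawn from the paper), the Python sketch below trains two different scikit-learn classifiers on the same synthetic task; the two models can reach near-identical accuracy while still disagreeing on a noticeable share of individual predictions.

```python
# Sketch of model multiplicity: two models with near-identical accuracy on
# the same task can still disagree on individual predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model_a = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
model_b = GradientBoostingClassifier(random_state=2).fit(X_tr, y_tr)

pred_a, pred_b = model_a.predict(X_te), model_b.predict(X_te)
print("accuracy A:", (pred_a == y_te).mean())
print("accuracy B:", (pred_b == y_te).mean())
# Even when the two accuracies are close, some individual cases receive
# different decisions depending on which model was deployed.
print("share of cases where A and B disagree:", (pred_a != pred_b).mean())
```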

Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending

Read Paper

The authors use a simulation to evaluate the impact of three types of antidiscrimination law in the context of non-mortgage fintech lending: laws that allow the collection and use of credit applicants’ gender in AI underwriting models; laws that allow the collection of gender but bar its use as a feature in models used to extend consumer credit; and laws that prohibit collecting that information at all. A simplified sketch of the three regimes appears after the citation below.

Stephanie Kelley, Anton Ovchinnikov, and David R. Hardoon; Manufacturing & Service Operations Management
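
To make the three regimes concrete, here is a hedged, purely synthetic sketch (not the paper’s simulation) using scikit-learn and pandas, with a made-up gender variable and credit features; it shows how a protected attribute can be used as a model feature, collected only for auditing, or never collected at all.

```python
# Hypothetical sketch of three ways a protected attribute can be handled
# in a credit-style model. Data and features are entirely synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                # hypothetical protected attribute
income = rng.normal(50 + 5 * gender, 10, n)   # synthetic income, correlated with gender
util = rng.uniform(0, 1, n)                   # synthetic credit utilization
default = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(0.05 * income - 3 * util))).astype(int)

df = pd.DataFrame({"gender": gender, "income": income, "util": util, "default": default})
train, test = train_test_split(df, test_size=0.3, random_state=0)

# Regime 1: gender is collected and used as a model feature.
feats_with = ["gender", "income", "util"]
m1 = LogisticRegression(max_iter=1000).fit(train[feats_with], train["default"])
print("regime 1 coefficient on gender:", m1.coef_[0][0])

# Regime 2: gender is collected but excluded from the features; it remains
# available for auditing approval rates by group after the fact.
feats_without = ["income", "util"]
m2 = LogisticRegression(max_iter=1000).fit(train[feats_without], train["default"])
approve2 = m2.predict_proba(test[feats_without])[:, 1] < 0.5
print("approval rate by group (auditable under regime 2):")
print(pd.Series(approve2).groupby(test["gender"].values).mean())

# Regime 3: gender is never collected. The model looks like regime 2,
# but the disparity check above is no longer possible.
```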

Machine-Learning the Skill of Mutual Fund Managers

Read Report

This paper evaluates the reliability of neural networks in applications involving actively managed mutual funds. The authors conclude that neural networks identify important interaction effects that are not apparent to linear models and offer predictability that is “real-time, out-of-sample, long-lived, and economically meaningful.” A toy illustration of the interaction-effect point appears below.

Ron Kaniel, Zihan Lin, Markus Pelger & Stijn Van Nieuwerburgh, NBER
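
A minimal sketch of the interaction-effect point, using synthetic data rather than the paper’s mutual fund data: a target driven by the product of two features is essentially invisible to a purely linear regression but is learnable by a small neural network.

```python
# Toy illustration: a linear model cannot capture a pure interaction
# (x1 * x2), while a small neural network can.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=4000)  # pure interaction effect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

# The linear model's out-of-sample R^2 stays near zero, while the network
# recovers most of the signal because it can represent x1 * x2.
print("linear R^2:", linear.score(X_te, y_te))
print("MLP    R^2:", mlp.score(X_te, y_te))
```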

The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective

Read Paper

This paper explores whether, and to what degree, different post hoc explainability tools provide consistent information about model behavior. It seeks to identify the specific scenarios and reasons that drive disagreement in the outputs of these tools and potential ways to resolve such disagreements. The evaluation includes empirical analysis and a survey of how users of these tools contend with inconsistent outputs. The authors conclude that when explainability tools produce inconsistent information about model behavior, practitioners have no established or consistent methods for resolving the disagreements, and they call for principled evaluation metrics that more reliably identify when such disagreements occur and why. A small example of this kind of disagreement appears below.

Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Zhiwei Steven Wu, and Himabindu Lakkaraju, arXiv
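
As a hedged illustration of the disagreement problem (far simpler than the explanation methods studied in the paper), the sketch below compares two common post hoc importance measures for the same scikit-learn model and reports how much their top-k feature sets overlap.

```python
# Two post hoc importance measures for the same model can rank features
# differently; top-k overlap is one simple way to quantify the disagreement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Explanation 1: impurity-based importances computed during training.
imp_a = model.feature_importances_
# Explanation 2: permutation importances computed on held-out data.
imp_b = permutation_importance(model, X_te, y_te, n_repeats=10,
                               random_state=0).importances_mean

k = 5
top_a = set(np.argsort(imp_a)[-k:])
top_b = set(np.argsort(imp_b)[-k:])
print(f"top-{k} feature overlap between the two explanations:",
      len(top_a & top_b) / k)
```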

Why AI Failed to Live Up to Its Potential During the Pandemic

Read Article

Despite initial optimism that AI and machine learning systems could aid various aspects of the response to Covid-19, many did not work as well as anticipated. This article highlights potential reasons for the underperformance of those systems, particularly reasons related to data. For example, machine learning tools used for outbreak detection and for assessing responses to available Covid remedies, trained on various datasets, did not perform well at diagnosing Covid or predicting outbreaks. Looking ahead, the author focuses on solving these challenges by merging datasets from multiple sources and clarifying international rules for data sharing.

Bhaskar Chakravorti, Harvard Business Review

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

Read Report

As a step toward improving the ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases — beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed. The recommendation is a core message of this revised NIST publication, which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.

NIST

Consumer Financial Protection Bureau Outlines Options To Prevent Algorithmic Bias In Home Valuations

Read Outline

The CFPB recently released potential options for regulating automated valuation models used in home appraisals. Dodd-Frank amended the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) to require that automated valuation models meet certain quality control standards. The proposed rules are intended to implement these standards and, among other things, ensure that automated valuation models do not reproduce existing biases in appraisal processes. The rulemaking options outlined by the CFPB are now being examined for their potential impact on small businesses as part of a required review process.

CFPB

The EU and U.S. are starting to align on AI regulation

Read Article

This article discusses developments in artificial intelligence regulation in the United States and the European Union that are bringing the two regulatory regimes into closer alignment. The author focuses on recent moves by the FTC, the EEOC, and various other federal agencies related to rulemaking around artificial intelligence, as well as a July 2021 executive order signed by President Biden, and explores the potential for cooperation through information sharing, joint regulatory sandboxes, and other initiatives.

Alex Engler, Brookings

Algorithms, Privacy, and the Future of Tech Regulation in California

Read Article

In a recent panel hosted by California 100, Stanford Institute for Economic Policy Research, and Stanford RegLab, participants discussed the current regulatory environment governing AI in California and how regulation can improve trust in AI systems. Panel members included Jeremy Weinstein, Stanford professor of political science; Jennifer Urban, UC Berkeley law professor and California Privacy Protection Agency board chair; and Ernestine Fu, California 100 commissioner and venture partner at Alsop Louie. Among other topics, the three discussed the need for algorithms to rely on high-quality data to prevent bias and the importance of providing consumers more power over the use of their data.

Sachin Waikar, Stanford Institute for Human-Centered Artificial Intelligence

The AI Public-Private Forum Final Report

Read Report

The Artificial Intelligence Public-Private Forum (AIPPF) final report explores how financial services firms can address the key challenges and barriers to AI adoption, as well as mitigate any potential risks. It presents key findings and examples of practice at three levels within AI systems: Data, Model Risk, and Governance. The report is the culmination of a year-long forum that brought together a diverse group of experts from across financial services, the tech sector and academia, along with public sector observers from other UK regulators and government.

The Bank of England and the Financial Conduct Authority
