Recommended Reads

Veritas Industry Consortium Assessment Methodologies for Responsible Use of AI by Financial Institutions

Read Article

To guide responsible use of AI by financial institutions, MAS has released five white papers detailing approaches to identifying and measuring Fairness, Ethics, Accountability and Transparency (FEAT) principles in AI systems. These methodologies reflect extended collaboration with 27 institutions in the Veritas consortium. The release includes a FEAT checklist and an open-source toolkit to help users automate certain fairness metrics and visualization techniques.

Monetary Authority of Singapore (MAS)
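The Veritas entry above mentions an open-source toolkit for automating certain fairness metrics. As a rough, hedged illustration only (not the Veritas toolkit's actual API), the sketch below computes one common metric of that kind, the ratio of favourable-outcome rates between two groups; the data, column names, and function name are all hypothetical.

```python
# Illustrative only: a generic fairness-metric calculation of the kind such
# toolkits automate, not code from the Veritas toolkit itself.
import pandas as pd

def favourable_rate_ratio(df, group_col, outcome_col, group_a, group_b):
    """Ratio of favourable-outcome rates for group_a relative to group_b.

    Assumes outcome_col holds 1 for a favourable decision (e.g. loan approved)
    and 0 otherwise. All names here are hypothetical.
    """
    rate_a = df.loc[df[group_col] == group_a, outcome_col].mean()
    rate_b = df.loc[df[group_col] == group_b, outcome_col].mean()
    return rate_a / rate_b

# Made-up example data:
applications = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "approved": [1, 0, 1, 1, 1, 0],
})
# Prints 1.0 for this toy data: both groups have a 2/3 approval rate.
print(favourable_rate_ratio(applications, "gender", "approved", "F", "M"))
```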

An AI Fair Lending Policy Agenda for the Federal Financial Regulators

Read Brief

This policy brief from leaders of the National Fair Housing Alliance and FairPlay.AI proposes specific steps regulators can take to ensure that use of AI advances financial inclusion and fairness and to improve regulatory oversight, especially in consumer lending markets.

Michael Akinwumi, John Merrill, Lisa Rice, Kareem Saleh, and Maureen Yap, Brookings Center on Regulation and Markets

Big Data and Compounding Injustice

Read Paper

The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect data used in AI/machine learning systems in two ways: by undermining the accuracy of the data itself and by producing real differences in the quality an algorithm tries to measure, like creditworthiness. Hellman argues that we must address both of these consequences of injustice, not just the first, in order to achieve algorithmic fairness.

Deborah Hellman, J. of Moral Philosophy (forthcoming)

Cracking Open the Black Box: Promoting Fairness, Accountability, and Transparency Around High-Risk AI

Read Report

This report surveys widely discussed mechanisms for promoting fairness, accountability, and transparency (FAT) of algorithmic systems and assesses options available to governments, internet platforms, and other stakeholders for promoting these characteristics. The authors call for greater focus on understanding how various mechanisms can be used in concert and for developing comprehensive FAT policies and standards.

Spandana Singh and Leila Doty, New America

AI Adoption Accelerated During the Pandemic but Many Say It’s Moving Too Fast: KPMG Survey

Read Survey

A recent KPMG survey of senior executives reports that the COVID-19 pandemic accelerated the rate of AI adoption across a variety of industries, including a 37% increase across various financial services uses. However, many business leaders expressed concern about the pace of adoption and said they would welcome new guidance and regulation to foster responsible use of AI.

Melanie Malluk Batley, KPMG

AI Risk Management Framework Concept Paper

Read Paper

This paper proposes a framework for identifying, measuring, and mitigating risks related to the use of AI across a variety of sectors and use cases. This risk management proposal addresses risks and oversight activities in the design, development, use, and evaluation of AI products, services, and systems. Its aim is to promote trust in AI-driven systems and business models while preserving flexibility for continued innovation. This paper is part of NIST’s effort to develop standards for trustworthy and responsible AI.

National Institute of Standards and Technology (NIST)

To Stop Algorithmic Bias, We First Have to Define It

Read Article

This article identifies the absence of a clear definition of algorithmic bias as the primary culprit behind the lack of government regulation of AI algorithms. When specific “goalposts” are established, the authors argue, regulators can provide sector- and use-case-specific guidance, insist on appropriate risk management protocols, and hold companies responsible for instances of AI discrimination. To simplify oversight, the authors advocate an output-based standard that focuses on whether an algorithm's predictions are accurate and equitable.

Emily Bembeneck, Rebecca Nissan & Ziad Obermeyer, Brookings

Algorithmic Fairness in Credit Scoring

Read Article

Against the backdrop of growing adoption of algorithmic decision-making, a team of researchers from the Financial Conduct Authority simulates the transition from logistic regression credit scoring models to ensemble machine learning models using credit file data for 800,000 UK borrowers. They find that the machine learning models are more accurate and that they neither amplify nor eliminate bias when fairness criteria focus on overall accuracy and error rates for subgroups defined by race, gender, and other protected class characteristics (a brief sketch of such a subgroup error-rate comparison follows below).

Teresa Bono, Karen Croxson & Adam Giles, Oxford Rev. of Econ. Pol’y
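The fairness criteria described in this entry compare error rates across protected-class subgroups. The sketch below is a rough illustration of that idea only, using synthetic labels and scikit-learn rather than the paper's actual data or methodology: it computes false positive and false negative rates separately for each subgroup so the rates can be compared across groups.

```python
# A hedged sketch of a subgroup error-rate comparison, with made-up data;
# not the FCA researchers' code or models.
import numpy as np
from sklearn.metrics import confusion_matrix

def subgroup_error_rates(y_true, y_pred, groups):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask],
                                          labels=[0, 1]).ravel()
        fpr = fp / (fp + tn) if (fp + tn) else float("nan")
        fnr = fn / (fn + tp) if (fn + tp) else float("nan")
        rates[g] = (fpr, fnr)
    return rates

# Synthetic labels, predictions, and group membership for illustration:
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(subgroup_error_rates(y_true, y_pred, groups))
# {'A': (0.0, 0.5), 'B': (0.5, 0.0)}
```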

Americans Need a Bill of Rights for an AI-Powered World

Read Article

The director and deputy director of the White House Office of Science and Technology Policy argue that, given the growth of AI technologies used for everything from hiring to determining creditworthiness, the United States needs a new AI “bill of rights” to articulate the rights and freedoms that individuals should enjoy in an AI- and data-driven world. The Office is currently developing such a bill and has issued a public request for information about new and developing AI technologies that affect the daily lives of Americans.

Eric Lander & Alondra Nelson, Wired

Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report

Read Study

This study focuses on deployment of AI around the world to inform emerging market norms and policy and to focus research on critical issues. It finds that in the five years since a previous report, the field of AI has made substantial progress in affecting real-world decision making and everyday life. These advances have enhanced focus on the legal, regulatory, and ethical challenges of responsibly deploying AI systems. The report calls for “sustained investment of time and resources” from government institutions to prepare for and foster “an equitable AI-infused world.”

Michael L. Littman et al., Stanford University
