Category: AI/Machine Learning

The EU and U.S. are starting to align on AI regulation

Read Article

This article discusses developments in artificial intelligence regulation in the United States and the European Union that are bringing the two regulatory regimes into closer alignment. The author focuses on recent rulemaking moves around artificial intelligence by the FTC, the EEOC, and other federal agencies, as well as a July 2021 executive order signed by President Biden, and explores the potential for cooperation through information sharing, joint regulatory sandboxes, and other initiatives.

Alex Engler, Brookings

Algorithms, Privacy, and the Future of Tech Regulation in California

Read Article

In a recent panel hosted by California 100, Stanford Institute for Economic Policy Research, and Stanford RegLab, participants discussed the current regulatory environment governing AI in California and how regulation can improve trust in AI systems. Panel members included Jeremy Weinstein, Stanford professor of political science; Jennifer Urban, UC Berkeley law professor and California Privacy Protection Agency board chair; and Ernestine Fu, California 100 commissioner and venture partner at Alsop Louie. Among other topics, the three discussed the need for algorithms to rely on high-quality data to prevent bias and the importance of providing consumers more power over the use of their data.

Sachin Waikar, Stanford Institute for Human-Centered Artificial Intelligence

The AI Public-Private Forum Final Report

Read Report

The Artificial Intelligence Public-Private Forum (AIPPF) final report explores how financial services firms can address the key challenges and barriers to AI adoption, as well as mitigate any potential risks. It presents key findings and examples of practice at three levels within AI systems: Data, Model Risk, and Governance. The report is the culmination of a year-long forum that brought together a diverse group of experts from across financial services, the tech sector, and academia, along with public sector observers from other UK regulators and government.

The Bank of England and the Financial Conduct Authority

Veritas Industry Consortium Assessment Methodologies for Responsible Use of AI by Financial Institutions

Read Article

To guide responsible use of AI by financial institutions, MAS has released five white papers detailing approaches to identifying and measuring Fairness, Ethics, Accountability and Transparency (FEAT) principles in AI systems. These methodologies reflect extended collaboration with 27 institutions in the Veritas consortium. The release includes a FEAT checklist and an open-source toolkit to help users automate certain fairness metrics and visualization techniques.
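The summary above notes that the open-source toolkit automates certain fairness metrics. As a rough illustration of what such a metric computes, the sketch below implements demographic parity difference, one commonly used fairness measure: the gap in approval rates between two groups. This is a generic, hypothetical example, not code from the Veritas toolkit itself.

```python
# Minimal sketch of one common fairness metric: demographic parity
# difference, the gap in approval rates between two groups.
# Illustrative only; not the Veritas open-source toolkit.

def demographic_parity_difference(predictions, groups):
    """Absolute approval-rate gap between the two groups in `groups`.

    predictions: list of 0/1 model decisions (1 = approve)
    groups: parallel list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for label in labels:
        decisions = [p for p, g in zip(predictions, groups) if g == label]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Group A is approved 75% of the time, group B 25% of the time.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near zero indicates similar approval rates across groups; real toolkits typically report several such metrics side by side, since no single number captures fairness.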

Monetary Authority of Singapore (MAS)

An AI Fair Lending Policy Agenda for the Federal Financial Regulators

Read Brief

This policy brief from leaders of the National Fair Housing Alliance and FairPlay.AI proposes specific steps regulators can take to ensure that use of AI advances financial inclusion and fairness and to improve regulatory oversight, especially in consumer lending markets.

Michael Akinwumi, John Merrill, Lisa Rice, Kareem Saleh, and Maureen Yap, Brookings Center on Regulation and Markets

Big Data and Compounding Injustice

Read Paper

The author discusses how big data and algorithmic decision-making can compound unfairness. Past injustice can affect data used in AI/machine learning systems in two ways: by undermining the accuracy of the data itself and by producing real differences in the quality an algorithm tries to measure, such as creditworthiness. Hellman argues that we must address both of these consequences of injustice, not just the first, in order to achieve algorithmic fairness.

Deborah Hellman, Journal of Moral Philosophy (forthcoming)

Cracking Open the Black Box: Promoting Fairness, Accountability, and Transparency Around High-Risk AI

Read Report

This report surveys widely discussed mechanisms for promoting fairness, accountability, and transparency (FAT) of algorithmic systems and assesses options available to governments, internet platforms, and other stakeholders for promoting these characteristics. The authors call for greater focus on understanding how various mechanisms can be used in concert and for developing comprehensive FAT policies and standards.

Spandana Singh and Leila Doty, New America

AI Adoption Accelerated During the Pandemic but Many Say It’s Moving Too Fast: KPMG Survey

Read Survey

A recent KPMG survey of senior executives reports that the COVID-19 pandemic accelerated the rate of AI adoption across a variety of industries, including a 37% increase across various financial services uses. However, many business leaders expressed concern about the speed of adoption and would welcome new guidance and regulation to foster responsible use of AI.

Melanie Malluk Batley, KPMG

AI Risk Management Framework Concept Paper

Read Paper

This paper proposes a framework for identifying, measuring, and mitigating risks related to the use of AI across a variety of sectors and use cases. This risk management proposal addresses risks and oversight activities in the design, development, use, and evaluation of AI products, services, and systems. Its aim is to promote trust in AI-driven systems and business models while preserving flexibility for continued innovation. This paper is part of NIST’s effort to develop standards for trustworthy and responsible AI.

National Institute of Standards and Technology (NIST)

To Stop Algorithmic Bias, We First Have to Define It

Read Article

This article identifies the absence of a clear definition of algorithmic bias as the primary culprit for the lack of government regulation of AI algorithms. When specific "goalposts" are established, the authors argue, regulators can provide sector- and use-case-specific guidance, insist on appropriate risk management protocols, and hold companies responsible for instances of AI discrimination. The authors advocate for an output-based standard that focuses on identifying whether the algorithm's predictions are accurate and equitable, in order to simplify the oversight process.

Emily Bembeneck, Rebecca Nissan, and Ziad Obermeyer, Brookings
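An output-based standard of the kind the Brookings authors describe would audit what a model predicts rather than how it was built. The sketch below is a hypothetical illustration of such a check, comparing overall accuracy with the accuracy gap across groups; the function names and data are invented for the example and come from no regulator's toolkit.

```python
# Hypothetical sketch of an output-based audit: check whether predictions
# are (1) accurate overall and (2) similarly accurate across groups,
# without inspecting the model's internals.

def accuracy(preds, actuals):
    """Fraction of predictions that match the observed outcomes."""
    return sum(p == a for p, a in zip(preds, actuals)) / len(preds)

def group_accuracy_gap(preds, actuals, groups):
    """Largest accuracy gap between any two groups."""
    accs = []
    for label in set(groups):
        idx = [i for i, g in enumerate(groups) if g == label]
        accs.append(accuracy([preds[i] for i in idx],
                             [actuals[i] for i in idx]))
    return max(accs) - min(accs)

preds   = [1, 1, 0, 0, 1, 0, 0, 0]
actuals = [1, 0, 0, 0, 0, 1, 0, 0]
groups  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(preds, actuals))                    # 0.625 overall
print(group_accuracy_gap(preds, actuals, groups))  # 0.25 gap: A=0.75, B=0.5
```

In practice a regulator would set thresholds for both numbers per sector and use case, which is exactly the "goalpost"-setting the article calls for.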
