This paper explores whether and to what degree different post hoc explainability tools provide consistent information about model behavior. It seeks to identify the scenarios and reasons that drive disagreement in these tools' outputs and potential ways to resolve such disagreements. The evaluation includes empirical analysis and a survey of how users of these tools contend with inconsistent outputs. The authors conclude that when explainability tools produce inconsistent information about model behavior, there are no established or consistent methods for resolving these disagreements, and they call for the development of principled evaluation metrics to more reliably identify when such disagreements occur and why.
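As a rough illustration of the kind of disagreement the paper studies (not the authors' own benchmark), the sketch below trains one model, asks two common attribution methods which features matter, and uses rank correlation as a simple, assumed agreement measure; the model, data, and method choices are all illustrative.

```python
# Minimal sketch: compare two feature-attribution methods on the same model
# and quantify how much their rankings disagree. Illustrative only.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Explanation" 1: impurity-based importance (global, model-specific view).
imp_gini = model.feature_importances_

# "Explanation" 2: permutation importance on held-out data (model-agnostic view).
imp_perm = permutation_importance(model, X_test, y_test, n_repeats=20,
                                  random_state=0).importances_mean

# One simple agreement measure: rank correlation of the two importance orderings,
# plus overlap in the top-3 features each method selects.
rho, _ = spearmanr(imp_gini, imp_perm)
top3_gini = set(np.argsort(imp_gini)[-3:])
top3_perm = set(np.argsort(imp_perm)[-3:])
print(f"Spearman rank correlation: {rho:.2f}")
print(f"Overlap in top-3 features: {len(top3_gini & top3_perm)} of 3")
```

Low rank correlation or little top-k overlap is the kind of signal the paper's proposed evaluation metrics would need to detect and explain more systematically.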
To guide responsible use of AI by financial institutions, the Monetary Authority of Singapore (MAS) has released five white papers detailing approaches to identifying and measuring the Fairness, Ethics, Accountability and Transparency (FEAT) principles in AI systems. These methodologies reflect extended collaboration with 27 institutions in the Veritas consortium. The release includes a FEAT checklist and an open-source toolkit to help users automate certain fairness metrics and visualization techniques.
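For readers unfamiliar with what such a toolkit automates, the sketch below computes two widely used group fairness metrics by hand; it does not use the Veritas toolkit's actual API, and the group labels, decisions, and metric choices are illustrative.

```python
# Minimal sketch of fairness metrics a toolkit like this can automate.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # observed outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])   # model decisions (approve = 1)
group  = np.array(["A", "A", "A", "A", "A",
                   "B", "B", "B", "B", "B"])         # protected attribute (illustrative)

a, b = group == "A", group == "B"

# Demographic parity difference: gap in approval rates between groups.
dp_diff = y_pred[a].mean() - y_pred[b].mean()

# Equal opportunity difference: gap in true positive rates between groups.
tpr_a = y_pred[a][y_true[a] == 1].mean()
tpr_b = y_pred[b][y_true[b] == 1].mean()
eo_diff = tpr_a - tpr_b

print(f"Demographic parity difference: {dp_diff:+.2f}")
print(f"Equal opportunity difference:  {eo_diff:+.2f}")
```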
This article discusses developments in artificial intelligence regulation in the United States and the European Union that are bringing the two regulatory regimes into closer alignment. The author focuses on recent moves by the FTC, the EEOC, and various other federal agencies related to rulemaking around artificial intelligence, as well as a July 2021 executive order signed by President Biden, and explores the potential for cooperation through information sharing, joint regulatory sandboxes, and other initiatives.
The Artificial Intelligence Public-Private Forum (AIPPF) final report explores how financial services firms can address the key challenges and barriers to AI adoption, as well as mitigate any potential risks. It presents key findings and examples of practice at three levels within AI systems: Data, Model Risk, and Governance. The report is the culmination of a year-long forum that brought together a diverse group of experts from across financial services, the tech sector, and academia, along with public sector observers from other UK regulators and government.
In a recent panel hosted by California 100, Stanford Institute for Economic Policy Research, and Stanford RegLab, participants discussed the current regulatory environment governing AI in California and how regulation can improve trust in AI systems. Panel members included Jeremy Weinstein, Stanford professor of political science; Jennifer Urban, UC Berkeley law professor and California Privacy Protection Agency board chair; and Ernestine Fu, California 100 commissioner and venture partner at Alsop Louie. Among other topics, the three discussed the need for algorithms to rely on high-quality data to prevent bias and the importance of providing consumers more power over the use of their data.
This paper proposes a framework for identifying, measuring, and mitigating risks related to the use of AI across a variety of sectors and use cases. This risk management proposal addresses risks and oversight activities in the design, development, use, and evaluation of AI products, services, and systems. Its aim is to promote trust in AI-driven systems and business models while preserving flexibility for continued innovation. This paper is part of NIST’s effort to develop standards for trustworthy and responsible AI.
This policy brief from leaders of the National Fair Housing Alliance and FairPlay.AI proposes specific steps regulators can take to ensure that use of AI advances financial inclusion and fairness and to improve regulatory oversight, especially in consumer lending markets.
This report examines broad implications of using AI in financial services. While recognizing the potentially significant benefits of AI for the financial system, the report argues that four types of challenges increase the importance of model transparency: data quality issues; model opacity; increased complexity in technology supply chains; and the scale of AI systems’ effects. The report suggests that model transparency has two distinct components: system transparency, where stakeholders have access to information about an AI system’s logic; and process transparency, where stakeholders have information about an AI system’s design, development, and deployment.
The authors argue that machine learning models used in highly sensitive use cases and/or highly regulated sectors require inherent interpretability. The paper provides an approach for qualitatively assessing the interpretability of models based on their feature effects and model architecture constraints.
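As one hedged illustration of a model architecture constraint of the kind the paper considers (not the authors' assessment procedure), the sketch below fits a gradient-boosted model constrained to be monotonic in one feature and then inspects the resulting feature effect; the data and constraint direction are illustrative.

```python
# Minimal sketch: a gradient-boosted model forced to be monotonically
# non-decreasing in feature 0, checked via its partial-dependence curve.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=2000, n_features=4, random_state=0)

# Constrain feature 0 to have a non-decreasing effect; leave the rest unconstrained.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0, 0, 0], random_state=0)
model.fit(X, y)

# Feature effect for the constrained feature: its averaged partial-dependence curve.
pd_result = partial_dependence(model, X, features=[0])
effect = pd_result["average"][0]

# The architecture constraint should make the averaged effect curve non-decreasing.
print("Monotone non-decreasing effect:", bool(np.all(np.diff(effect) >= -1e-9)))
```

Constraints like this make a model's feature effects easier to read and verify, which is the sense of "inherent interpretability" the paper's qualitative assessment targets.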
This article identifies the absence of a clear definition of algorithmic bias as the primary culprit behind the lack of government regulation of AI algorithms. When specific “goalposts” are established, the authors argue, regulators can provide sector- and use-case-specific guidance, insist on appropriate risk management protocols, and hold companies responsible for instances of AI discrimination. The authors advocate for an output-based standard, focused on whether the algorithm’s predictions are accurate and equitable, in order to simplify the oversight process.