FinRegLab announced that it is conducting new empirical research to evaluate the inclusion impact of machine learning credit models, including those built with bank account data as well as traditional credit report information. The organization has also been invited by the Office of the Comptroller of the Currency (OCC) to co-chair a new Technology Working Group within the OCC’s Project REACh initiative.
This survey delves into challenges of federated machine learning beyond the potential security issues that could affect adoption in industries like financial services. For example, the authors consider how asymmetric data and communications systems might make building networks between heterogeneous institutions difficult and increase the costs of uploading and downloading models or portions of models. These considerations may be especially important in underserved and emerging markets.
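To make the communication-cost point concrete, the sketch below shows one way a round of federated averaging (FedAvg) might look; the institutions, data sizes, and weighting scheme are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

# Hypothetical setup: each institution keeps its data local and trains
# locally; only model weights ever cross the network.
rng = np.random.default_rng(0)
n_features = 20

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on local data."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

# Simulated institutions with asymmetric data volumes -- the kind of
# heterogeneity the survey flags as a practical barrier.
institutions = []
for n_rows in (5000, 800, 120):                # e.g. large bank, small bank, MFI
    X = rng.normal(size=(n_rows, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_rows) > 0).astype(float)
    institutions.append((X, y))

global_w = np.zeros(n_features)
for round_ in range(10):
    # Each round, every participant downloads the global model and
    # uploads an update -- the bandwidth cost the survey highlights.
    updates = [local_update(global_w, X, y) for X, y in institutions]
    sizes = np.array([len(y) for _, y in institutions], dtype=float)
    global_w = np.average(updates, axis=0, weights=sizes)  # FedAvg aggregation

bytes_per_round = global_w.nbytes * 2 * len(institutions)  # down + up per client
print(f"~{bytes_per_round} bytes exchanged per round for this toy model")
```

Because every round requires each participant to download and re-upload the full parameter vector, bandwidth constraints and asymmetric connectivity matter more as models grow, which is why these costs can weigh heaviest in underserved and emerging markets.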
A recent KPMG survey of senior executives reports that the COVID-19 pandemic accelerated the rate of AI adoption across a variety of industries, including a 37% increase across various financial services uses. However, many business leaders expressed concern about the pace of adoption and said they would welcome new guidance and regulation to foster responsible use of AI.
This study provides an update on mortgage market developments in the use of cash-flow information from bank accounts and of utility, telecommunications, and rental payment histories. The report highlights issues concerning data collection, standardization, and consumer protection regulation when using non-traditional financial data sources, as well as the role of pricing, servicing, and regulation in determining whether the use of such data sources enhances racial equity.
The authors explore the implications of model multiplicity – the phenomenon in machine learning development whereby several model specifications built for the same task differ in meaningful ways yet deliver equal accuracy.
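Model multiplicity is straightforward to reproduce. The following sketch (an illustration on synthetic data with scikit-learn, not the authors' experiment) trains several random forests that differ only in random seed, then shows they reach essentially the same accuracy while disagreeing on individual cases.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit data set -- purely illustrative.
X, y = make_classification(n_samples=4000, n_features=15, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Five models that differ only in the random seed used during training.
models = [RandomForestClassifier(random_state=s).fit(X_tr, y_tr)
          for s in range(5)]
preds = np.array([m.predict(X_te) for m in models])

for s, m in enumerate(models):
    print(f"seed {s}: test accuracy = {m.score(X_te, y_te):.3f}")

# Multiplicity: aggregate accuracy is essentially identical, yet the
# models disagree on some individual cases -- applicants whose outcome
# depends on an arbitrary development choice.
disagree = (preds != preds[0]).any(axis=0).mean()
print(f"share of cases where the models disagree: {disagree:.1%}")
```

The individuals on whom equally accurate models disagree are precisely those whose outcomes hinge on an arbitrary modeling choice, which is what gives the phenomenon its significance for lending.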
This white paper develops a framework for using synthetic data sets to assess the accuracy of interpretability techniques as applied to machine learning models in finance. The authors controlled the true importance of each feature by constructing a synthetic data set, then compared the outputs of two popular interpretability techniques to determine which was better at identifying the relevant features, finding that results varied between the two.
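A stripped-down version of that experimental design might look like the sketch below (an assumption-laden illustration, not the paper's actual benchmark): construct a data set where, by design, only the first three features drive the outcome, then test whether two common importance measures recover that ground truth.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Ground truth by construction: only features 0-2 drive the outcome;
# features 3-9 are pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = 3 * X[:, 0] + 2 * X[:, 1] + 1 * X[:, 2] + rng.normal(scale=0.5, size=3000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Technique 1: impurity-based importances (computed during training).
impurity = model.feature_importances_

# Technique 2: permutation importance on held-out data.
perm = permutation_importance(model, X_te, y_te, n_repeats=10,
                              random_state=0).importances_mean

# Because the true importances are known, each technique can be scored
# on whether it ranks the three informative features at the top.
for name, scores in [("impurity", impurity), ("permutation", perm)]:
    top3 = set(np.argsort(scores)[-3:])
    print(f"{name}: top-3 features = {sorted(top3)}, "
          f"correct = {top3 == {0, 1, 2}}")
```

The key design choice is that the synthetic data makes the "right answer" known in advance, so disagreement between techniques can be adjudicated rather than merely observed.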
The authors use a simulation to evaluate the impact of three different types of antidiscrimination laws in the context of non-mortgage fintech lending: laws that allow for the collection and use of credit applicants’ gender in AI underwriting models; laws that allow for the collection of gender but bar using that information as a feature in models used to extend consumer credit; and laws that prohibit the collection of such information.
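The three legal regimes map naturally onto three modeling setups. The sketch below is purely illustrative (synthetic data and a simple logistic model, not the authors' simulation) and shows what the lender can use and measure under each regime.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic applicant data -- illustrative only, not the authors' simulation.
rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, size=n)                # protected attribute
income = rng.normal(50 + 5 * gender, 10, size=n)   # feature correlated with it
util = rng.normal(0.4, 0.1, size=n)                # credit utilization
y = (income / 20 - util + rng.normal(size=n) > 1.5).astype(int)
features = np.column_stack([income, util])

# Regime 1: gender may be collected AND used as a model input.
m1 = LogisticRegression(max_iter=1000).fit(
    np.column_stack([features, gender]), y)

# Regime 2: gender is collected but barred as a model input; it remains
# available for after-the-fact fairness testing.
m2 = LogisticRegression(max_iter=1000).fit(features, y)
approvals = m2.predict(features)
gap = abs(approvals[gender == 1].mean() - approvals[gender == 0].mean())
print(f"regime 2 approval-rate gap (measurable): {gap:.3f}")

# Regime 3: collection is prohibited -- the model is identical to
# regime 2, but the lender cannot even compute the gap above.
m3 = LogisticRegression(max_iter=1000).fit(features, y)
```

The sketch makes the practical difference between the second and third regimes visible: the fitted models are identical, but only a lender that collected the attribute can test its models for disparate outcomes.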
Despite initial optimism that AI and machine learning systems could aid various aspects of the response to Covid-19, many did not work as successfully as anticipated. This article highlights potential reasons for the underperformance of those systems, particularly reasons related to data. For example, machine learning tools used for diagnosis and outbreak detection performed poorly because they were trained on fragmented and inconsistent datasets. Looking ahead, the authors focus on solving these challenges by merging datasets from multiple sources and clarifying international rules for data sharing.
As a step toward improving the ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the search for the sources of these biases beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed. This recommendation is a core message of the revised NIST publication, which reflects public comments the agency received on the draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.
The CFPB recently released potential options for regulating automated valuation models used in home appraisals. Dodd-Frank amended the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) to require that automated valuation models meet certain quality control standards. The proposed rules are intended to implement those standards and, among other things, to ensure that automated valuation models do not perpetuate existing biases in appraisal processes. The rulemaking options outlined by the CFPB are now being examined for their potential impact on small businesses as part of a required review process.