FinRegLab announced that it is conducting new empirical research to evaluate the inclusion impact of machine learning credit models, including those built with bank account data as well as traditional credit report information. The organization has also been invited by the Office of the Comptroller of the Currency (OCC) to co-chair a new Technology Working Group within the OCC’s Project REACh initiative.
This policy analysis explores in depth the regulatory and public policy implications of the increasing use of machine learning models and of explainability and fairness techniques in credit underwriting, particularly for model risk management, consumer disclosures, and fair lending compliance.
This paper summarizes the key empirical findings of the machine learning project and discusses the regulatory and public policy implications raised by the increasing use of machine learning models and explainability and fairness techniques.
This empirical white paper assesses the capabilities and limitations of available model diagnostic tools in helping lenders manage machine learning underwriting models, focusing on whether the tools produce information relevant to adverse action, fair lending, and model risk management requirements.
This report surveys market practice with respect to the use of machine learning underwriting models and provides an overview of the current questions, debates, and regulatory frameworks that are shaping adoption and use.
AI FAQS: The Data Science of Explainability
This fourth edition of our FAQs focuses on emerging techniques to explain complex models and builds on prior FAQs that covered the use of AI in financial services and the importance of model transparency and explainability in the context of machine learning credit underwriting models.
FinRegLab worked with a team of researchers from the Stanford Graduate School of Business to evaluate the explainability and fairness of machine learning for credit underwriting. We focused on measuring the ability of currently available model diagnostic tools to provide information about the performance and capabilities of machine learning underwriting models. This research helps stakeholders assess how machine learning models can be developed and used in compliance with regulatory expectations regarding model risk management, anti-discrimination, and adverse action reporting.
This third edition of our FAQs considers the technological, market, and policy implications of using federated machine learning to improve risk identification across anti-financial crime disciplines, including in customer onboarding, where it may facilitate more accurate and inclusive customer due diligence.
AI FAQS: Explainability in Credit Underwriting
This second edition of our FAQs considers in greater depth the issues and debates surrounding model transparency and explainability, as well as the implications of using machine learning for credit underwriting.
AI FAQS: Key Concepts
This first edition of our FAQs addresses a range of introductory questions about AI and machine learning.