Podcasts

RelationalAI – Molham Aref, CEO

We discuss causal inference in machine learning with Molham Aref, CEO of RelationalAI.

Melissa Koide

CEO & Director
FinRegLab

Prior to establishing FinRegLab, Melissa served almost five years as the U.S. Treasury Department’s Deputy Assistant Secretary for Consumer Policy. In that role, Melissa led the work to create the Department’s consumer policy positions and research on how banks and nonbanks were leveraging data and technology to improve consumers’ financial access and well-being. Melissa also helped to establish myRA, a government-offered retirement savings account. She has testified repeatedly before the Senate Banking and House Financial Services Committees and has spoken widely to policy, industry, and consumer-advocacy audiences. Melissa currently serves on the New York Fed’s Innovation Council, FINRA’s Fintech Industry Committee, and the New York State Department of Financial Services’ Financial Innovation Advisory Board.

Molham Aref

CEO
RelationalAI

Related Publications

  • Explainability & Fairness in Machine Learning for Credit Underwriting: Policy & Empirical Findings Overview

    This paper summarizes the machine learning project’s key empirical research findings and discusses the regulatory and public policy implications of the increasing use of machine learning models and of explainability and fairness techniques.


  • Machine Learning Explainability & Fairness: Insights from Consumer Lending

    This empirical white paper assesses the capabilities and limitations of available model diagnostic tools in helping lenders manage machine learning underwriting models. It focuses on the tools’ production of information relevant to adverse action, fair lending, and model risk management requirements.


  • The Use of Machine Learning for Credit Underwriting: Market & Data Science Context

    This report surveys market practice with respect to the use of machine learning underwriting models and provides an overview of the current questions, debates, and regulatory frameworks that are shaping adoption and use.


  • AI FAQs: The Data Science of Explainability

    This fourth edition of our FAQs focuses on emerging techniques to explain complex models and builds on prior FAQs that covered the use of AI in financial services and the importance of model transparency and explainability in the context of machine learning credit underwriting models.


  • AI FAQs: Federated Machine Learning in Anti-Financial Crime Processes

    This third edition of our FAQs considers the technological, market, and policy implications of using federated machine learning to improve risk identification across anti-financial crime disciplines, including in customer onboarding where it may facilitate more accurate and inclusive customer due diligence.


  • Explainability and Fairness in Machine Learning for Credit Underwriting

    FinRegLab worked with a team of researchers from the Stanford Graduate School of Business to evaluate the explainability and fairness of machine learning for credit underwriting. We focused on measuring the ability of currently available model diagnostic tools to provide information about the performance and capabilities of machine learning underwriting models. This research helps stakeholders…


About FinRegLab

FinRegLab is an independent, nonprofit organization that conducts research and experiments with new technologies and data to drive the financial sector toward a safe and responsible marketplace. The organization also facilitates discourse across the financial ecosystem to inform public policy and market practices. To receive periodic updates on the latest research, subscribe to FRL’s newsletter and visit www.finreglab.org. Follow FinRegLab on LinkedIn.

FinRegLab.org | 1701 K Street Northwest, Suite 1150, Washington, DC 20006