This article addresses concerns about the ethical use of AI algorithms given their prominence in so many facets of daily life. The author argues that distinguishing the trustworthiness of claims made about an algorithm from claims made by an algorithm can improve how we evaluate individual algorithms and their uses and promote ‘intelligent transparency.’ He proposes a four-part framework for evaluating the trustworthiness of algorithms, inspired by the phased evaluation used in pharmaceutical development.
This paper analyzes whether unstructured digital data can substitute for traditional credit bureau scores, using loan-level data from a large Indian fintech firm. The researchers find that evaluating creditworthiness based on social and mobile footprints can potentially expand credit access. Variables that significantly improve default prediction and outperform credit bureau scores include the number and types of apps installed, metrics of the applicant’s social connectivity, and measures of borrowers’ “deep social footprints” derived from call logs.
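As a rough illustration of this kind of comparison, the sketch below fits a default model on a bureau score alone and then on the score plus footprint-style features. The feature names (num_apps, call_contacts) and the synthetic data are illustrative stand-ins, not the paper’s actual variables or results.

```python
# Sketch: compare default prediction from a bureau score alone vs. the score
# plus digital-footprint features. All names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

bureau_score = rng.normal(650, 60, n)      # traditional credit bureau score
num_apps = rng.poisson(40, n)              # apps installed on the device
call_contacts = rng.poisson(25, n)         # distinct contacts in call logs

# Synthetic outcome in which footprint features carry signal beyond the score.
logit = -0.01 * (bureau_score - 650) - 0.03 * (call_contacts - 25) - 1.5
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_bureau = bureau_score.reshape(-1, 1)
X_full = np.column_stack([bureau_score, num_apps, call_contacts])

for name, X in [("bureau score only", X_bureau), ("score + footprint", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
```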
This report from an explainable AI competition asks whether model developers actually need to rely on “black box” machine learning techniques or whether they can meet their needs using more interpretable forms of machine learning.
This paper describes the efforts of a team of researchers to develop a federated AML model for the UK Financial Conduct Authority’s Global Anti-Money-Laundering and Financial Crime TechSprint. The model was trained on data from several financial institutions and outperformed a conventional AML model in detecting potentially suspicious activity by 20%.
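The paper’s exact architecture is not reproduced here, but the sketch below shows federated averaging, the basic aggregation idea behind such models: each simulated bank trains locally on its own private transactions, and only model weights, never raw data, are shared and averaged.

```python
# Sketch: federated averaging for a shared logistic-regression AML model.
# Banks, data, and effect sizes are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one bank's private data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted suspicion probability
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

# Simulate three banks, each holding private transaction features and labels.
true_w = np.array([1.5, -2.0, 0.7])
banks = []
for _ in range(3):
    X = rng.normal(size=(400, 3))
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_w)))
    banks.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                            # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in banks]
    w_global = np.mean(local_weights, axis=0)  # server-side FedAvg step

print("federated weights:", np.round(w_global, 2))  # approaches true_w
```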
This article examines alternative fairness metrics from conceptual and normative perspectives with particular attention paid to predictive parity and error rate ratios. The article also questions the common view that anti-discrimination law prevents model developers from using race, gender, or other protected characteristics to improve the fairness and accuracy of the algorithms that they design.
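For readers unfamiliar with the metrics, the sketch below computes predictive parity (approximately equal positive predictive values across groups) and error-rate ratios for a binary classifier. The labels, predictions, and group assignments are random placeholders.

```python
# Sketch: predictive parity and error-rate ratios across two groups.
# All arrays below are illustrative placeholders, not real lending data.
import numpy as np

def group_metrics(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "ppv": tp / (tp + fp),   # precision: P(y = 1 | prediction = 1)
        "fpr": fp / (fp + tn),   # false positive rate
        "fnr": fn / (fn + tp),   # false negative rate
    }

rng = np.random.default_rng(2)
y_true = rng.binomial(1, 0.3, 1000)
y_pred = rng.binomial(1, 0.35, 1000)
group = rng.integers(0, 2, 1000)

m_a = group_metrics(y_true[group == 0], y_pred[group == 0])
m_b = group_metrics(y_true[group == 1], y_pred[group == 1])

# Predictive parity holds when PPVs are (approximately) equal across groups;
# error-rate ratios compare FPR and FNR between groups.
print("PPV ratio (A/B):", m_a["ppv"] / m_b["ppv"])
print("FPR ratio (A/B):", m_a["fpr"] / m_b["fpr"])
print("FNR ratio (A/B):", m_a["fnr"] / m_b["fnr"])
```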
This article surveys approaches for achieving interpretability in machine learning models and considers societal impacts of interpretability in sensitive, audited, and regulated deployments. The authors also propose metrics for measuring the quality of an explanation.
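The authors’ proposed metrics are not reproduced here, but as one common example of measuring explanation quality, the sketch below computes fidelity: how often an interpretable surrogate agrees with the black-box model it is meant to explain.

```python
# Sketch: fidelity of an interpretable surrogate to a black-box model.
# A shallow tree is trained to mimic the black box's predictions, and
# fidelity is the agreement rate (measured on the same data for brevity).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# Fit the surrogate to the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

fidelity = accuracy_score(bb_preds, surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2%}")
```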
This paper provides an overview of explainable machine learning as well as definitions of explainable artificial intelligence, examples of its usage, and details for responsible and human-centered use.
This report analyzes mortgage default using data from the JPMorgan Chase Institute’s housing finance research to evaluate how liquidity, equity, income level, and payment burden relate to default. Across all four factors, the report finds that liquidity may be the most predictive of mortgage default, particularly among borrowers with little post-closing liquidity, including those with little liquidity but high equity. Overall, the report concludes that alternative underwriting standards incorporating a minimum amount of post-closing liquidity may be more effective at preventing mortgage default than DTI thresholds at origination.
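A toy version of that comparison might look like the sketch below, which scores a DTI threshold rule against a post-closing-liquidity rule as single-variable default predictors. The data and effect sizes are hypothetical, not the Institute’s figures.

```python
# Sketch: comparing DTI at origination to post-closing liquidity as default
# predictors. Synthetic defaults are driven mostly by liquidity, mirroring
# the report's qualitative finding; all numbers here are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 10_000

dti = rng.uniform(0.1, 0.6, n)               # debt-to-income at origination
liquidity_months = rng.exponential(3.0, n)   # months of payments in reserve

logit = -1.0 * liquidity_months + 1.5 * dti - 1.0
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Score each single-variable rule by how well it ranks defaulters.
print("AUC, DTI rule:          ", round(roc_auc_score(default, dti), 3))
print("AUC, low-liquidity rule:", round(roc_auc_score(default, -liquidity_months), 3))
```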
This paper provides a nontechnical overview of common machine learning algorithms used in credit underwriting, reviewing the strengths and weaknesses of techniques such as tree-based models, support vector machines, and neural networks. The paper also considers the financial inclusion implications of machine learning, nontraditional data, and fintech.
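As a concrete instance of one technique the paper reviews, the sketch below trains a gradient-boosted tree ensemble on simulated applicant data; the features and data are illustrative only.

```python
# Sketch: a gradient-boosted tree model for credit underwriting of the kind
# the paper surveys. Simulated features stand in for applicant attributes
# (income, utilization, delinquencies, ...); nothing here is real loan data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced classes mimic the rarity of default.
X, y = make_classification(n_samples=5000, n_features=8, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")

# Tree ensembles expose feature importances, one interpretability hook often
# cited when weighing their strengths against neural networks.
print("feature importances:", model.feature_importances_.round(3))
```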
This essay discusses the legal requirements governing credit pricing and the architecture of machine learning algorithms, offering an overview of legislative gaps, possible legal solutions, and a framework for testing discrimination in algorithmic pricing rules. Using real-world mortgage data, the authors find that restricting the characteristics available to the algorithm can increase pricing gaps while having only a limited impact on disparity.
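An exclusion-style test of the kind the essay evaluates can be sketched as follows: refit the pricing model without the protected characteristic and compare group-level rate gaps. In this simulation a legitimate input proxies for the characteristic, so the gap barely moves, illustrating the “limited impact on disparity” half of the finding; the data are simulated, not the authors’ mortgage sample.

```python
# Sketch: an input-exclusion test for algorithmic pricing. Dropping the
# protected characteristic leaves the group-level rate gap nearly unchanged
# because a correlated input proxies for it. Simulated data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 20_000

group = rng.integers(0, 2, n)                      # protected characteristic
credit_factor = rng.normal(0, 1, n) + 0.6 * group  # legitimate, correlated input
rate = 4.0 + 0.5 * credit_factor + rng.normal(0, 0.2, n)

def group_gap(X):
    """Mean predicted-rate difference between the two groups."""
    pred = LinearRegression().fit(X, rate).predict(X)
    return pred[group == 1].mean() - pred[group == 0].mean()

X_with = np.column_stack([credit_factor, group])   # characteristic included
X_without = credit_factor.reshape(-1, 1)           # characteristic excluded

print("rate gap, characteristic included:", round(group_gap(X_with), 3))
print("rate gap, characteristic excluded:", round(group_gap(X_without), 3))
```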