This article surveys approaches for achieving interpretability in machine learning models and considers societal impacts of interpretability in sensitive, audited, and regulated deployments. The authors also propose metrics for measuring the quality of an explanation.
This paper provides an overview of explainable machine learning, including definitions of explainable artificial intelligence, examples of its use, and guidance for responsible, human-centered application.
This paper provides a nontechnical overview of common machine learning algorithms used in credit underwriting, reviewing the strengths and weaknesses of techniques such as tree-based models, support vector machines, and neural networks, as illustrated in the sketch below. The paper also considers the financial inclusion implications of machine learning, nontraditional data, and fintech.
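As a rough illustration of the model families the review covers, the following sketch fits a tree-based ensemble, a support vector machine, and a neural network to synthetic credit-style data with scikit-learn. The dataset, model settings, and evaluation are invented for illustration and are not drawn from the paper.

```python
# A minimal sketch, assuming scikit-learn is available, of the three model
# families the review discusses: tree-based models, support vector machines,
# and neural networks. The data are synthetic stand-ins for an underwriting
# dataset (1 = default, 0 = repaid), not real mortgage or credit records.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced classification data standing in for loan outcomes.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "gradient boosted trees": GradientBoostingClassifier(random_state=0),
    "support vector machine": make_pipeline(StandardScaler(), SVC()),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=1000, random_state=0)),
}

# Fit each model family and report holdout accuracy for a rough comparison.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: holdout accuracy {model.score(X_test, y_test):.3f}")
```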
This paper provides an overview of the challenges and implications for the supervision and regulation of financial services with regard to the opportunities presented by BDAI technology: the phenomenon of big data (BD) used in conjunction with artificial intelligence (AI). The paper draws on market analyses and use cases to outline potential developments from industry and government perspectives, as well as the impact on consumers.
This paper maps twenty definitions of fairness for algorithmic classification problems, explains the rationale for each definition, and applies them to a single case study. The analysis demonstrates that the same fact pattern can be judged fair or unfair depending on which definition is applied.
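To make that divergence concrete, here is a minimal sketch, not drawn from the paper, in which the same set of lending decisions satisfies demographic parity (equal approval rates across groups) while violating equalized odds (equal true and false positive rates). All group labels, outcomes, and counts are invented for illustration.

```python
# Toy example of two fairness definitions disagreeing on the same decisions.
# Each applicant is (group, truly_creditworthy, approved); values are hypothetical.

def applicants(group, good_approved, good_denied, bad_approved, bad_denied):
    rows = []
    rows += [(group, 1, 1)] * good_approved   # creditworthy, approved
    rows += [(group, 1, 0)] * good_denied     # creditworthy, denied
    rows += [(group, 0, 1)] * bad_approved    # not creditworthy, approved
    rows += [(group, 0, 0)] * bad_denied      # not creditworthy, denied
    return rows

data = applicants("A", 5, 1, 0, 4) + applicants("B", 3, 1, 2, 4)

def rates(group):
    rows = [r for r in data if r[0] == group]
    positives = [r for r in rows if r[1] == 1]
    negatives = [r for r in rows if r[1] == 0]
    selection_rate = sum(r[2] for r in rows) / len(rows)
    tpr = sum(r[2] for r in positives) / len(positives)  # true positive rate
    fpr = sum(r[2] for r in negatives) / len(negatives)  # false positive rate
    return selection_rate, tpr, fpr

for g in ("A", "B"):
    sel, tpr, fpr = rates(g)
    print(f"group {g}: selection rate {sel:.2f}, TPR {tpr:.2f}, FPR {fpr:.2f}")

# Demographic parity compares selection rates; equalized odds compares TPR and FPR.
# Here both groups are approved at the same rate, so parity holds, yet their error
# rates differ, so equalized odds is violated: the same decisions look fair under
# one definition and unfair under another.
```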
This paper uses several types of machine learning models to predict credit risk from historical mortgage data. It finds that gains in predictive accuracy would likely lead to an increase in approvals across all demographic groups, but that average prices could rise for African-American and Hispanic borrowers because of differences in estimated risk.
This paper examines how AI systems should be held accountable, focusing on one method in particular: explanation. It considers how to elicit explanations from AI systems at the right time to improve accountability, and reviews societal, moral, and legal norms around explanation. The paper concludes by arguing that, at present, AI systems can and should be held to a standard of explanation similar to the one applied to humans, and that this standard should adapt as circumstances change.