FinRegLab, in partnership with the U.S. Department of Commerce, the National Institute of Standards and Technology (NIST), and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), is hosting a symposium that brings together leaders from government, industry, civil society, and academia to explore the opportunities and challenges posed by deploying artificial intelligence and machine learning across economic sectors, with a particular focus on financial services and healthcare.
The authors argue that machine learning models used in highly sensitive applications or heavily regulated sectors require inherent interpretability. The paper offers an approach for qualitatively assessing a model's interpretability based on its feature effects and the constraints imposed by its architecture.