The authors argue that machine learning models deployed in highly sensitive use cases or in heavily regulated sectors must be inherently interpretable. The paper proposes an approach for qualitatively assessing a model's interpretability based on its feature effects and the constraints imposed by its architecture.
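To make those two assessment inputs concrete, below is a minimal sketch, not the paper's procedure, of how a feature effect and an architecture constraint might look in practice. It assumes scikit-learn; the feature semantics, the monotonicity directions, and the toy data are all illustrative choices, not taken from the paper.

```python
# Minimal sketch (assumes scikit-learn): an architecture constraint
# (monotonicity) plus a feature-effect inspection (partial dependence).
# Feature meanings and constraint directions are hypothetical examples.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))  # two toy features
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

# Architecture constraint: force predictions to be monotonically
# increasing in feature 0 and decreasing in feature 1.
model = HistGradientBoostingRegressor(monotonic_cst=[1, -1]).fit(X, y)

# Feature effect: average partial dependence of the prediction on
# feature 0 over a grid; the constrained model yields a monotone curve
# that a reviewer can inspect qualitatively.
effect = partial_dependence(model, X, features=[0], grid_resolution=20)
print(effect["average"])
```

The point of the sketch is the pairing: the constraint restricts what shapes the feature effect can take, which is what makes a qualitative reading of the effect curve meaningful.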