This article surveys approaches for achieving interpretability in machine learning models and considers the societal impact of interpretability in sensitive, audited, and regulated deployments. It also proposes metrics for measuring the quality of an explanation.