Accountability of AI Under the Law: The Role of Explanation

This paper examines how AI systems can be held accountable through one particular mechanism: explanation. It focuses on eliciting explanations from AI systems at the right times to improve accountability, and reviews the societal, moral, and legal norms surrounding explanation. The paper concludes by arguing that, at present, AI systems can and should be held to a standard of explanation similar to that applied to humans, a standard that should adapt as circumstances change.

Berkman Center Research Publication Forthcoming; Harvard Public Law Working Paper No. 18-07

Finale Doshi-Velez, Mason Kortz, Ryan Budish, Christopher Bavitz, Samuel J. Gershman, David O’Brien, Stuart Shieber, Jim Waldo, David Weinberger, Alexandra Wood.