This paper examines how AI systems should be held accountable, focusing on one method in particular: explanation. It considers how eliciting explanations from AI systems at the right time can improve accountability, and reviews the societal, moral, and legal norms surrounding explanation. The paper concludes by arguing that, at present, AI systems can and should be held to a standard of explanation similar to that applied to humans, and that this standard should adapt as circumstances change in the future.