National Institute of Standards and Technology (NIST)

Related Content For National Institute of Standards and Technology (NIST)


As a step toward improving the ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend looking for the sources of these biases not only in the machine learning processes and data used to train AI software, but also in the broader societal factors that influence how the technology is developed. That recommendation is a core message of this revised NIST publication, which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.
This paper proposes a framework for identifying, measuring, and mitigating risks arising from the use of AI across a variety of sectors and use cases. The proposal addresses risks and oversight activities across the design, development, use, and evaluation of AI products, services, and systems, with the aim of promoting trust in AI-driven systems and business models while preserving flexibility for continued innovation. The paper is part of NIST's effort to develop standards for trustworthy and responsible AI.
This publication considers common types of bias in AI systems that can lead to public distrust in applications across all sectors of the economy, and proposes a three-stage framework for reducing such biases. NIST intentionally focuses on how AI systems are designed, developed, and used, and on the societal context in which these systems operate, rather than on specific solutions for bias. The framework therefore aims to enable users of AI systems to identify and mitigate bias more effectively through engagement across diverse disciplines and stakeholders, including those most directly affected by biased models. The proposal represents a step by NIST toward the development of standards for trustworthy and responsible AI. NIST is accepting comments on this framework until August 5, 2021.
This paper defines and differentiates between the concepts of explainability and interpretability for AI/ML systems. The author uses explainability to refer to the ability to describe the process that leads to an AI/ML algorithm’s output, and argues that it is of greater use to model developers and data scientists than interpretability. Interpretability refers to the ability to contextualize the model’s output based on its use case(s), value to the user, and other real-world factors, and is important to the users and regulators of AI/ML systems. The author argues that the recent proliferation of explainability technologies has resulted in comparatively little attention being paid to interpretability, which will be critical for emerging debates on how to regulate AI/ML systems.
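To make the distinction concrete, the following is a minimal Python sketch, assumed for this summary rather than drawn from the paper: a post-hoc feature-importance calculation is the kind of explainability artifact aimed at developers and data scientists, while interpretability concerns how the resulting prediction should be understood and acted on in its real-world context. The dataset, model, and scikit-learn tooling are illustrative assumptions.

    # Illustrative sketch only (assumed, not from the paper): a permutation-importance
    # "explanation" of a model's behavior, of the kind aimed at developers and data scientists.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Explainability in the paper's sense: describing the process that leads to the output.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)[:5]
    for name, score in top:
        print(f"{name}: {score:.3f}")

    # Interpretability in the paper's sense is not produced by this code: it is the question of
    # what a given prediction means for the people affected (e.g., how a clinician or regulator
    # should act on it), which depends on the use case and its real-world context.

The importance scores describe the model's internal behavior to its developers; deciding what an individual prediction should mean for a patient, user, or regulator remains a separate, context-dependent question, which is the gap the author argues has received too little attention.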