As a step toward improving the ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the sources of these biases: beyond the machine learning processes and data used to train AI software, to the broader societal factors that influence how the technology is developed. This recommendation is a core message of the revised NIST publication, which reflects public comments the agency received on the draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.
The publication describes common types of bias in AI systems that can lead to public distrust in applications across all sectors of the economy and proposes a three-stage framework for reducing such biases. NIST intentionally focuses on how AI systems are designed, developed, and used, and on the societal context in which they operate, rather than on specific technical fixes for bias. Accordingly, the framework aims to help users of AI systems identify and mitigate bias more effectively through engagement across diverse disciplines and stakeholders, including those most directly affected by biased models. The proposal represents a step by NIST toward standards for trustworthy and responsible AI. NIST accepted public comments on the draft framework until August 5, 2021.