The Brookings Institution

Related Content from The Brookings Institution


This article discusses developments in artificial intelligence regulation in the United States and the European Union that are bringing these regulatory regimes into closer alignment. The author focuses on recent rulemaking moves around artificial intelligence by the FTC, the EEOC, and other federal agencies, as well as a July 2021 executive order signed by President Biden, and explores the potential for US-EU cooperation through information sharing, joint regulatory sandboxes, and other initiatives.
This article identifies the absence of a clear definition of algorithmic bias as the primary reason for the lack of government regulation of AI algorithms. Once specific “goalposts” are established, the authors argue, regulators can provide sector- and use-case-specific guidance, insist on appropriate risk-management protocols, and hold companies responsible for instances of AI discrimination. To simplify oversight, the authors advocate an output-based standard that asks whether the algorithm’s predictions are accurate and equitable.
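
To make the proposed standard concrete, the following is a minimal sketch of what an output-based audit could look like in code. The function name, the sample data, and the use of the EEOC's "four-fifths" rule as the disparity threshold are illustrative assumptions, not details drawn from the article.

    import numpy as np

    def output_audit(y_true, y_pred, group):
        # Compare accuracy and selection rate across groups, assuming
        # binary predictions where 1 is the favorable outcome.
        per_group = {}
        for g in np.unique(group):
            mask = group == g
            per_group[str(g)] = {
                "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
                "selection_rate": float(y_pred[mask].mean()),
            }
        rates = [v["selection_rate"] for v in per_group.values()]
        # Four-fifths rule of thumb: flag disparate impact if the lowest
        # group's selection rate falls below 80% of the highest group's.
        passes_four_fifths = min(rates) >= 0.8 * max(rates)
        return per_group, passes_four_fifths

    # Example: audit a classifier's outputs for two groups
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(output_audit(y_true, y_pred, group))

An output-based check of this kind only inspects the model's predictions, not its internals, which is what makes the authors' standard attractive for regulators: it can be applied uniformly without access to proprietary model details.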
This article focuses on the role of open-source software (OSS) in the adoption and use of AI and machine learning, and argues that this critical infrastructure is not subject to adequate oversight. The significance of OSS is clear: it speeds AI adoption, helps reduce AI bias through tools such as open-source explainable AI, and can improve tech-sector competitiveness. However, OSS tools also carry risks; in particular, they can undermine that same competitiveness by giving a small number of technology companies an outsized role in determining AI standards. The paper contends that increased oversight of OSS tools is a critical step in emerging efforts to define fair and responsible use of AI.
The author considers the complexity of using algorithmic decision-making in policy-sensitive areas, such as setting criminal bail and sentences or adjudicating welfare benefit claims, and argues that advances in explainability techniques are necessary, but not sufficient, to resolve key questions about such decisions. She argues that the inherent complexity of the most powerful AI models, and our inability to reduce law and regulation to clearly stated optimization goals for an algorithm, reinforce the need for transparent governance by model users, especially when those users are government agencies.