Recommended Reads

Latest Recommended Reads

In a recent panel hosted by California 100, the Stanford Institute for Economic Policy Research, and Stanford RegLab, participants discussed the current regulatory environment governing AI in California and how regulation can improve trust in AI systems. Panel members included Jeremy Weinstein, Stanford professor of political science; Jennifer Urban, UC Berkeley law professor and California Privacy Protection Agency board chair; and Ernestine Fu, California 100 commissioner and venture partner at Alsop Louie. Among other topics, the three discussed the need for algorithms to rely on high-quality data to prevent bias and the importance of giving consumers more control over the use of their data.
This paper proposes a framework for identifying, measuring, and mitigating risks related to the use of AI across a variety of sectors and use cases. This risk management proposal addresses risks and oversight activities in the design, development, use, and evaluation of AI products, services, and systems. Its aim is to promote trust in AI-driven systems and business models while preserving flexibility for continued innovation. This paper is part of NIST’s effort to develop standards for trustworthy and responsible AI.
This policy brief from leaders of the National Fair Housing Alliance and FairPlay.AI proposes specific steps regulators can take to ensure that use of AI advances financial inclusion and fairness and to improve regulatory oversight, especially in consumer lending markets.
This report examines broad implications of using AI in financial services. While recognizing the potentially significant benefits of AI for the financial system, the report argues that four types of challenges increase the importance of model transparency: data quality issues; model opacity; increased complexity in technology supply chains; and the scale of AI systems’ effects. The report suggests that model transparency has two distinct components: system transparency, where stakeholders have access to information about an AI system’s logic; and process transparency, where stakeholders have information about an AI system’s design, development, and deployment.
This article identifies the absence of a clear definition of algorithmic bias as the primary culprit for the lack of government regulation of AI algorithms. When specific “goalposts” are established, the authors argue, regulators can provide sector- and use-case-specific guidance, insist on appropriate risk management protocols, and hold companies responsible for instances of AI discrimination. To simplify the oversight process, the authors advocate for an output-based standard that focuses on whether an algorithm’s predictions are accurate and equitable.
The director and deputy director of the White House Office of Science and Technology Policy argue that, given the growth of AI technologies used for everything from hiring to determining creditworthiness, the United States needs a new AI “bill of rights” to articulate the rights and freedoms that individuals should enjoy in an AI and data-driven world. The Office is currently working on developing such a bill and has issued a public request for information about new and developing AI technologies that affect the daily lives of Americans.
Against the backdrop of growing adoption of algorithmic decision-making, a team of researchers from the Financial Conduct Authority simulates the transition from logistic regression credit scoring models to ensemble machine learning models using credit file data for 800,000 UK borrowers. They find that machine learning credit models are more accurate, and that these models neither amplify nor eliminate bias when fairness criteria focus on overall accuracy and error rates for subgroups defined by race, gender, and other protected class characteristics.
This study examines the deployment of AI around the world to inform emerging market norms and policy and to focus research on critical issues. It finds that in the five years since a previous report, the field of AI has made substantial progress in shaping real-world decision-making and everyday life. These advances have heightened focus on the legal, regulatory, and ethical challenges of responsibly deploying AI systems. The report calls for “sustained investment of time and resources” from government institutions to prepare for and foster “an equitable AI-infused world.”