AI/Machine Learning

The director and deputy director of the White House Office of Science and Technology Policy argue that, given the growth of AI technologies used for everything from hiring to determining creditworthiness, the United States needs a new AI “bill of rights” to articulate the rights and freedoms that individuals should enjoy in an AI and data-driven world. The Office is currently developing such a bill and has issued a public request for information about emerging AI technologies that affect the daily lives of Americans.
Against the backdrop of growing adoption of algorithmic decision-making, a team of researchers from the Financial Conduct Authority simulates the transition from logistic regression credit scoring models to ensemble machine learning models using credit file data for 800,000 UK borrowers. They find that the machine learning credit models are more accurate and that they neither amplify nor eliminate bias when fairness is assessed by overall accuracy and error rates for subgroups defined by race, gender, and other protected-class characteristics.
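The subgroup fairness criteria mentioned above (overall accuracy plus error rates computed separately for each protected-class subgroup) can be sketched in a few lines. This is a minimal illustration of the general idea, not the FCA team's actual methodology; the function name, the 0/1 label encoding, and the input format are assumptions.

```python
def subgroup_error_rates(y_true, y_pred, group):
    """Accuracy, false positive rate, and false negative rate per subgroup.

    y_true / y_pred are 0/1 labels (1 = favorable outcome, e.g. loan repaid
    or approved); group holds the subgroup label for each observation.
    """
    rates = {}
    for g in set(group):
        rows = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
        acc = sum(t == p for t, p in rows) / len(rows)
        neg = [p for t, p in rows if t == 0]  # truly unfavorable outcomes
        pos = [p for t, p in rows if t == 1]  # truly favorable outcomes
        fpr = sum(p == 1 for p in neg) / len(neg) if neg else None
        fnr = sum(p == 0 for p in pos) / len(pos) if pos else None
        rates[g] = {"accuracy": acc, "fpr": fpr, "fnr": fnr}
    return rates
```

Under criteria of this kind, a model is judged by how far the accuracy, FPR, and FNR values diverge between subgroups, rather than by any single aggregate score.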
This study focuses on deployment of AI around the world to inform emerging market norms and policy and to focus research on critical issues. It finds that in the five years since a previous report, the field of AI has made substantial progress in affecting real-world decision making and everyday life. These advances have enhanced focus on the legal, regulatory, and ethical challenges of responsibly deploying AI systems. The report calls for “sustained investment of time and resources” from government institutions to prepare for and foster “an equitable AI-infused world.”
This report surveys widely discussed mechanisms for promoting fairness, accountability, and transparency (FAT) of algorithmic systems and assesses options available to governments, internet platforms, and other stakeholders for promoting these characteristics. The authors call for greater focus on understanding how various mechanisms can be used in concert and for developing comprehensive FAT policies and standards.
This article identifies and explores a gap between commonly used statistical measures of fairness and the rulings and evidentiary standards of the European Court of Justice. The authors suggest that the current legal standards for bringing discrimination claims limit the potential for a standardized system of addressing algorithmic discrimination in the EU because they are too contextual and open to interpretation. Additionally, the authors argue that the law provides little guidance for cases in which algorithms, not humans, are the discriminators. The authors propose conditional demographic disparity as an appropriate statistical measure of fairness to harmonize legal and industry perspectives.
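Conditional demographic disparity, the measure the authors propose, compares a protected group's share of rejections with its share of acceptances within strata defined by a legitimate explanatory attribute (for example, loan purpose), then aggregates across strata. The sketch below is a hedged illustration of that idea only; the function names, the 0/1 outcome encoding, and the size-weighted aggregation are assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def demographic_disparity(outcomes, is_protected):
    """DD = protected group's share of rejections minus its share of acceptances.

    outcomes are 0/1 (1 = accepted); is_protected flags protected-group members.
    DD > 0 means the group is over-represented among rejections.
    """
    rejected = [p for o, p in zip(outcomes, is_protected) if o == 0]
    accepted = [p for o, p in zip(outcomes, is_protected) if o == 1]
    p_rej = sum(rejected) / len(rejected) if rejected else 0.0
    p_acc = sum(accepted) / len(accepted) if accepted else 0.0
    return p_rej - p_acc

def conditional_demographic_disparity(outcomes, is_protected, strata):
    """Average of within-stratum DD, weighted here by stratum size."""
    buckets = defaultdict(list)
    for o, p, s in zip(outcomes, is_protected, strata):
        buckets[s].append((o, p))
    n = len(outcomes)
    cdd = 0.0
    for rows in buckets.values():
        os_, ps_ = zip(*rows)
        cdd += (len(rows) / n) * demographic_disparity(os_, ps_)
    return cdd
```

Conditioning on a legitimate attribute lets a disparity that is fully explained by that attribute wash out, while disparity remaining within strata still registers.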
This article reports on an investigation by a team of journalists that found that applicants of color were significantly more likely than White applicants to be denied home loans based on nationwide 2019 data. In an analysis of more than 2 million loan applications, this disparity ranged from 40% more likely for Latino applicants to 80% more likely for Black applicants, despite comparable metrics and credit scores. Given these findings, the article questions the use of traditional credit scoring models and automated underwriting systems.
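The “40% more likely” and “80% more likely” figures above are relative denial rates. A small helper makes the arithmetic explicit; the counts in the usage note are invented round numbers for illustration, not the investigation's data.

```python
def denial_disparity(denied_grp, total_grp, denied_ref, total_ref):
    """Relative increase in denial likelihood versus a reference group.

    Returns e.g. 0.80 when the group's denial rate is 80% higher than the
    reference group's denial rate.
    """
    rate_grp = denied_grp / total_grp
    rate_ref = denied_ref / total_ref
    return rate_grp / rate_ref - 1.0
```

For example, with hypothetical counts of 180 denials per 1,000 applications for one group against 100 per 1,000 for the reference group, the function returns about 0.80, i.e. 80% more likely to be denied.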
This article focuses on the role of open-source software (OSS) in the adoption and use of AI and machine learning and argues that this critical infrastructure is not subject to adequate oversight. The significance of OSS is clear: it speeds AI adoption, can reduce AI bias through means such as open-source explainable AI tools, and can improve tech sector competitiveness. However, OSS tools also carry risks: they can instead reduce competitiveness by giving a small number of technology companies an outsized role in determining AI standards. The paper contends that increasing oversight of OSS tools is a critical step in emerging efforts to define fair and responsible use of AI.
This paper surveys existing and emerging frameworks for AI governance and points to common emphases with respect to reliability, transparency, accountability, and fairness. The authors argue that use of AI has intensified concerns about fairness and call for development of more specific and comprehensive standards by national, regional, and global standard-setting bodies to define fairness and to clarify the role of human intervention in the development and use of AI models in the financial system.
This report draws on existing literature and interviews with experts and other stakeholders to determine how automated systems can best support and include traditionally marginalized populations, focusing on the problem of algorithmic bias embedded in data and systems. It proposes a “Digital Bill of Rights” that articulates seven core rights designed to ensure that systems meet expectations for fairness, accountability, and transparency.
This publication considers common types of biases in AI systems that can lead to public distrust in applications across all sectors of the economy and proposes a three-stage framework for reducing such biases. The National Institute of Standards and Technology intentionally focuses on how AI systems are designed, developed, and used, and on the societal context in which these systems operate, rather than on specific solutions for bias. Accordingly, its framework aims to enable users of AI systems to identify and mitigate bias more effectively through engagement across diverse disciplines and stakeholders, including those most directly affected by biased models. The proposal is a step by NIST toward standards for trustworthy and responsible AI. NIST is accepting comments on the framework until August 5, 2021.