Topic: Mitigating Harm + Bias
-
Automation + AI The Social Life of Algorithmic Harms
This series of essays seeks to expand our vocabulary of algorithmic harms to help protect against them.
-
Automation + AI Automated Decision-Making Systems and Discrimination
This guidebook introduces the risks of discrimination that arise when automated decision-making systems are used, and includes helpful definitions related to automation.
-
Automation + AI Digital Welfare States and Human Rights
In this report, the UN Special Rapporteur critically examines uses of digital technologies for the administration of welfare programs across international contexts, and makes recommendations for using technology responsibly and ethically.
-
Automation + AI Poverty Lawgorithms: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities
This guide, directed at poverty lawyers, explains automated decision-making systems so lawyers and advocates can better identify the source of their clients’ problems and advocate on their behalf, and covers key questions practitioners should ask about these systems.
-
Automation + AI Access Denied: Faulty Automated Background Checks Freeze Out Renters
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
-
Automation + AI Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
-
Automation + AI Popular Support for Balancing Equity and Efficiency in Resource Allocation
This article examines how online advertising algorithms used for SNAP outreach in California produce biased outcomes between Spanish and English speakers.
-
Automation + AI I Am Not a Number
In early 2023, Wired magazine ran a four-part series examining how algorithms are used to detect fraud in public benefits programs and the harms they can cause, with an in-depth look at cases from Europe.
-
Automation + AI NIST: AI Risk Management Framework
In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
-
Automation + AI Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities
This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
-
Automation + AI Blueprint for an AI Bill of Rights
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
-
Automation + AI How a Small Legal Aid Team Took on Algorithmic Black Boxing at Their State’s Employment Agency (And Won)
In June 2022, Legal Aid of Arkansas won a significant victory in their ongoing work to compel Arkansas’ employment agency to disclose crucial details about how it uses automated decision-making systems to detect and adjudicate fraud.