Topic: Mitigating Harm + Bias
-
Automation + AI Digital Welfare States and Human Rights
In this report, the UN Special Rapporteur critically examines the use of digital technologies in the administration of welfare programs across international contexts and makes recommendations for using technology responsibly and ethically.
-
Automation + AI Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
-
Automation + AI Popular Support for Balancing Equity and Efficiency in Resource Allocation
This article explores how online advertising algorithms produce biased outcomes between Spanish- and English-speaking audiences in SNAP outreach in California.
-
Automation + AI I Am Not a Number
In early 2023, Wired magazine ran a four-part series examining the use of algorithms to detect fraud in public benefits programs and the harms they can cause, with an in-depth look at cases from Europe.
-
Automation + AI Blueprint for an AI Bill of Rights
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
-
Automation + AI How a Small Legal Aid Team Took on Algorithmic Black Boxing at Their State’s Employment Agency (And Won)
In June 2022, Legal Aid of Arkansas won a significant victory in their ongoing work to compel Arkansas’ employment agency to disclose crucial details about how it uses automated decision-making systems to detect and adjudicate fraud.
-
Automation + AI NIST: AI Risk Management Framework
In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
-
Automation + AI Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities
This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
-
Automation + AI Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors argue that predictive optimization raises concerns that can make its use illegitimate, and they challenge claims about its accuracy, efficiency, and fairness.
-
Automation + AI Screened & Scored in the District of Columbia
This report explores how automated decision-making systems are being used in one jurisdiction: Washington, D.C.