This report explores how AI is currently used, and how it might be used in the future, to support administrative actions that agency staff complete when processing customers’ SNAP cases. In addition to desk and primary research, this brief was informed by input from APHSA’s wide network of state, county, and city members and national partners in the human services and related sectors.
American Public Human Services Association (APHSA)
NIST has created a voluntary AI Risk Management Framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
The primer, originally prepared for the Congressional Progressive Caucus' Tech Algorithm Briefing, explores the trade-offs and debates about algorithms and accountability across several key ethical dimensions, including fairness and bias; opacity and transparency; and the lack of standards for auditing.
In accordance with Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Federal agencies began publishing their first annual inventories of artificial intelligence (AI) use cases in June 2022.
This paper argues that a human rights framework could help orient the research on artificial intelligence away from machines and the risks of their biases, and towards humans and the risks to their rights, helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.
The Guide to Robotic Process Automation, including the RPA Playbook, provides detailed guidance for federal agencies starting a new RPA program or evolving an existing one.
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
This post argues that for the types of large-scale, organized fraud attacks that many state benefits systems experienced during the pandemic, solutions grounded in cybersecurity methods may be far more effective than creating or adopting automated systems.