In early 2023, Wired magazine published a four-part series examining the use of algorithms to identify fraud in public benefits programs and the potential harms of these systems, with an in-depth look at cases from Europe.
This resource provides a unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies in support of practical AI governance.
This report by EPIC investigates how automated decision-making (ADM) systems are used across Washington, D.C.’s public services and the resulting impacts on equity, privacy, and access to benefits.
This toolkit provides guidance to protect participant confidentiality in human services research and evaluation, including legal frameworks, risk assessment strategies, and best practices.
U.S. Department of Health and Human Services (HHS)
This guidance outlines how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
This academic article develops a framework for evaluating whether and how automated decision-making in welfare systems introduces new harms and burdens for claimants, drawing on an example case from Germany.
This report examines how governments use AI systems to allocate public resources and provides recommendations to ensure these tools promote equity, transparency, and fairness.
This report offers a detailed assessment of how AI and emerging technologies could impact the Social Security Administration’s disability benefits determinations, recommending guardrails and principles to protect applicant rights, mitigate bias, and promote fairness.