This framework provides a structured approach for ensuring responsible and transparent use of AI systems across government, emphasizing governance, data integrity, performance evaluation, and continuous monitoring.
This study examines public attitudes toward balancing equity and efficiency in algorithmic resource allocation, using online advertising for SNAP enrollment as a case study.
A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
The team introduced an AI assistant that helps benefits navigators quickly assess client eligibility for benefits programs, streamlining the navigation process and improving outcomes.
This report explores how AI is currently used, and how it might be used in the future, to support administrative actions that agency staff complete when processing customers’ SNAP cases. In addition to desk and primary research, this brief was informed by input from APHSA’s wide network of state, county, and city members and national partners in the human services and related sectors.
American Public Human Services Association (APHSA)
In this piece, the Digital Benefits Network shares several sources—from journalistic pieces to reports and academic articles—that we’ve found useful and interesting in our reading on automation and artificial intelligence.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
This paper argues that a human rights framework could help orient the research on artificial intelligence away from machines and the risks of their biases, and towards humans and the risks to their rights, helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.
This guide, directed at poverty lawyers, explains automated decision-making systems so lawyers and advocates can better identify the source of their clients' problems and advocate on their behalf. The report covers key questions practitioners should ask about these systems.