A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
This report investigates how D.C. government agencies use automated decision-making (ADM) systems and highlights their risks to privacy, fairness, and accountability in public services.
An in-depth report examining how states use automated eligibility algorithms for home and community-based services (HCBS) under Medicaid, and assessing their implications for access and fairness.
Errors in administrative processes are costly and burdensome for clients but remain understudied. Using U.S. Unemployment Insurance data, this study finds that while automation improves accuracy in simpler programs, it can increase errors in more complex ones.
A report from the State of California presenting an initial analysis of where generative AI (GenAI) may improve access to essential goods and services.
A workshop led by Elham Ali on integrating the principles of human-centered design and equity into Artificial Intelligence (AI) design, use, and evaluation.
The team developed an application that uses LLM APIs to simplify Medicaid and CHIP applications, addressing limitations such as hallucinations and outdated information by implementing a selective input process that supplies the models with clean, current data.
Louisiana issued an RFI to identify solutions that can provide a technology platform for determining eligibility and managing cases across multiple human services programs.
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.
In early 2023, Wired magazine ran four pieces examining the use of algorithms to identify fraud in public benefits and their potential harms, with a deep look at cases from Europe.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
Center for Security and Emerging Technology (CSET)
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.