The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
The Digital Benefit Network's Digital Identity Community of Practice held a session to gather considerations from civil rights technologists and human-centered design practitioners on how to ensure program security while promoting equity, enabling accessibility, and minimizing bias.
This report offers a detailed assessment of how AI and emerging technologies could impact the Social Security Administration’s disability benefits determinations, recommending guardrails and principles to protect applicant rights, mitigate bias, and promote fairness.
The AI RMF Playbook, published by the National Institute of Standards and Technology (NIST), offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
This searchable tool compiles and categorizes over 4,700 policy recommendations submitted in response to the U.S. government's 2025 Request for Information on artificial intelligence policy.
NIST, in partnership with the public and private sectors, has created a voluntary AI risk management framework to promote trustworthy AI development and use.
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people's eligibility for, or the distribution of, public benefits. It draws key insights from those cases about what went wrong and analyzes the legal arguments plaintiffs have used to challenge the systems in court.
This report by EPIC investigates how automated decision-making (ADM) systems are used across Washington, D.C.’s public services and the resulting impacts on equity, privacy, and access to benefits.
This paper explores how legacy procurement processes in U.S. cities shape the acquisition and governance of AI tools, based on interviews with local government employees.
This report examines how the deployment of AI tools such as chatbots, voice transcription, content summarization, and eligibility automation across state and local public administration is reshaping government work.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.