This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
In early 2023, Wired magazine ran a four-part series examining the use of algorithms to identify fraud in public benefits programs and the harms they can cause, with an in-depth look at cases from Europe.
This article examines how AI and Rules as Code are turning law into automated systems, and how governance focused on transparency, explainability, and risk management can keep these digital legal frameworks reliable and fair.
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)
This is a searchable tool that compiles and categorizes over 4,700 policy recommendations submitted in response to the U.S. government's 2025 Request for Information on artificial intelligence policy.
The report examines how AI deployments across state and local public administration, such as chatbots, voice transcription, content summarization, and eligibility automation, are reshaping government work.
This report examines how governments use AI systems to allocate public resources and provides recommendations to ensure these tools promote equity, transparency, and fairness.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
The Digital Benefit Network's Digital Identity Community of Practice held a session to hear from civil rights technologists and human-centered design practitioners about ways to ensure program security while promoting equity, enabling accessibility, and minimizing bias.