This report by EPIC investigates how automated decision-making (ADM) systems are used across Washington, D.C.’s public services and the resulting impacts on equity, privacy, and access to benefits.
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)
This searchable tool compiles and categorizes more than 4,700 policy recommendations submitted in response to the U.S. government's 2025 Request for Information on artificial intelligence policy.
This strategy document establishes a governance framework and roadmap to ensure responsible, trustworthy, and effective AI use across Canadian federal institutions.
Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both beneficial and harmful uses, with examples spanning authentication, adversarial attacks, deepfakes, generative models, and personalization, along with the ethical concerns these applications raise.
The Digital Benefit Network's Digital Identity Community of Practice held a session to hear from civil rights technologists and human-centered design practitioners on ways to ensure program security while promoting equity, enabling accessibility, and minimizing bias.
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors argue that predictive optimization raises concerns that can render its use illegitimate, and they challenge claims about its accuracy, efficiency, and fairness.
This article explores how AI and Rules as Code are turning law into automated systems, and how governance focused on transparency, explainability, and risk management can keep these digital legal frameworks reliable and fair.
The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.