A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
This report investigates how D.C. government agencies use automated decision-making (ADM) systems and highlights their risks to privacy, fairness, and accountability in public services.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
This study examines public attitudes toward balancing equity and efficiency in algorithmic resource allocation, using online advertising for SNAP enrollment as a case study.
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues, and the common challenges these bodies face.
A policy brief outlining concrete actions states can take to regulate tenant screening practices and reduce harm from inaccurate reports, automated scoring, and discriminatory impacts in the rental housing market.
This toolkit provides guidance to protect participant confidentiality in human services research and evaluation, including legal frameworks, risk assessment strategies, and best practices.
U.S. Department of Health and Human Services (HHS)
The Digital Benefit Network's Digital Identity Community of Practice held a session to hear considerations from civil rights technologists and human-centered design practitioners on ways to ensure program security while promoting equity, enabling accessibility, and minimizing bias.
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems, along with helpful definitions related to automation.
The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.