Service Delivery Area: Benefits
-
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Recent studies demonstrate that machine learning algorithms can discriminate based on classes such as race and gender. This academic study presents an approach for evaluating bias in automated facial analysis algorithms and datasets.
-
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts so that they closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
-
Algorithmic Accountability: Moving Beyond Audits
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
-
Bias-Free Language
The guidelines for bias-free language contain both general guidelines for writing about people without bias across a range of topics and specific guidelines that address the individual characteristics of age, disability, gender, participation in research, racial and ethnic identity, sexual orientation, socioeconomic status, and intersectionality.
-
Automated Decision-Making Systems and Discrimination
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.
-
POVERTY LAWGORITHMS: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities
This guide, directed at poverty lawyers, explains automated decision-making systems so that lawyers and advocates can better identify the source of their clients' problems and advocate on their behalf. It also covers key questions practitioners should ask about these systems.
-
Framing the Risk Management Framework: Actionable Instructions by NIST in their “Govern” Section
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
-
Digital Welfare States and Human Rights
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
-
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest
This project maps the challenges of constructing algorithmic impact assessments (AIAs) by analyzing impact assessments in other domains, from the environment to human rights to privacy, and identifies ten components needed for a robust impact assessment.
-
Access Denied: Faulty Automated Background Checks Freeze Out Renters
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
-
A Safety Net with 100 Percent Participation: How Much Would Benefits Increase and Poverty Decline?
This analysis explores the potential reduction in poverty rates across all U.S. states if every eligible individual received full benefits from seven key safety net programs, highlighting significant decreases in overall and child poverty.
-
Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.