Library
Discover the latest innovations, learn about promising practices, and find out what’s coming next with best-in-class resources from trusted sources.
-
The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
An emerging concern in algorithmic fairness is the tension with privacy interests. Data minimization can restrict access to protected attributes, such as race and ethnicity, for bias assessment and mitigation. This paper examines how this “privacy-bias tradeoff” has become an important battleground for fairness assessments in the U.S. government and provides rich lessons for resolving these tradeoffs.
-
Regulating Biometrics: Taking Stock of a Rapidly Changing Landscape
This post reflects on and excerpts from AI Now's 2020 report on biometrics regulation.
-
Medicaid Strategies Making a Difference: A Spotlight on Rhode Island
This spotlight shares lessons learned via the Medicaid Churn Learning Collaborative, which works to reduce Medicaid churn, improve renewal processes for administrators, and protect health insurance coverage for children and families.
-
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. This academic study presents an approach to evaluate bias present in automated facial analysis algorithms and datasets.
-
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
-
Algorithmic Accountability: Moving Beyond Audits
This report explores how, despite unresolved concerns, an audit-centered approach to algorithmic accountability is being rapidly mainstreamed into voluntary frameworks and regulations.
-
Bias-Free Language
The guidelines for bias-free language contain both general guidelines for writing about people without bias across a range of topics and specific guidelines that address the individual characteristics of age, disability, gender, participation in research, racial and ethnic identity, sexual orientation, socioeconomic status, and intersectionality.
-
Automated Decision-Making Systems and Discrimination
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems, and includes helpful definitions related to automation.
-
Poverty Lawgorithms: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities
Directed at poverty lawyers and advocates, this guide explains automated decision-making systems and the key questions they raise, so practitioners can better identify the source of their clients' problems and advocate on their behalf.
-
Framing the Risk Management Framework: Actionable Instructions by NIST in their “Govern” Section
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
-
Executive Order No. 614: Establishing the Digital Accessibility and Equity Governance Board
This Executive Order from the Commonwealth of Massachusetts establishes the Digital Accessibility and Equity Governance Board.
-
Digital Welfare States and Human Rights
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.