This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This resource shares lessons learned via the Medicaid Churn Learning Collaborative, which works to reduce Medicaid churn, improve renewal processes for administrators, and protect health insurance coverage for children and families.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms and emphasizes the need for input from diverse forms of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
The guidelines for bias-free language contain both general guidelines for writing about people without bias across a range of topics and specific guidelines that address the individual characteristics of age, disability, gender, participation in research, racial and ethnic identity, sexual orientation, socioeconomic status, and intersectionality.
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.
This guide, directed at poverty lawyers, explains automated decision-making systems so that lawyers and advocates can better identify the source of their clients' problems and advocate on their behalf. Relevant for practitioners, it covers key questions around these systems.
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.