In May 2020, Stanford's HAI hosted a workshop on the performance of facial recognition technologies that brought together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper produced by this workshop seeks to answer key questions and improve understanding of this rapidly changing space.
This policy brief offers recommendations to policymakers on both the computational and human sides of facial recognition technologies, drawing on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
This paper argues that a human rights framework could help orient research on artificial intelligence away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This resource shares lessons learned via the Medicaid Churn Learning Collaborative, which works to reduce Medicaid churn, improve renewal processes for administrators, and protect health insurance coverage for children and families.
Recent studies demonstrate that machine learning algorithms can discriminate on the basis of protected classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from diverse types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This report explores how, despite unresolved concerns, an audit-centered approach to algorithmic accountability is being rapidly mainstreamed into voluntary frameworks and regulations.
The guidelines for bias-free language contain both general guidelines for writing about people without bias across a range of topics and specific guidelines that address the individual characteristics of age, disability, gender, participation in research, racial and ethnic identity, sexual orientation, socioeconomic status, and intersectionality.