Author: Ranjit Singh
-
Red-Teaming in the Public Interest
This report explores how red-teaming practices can be adapted for generative AI in ways that serve the public interest.
-
The Social Life of Algorithmic Harms
This series of essays seeks to expand our vocabulary of algorithmic harms to help protect against them.
-
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
-
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.