Topic: Mitigating Harm + Bias
-
Less Discriminatory Algorithms
The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.
-
Prioritizing Access and Safety Q&A on Service Design in Digital Identity
The Digital Benefit Network's Digital Identity Community of Practice held a session with civil rights technologists and human-centered design practitioners on ways to ensure program security while promoting equity, enabling accessibility, and minimizing bias.
-
Enabling Principles for AI Governance
A report from the Center for Security and Emerging Technology (CSET) discussing enabling principles for artificial intelligence (AI) governance.
-
Task Force on Artificial Intelligence, Emerging Technology, and Disability Benefits: Phase One Report
This report offers a detailed assessment of how AI and emerging technologies could impact the Social Security Administration’s disability benefits determinations, recommending guardrails and principles to protect applicant rights, mitigate bias, and promote fairness.
-
Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
-
InnovateUS AI Workshop Archive
A comprehensive series of workshops and courses designed to equip public sector professionals with the knowledge and skills to responsibly integrate AI technologies into government operations.
-
Access Denied: Faulty Automated Background Checks Freeze Out Renters
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
-
The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
-
Digital Welfare States and Human Rights
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
-
NIST AI Risk Management Framework (RMF 1.0)
NIST has created a voluntary AI risk management framework, in partnership with the public and private sectors, to promote trustworthy AI development and use.
-
Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors argue that predictive optimization raises concerns that can make its use illegitimate, and they challenge claims about its accuracy, efficiency, and fairness.
-
Guidance for Inclusive AI
This resource helps individuals align their work with the needs of the communities they wish to serve while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.