A report that defines what effective “human oversight” of AI looks like in public benefits delivery and offers practical guidance for ensuring accountability, equity, and trust in algorithmic systems.
This report examines how governments can effectively build, attract, and retain AI talent to responsibly integrate artificial intelligence into public service delivery.
These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace, and in valuing workers as the essential resources they are.
A workshop led by Elham Ali on integrating the principles of human-centered design and equity into Artificial Intelligence (AI) design, use, and evaluation.
This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
This report explores technologies that have the potential to significantly affect employment and job quality in the public sector, the factors that drive choices about which technologies are adopted and how they are implemented, how technology will change the experience of public sector work, and what kinds of interventions can protect against potential downsides of technology use in the public sector. The report sorts technologies into five overlapping categories: manual task automation, process automation, automated decision-making systems, integrated data systems, and electronic monitoring.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
The primer, originally prepared for the Progressive Congressional Caucus' Tech Algorithm Briefing, explores the trade-offs and debates about algorithms and accountability across several key ethical dimensions, including fairness and bias; opacity and transparency; and the lack of standards for auditing.