In May 2020, Stanford's HAI hosted a workshop on the performance of facial recognition technologies that brought together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper produced by this workshop seeks to answer key questions and improve understanding of this rapidly changing space.
In this policy brief and video, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.
This report explores how AI is currently used, and how it might be used in the future, to support administrative actions that agency staff complete when processing customers’ Supplemental Nutrition Assistance Program (SNAP) cases. In addition to desk and primary research, this report was informed by input from APHSA’s wide network of state, county, and city members and national partners in the human services and related sectors.
American Public Human Services Association (APHSA)
This report explores technologies that have the potential to significantly affect employment and job quality in the public sector, the factors that drive choices about which technologies are adopted and how they are implemented, how technology will change the experience of public sector work, and what kinds of interventions can protect against the potential downsides of these technologies. The report sorts technologies into five overlapping categories: manual task automation, process automation, automated decision-making systems, integrated data systems, and electronic monitoring.
NIST has created a voluntary AI risk management framework, developed in partnership with the public and private sectors, to promote the trustworthy development and use of AI.
National Institute of Standards and Technology (NIST)
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It distills key insights from these cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
This report offers a critical framework for designing algorithmic impact assessments (AIAs), drawing lessons from existing impact assessments in domains such as the environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
On December 5, 2022, an expert panel, including representatives from the White House, unpacked what is included in the Blueprint for an AI Bill of Rights and explored how to operationalize its guidance among consumers, developers, and other users designing and implementing automated decision systems.