Author: Daniel E. Ho
-
Automation + AI | The Privacy-Bias Trade-Off
Safeguarding privacy and addressing algorithmic bias can pose an under-recognized trade-off. This brief documents that trade-off by examining the U.S. government’s recent efforts to introduce government-wide equity assessments of federal programs. The authors propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.
-
Automation + AI | Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains
In May 2020, Stanford HAI hosted a workshop on the performance of facial recognition technologies, bringing together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper the workshop produced seeks to answer key questions and to improve understanding of this rapidly changing space.
-
Automation + AI | Domain Shift and Emerging Questions in Facial Recognition Technology
Based on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society, this policy brief offers policymakers recommendations on both the computational and human sides of facial recognition technologies.
-
Automation + AI | The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
An emerging concern in algorithmic fairness is the tension with privacy interests. Data minimization can restrict access to protected attributes, such as race and ethnicity, for bias assessment and mitigation. This paper examines how this “privacy-bias tradeoff” has become an important battleground for fairness assessments in the U.S. government and provides rich lessons for resolving these tradeoffs.
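The tension the paper describes can be made concrete: a disparity assessment needs the protected attribute, while data minimization may strip that attribute from the record, leaving only noisier proxy estimates. The sketch below is purely illustrative and not drawn from the paper; the column names, the toy data, and the impute_race_by_proxy fallback (a crude stand-in for proxy methods such as BISG-style surname imputation used in the fairness literature) are all assumptions.

```python
# Illustrative sketch of the privacy-bias tradeoff (not from the paper).
# A demographic disparity check needs the protected attribute; under data
# minimization that column may be absent, forcing a noisier proxy estimate.
from collections import defaultdict

def approval_rate_by_group(records, group_key="race"):
    """Positive-outcome rate per group; fails if the attribute was minimized away."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        if group_key not in r:
            raise KeyError(f"'{group_key}' was removed by data minimization")
        totals[r[group_key]] += 1
        approvals[r[group_key]] += r["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def impute_race_by_proxy(record, surname_priors):
    """Hypothetical stand-in for a proxy method (e.g., BISG-style imputation):
    pick the most likely group given the surname. Real proxies return full
    probability distributions and carry their own error and bias."""
    dist = surname_priors.get(record["surname"], {"unknown": 1.0})
    return max(dist.items(), key=lambda kv: kv[1])[0]

# Toy data: full records vs. minimized records with race removed.
full = [
    {"surname": "Garcia", "race": "hispanic", "approved": 0},
    {"surname": "Smith",  "race": "white",    "approved": 1},
    {"surname": "Smith",  "race": "white",    "approved": 1},
    {"surname": "Garcia", "race": "hispanic", "approved": 1},
]
minimized = [{k: v for k, v in r.items() if k != "race"} for r in full]

# Hypothetical surname -> group priors a proxy method might use.
priors = {"Garcia": {"hispanic": 0.9, "white": 0.1},
          "Smith":  {"white": 0.8, "black": 0.2}}

print(approval_rate_by_group(full))        # direct assessment works
try:
    approval_rate_by_group(minimized)      # minimization blocks it
except KeyError as e:
    print("assessment blocked:", e)

for r in minimized:                        # fall back to a proxy estimate
    r["race"] = impute_race_by_proxy(r, priors)
print(approval_rate_by_group(minimized))   # recovered, but with proxy error
```

The proxy route is not free: imputed attributes carry their own measurement error, which is one reason the tension reads as a genuine tradeoff rather than a workaround.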
-
Automation + AI | Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies
Little is known about how agencies are currently using AI systems, and little attention has been devoted to how agencies acquire such tools or oversee their use.