This policy brief offers recommendations to policymakers on the computational and human dimensions of facial recognition technologies, drawing on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
In May 2020, Stanford's HAI hosted a workshop that brought together leading computer scientists, legal scholars, and representatives from industry, government, and civil society to discuss the performance of facial recognition technologies. The white paper produced by this workshop seeks to answer key questions and improve understanding of this rapidly changing space.
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
Little is known about how agencies are currently using AI systems, and little attention has been devoted to how agencies acquire such tools or oversee their use.