Topic: Automation + AI
-
Automation + AI
Using open-source LLMs to optimize government data
Companies have been developing and using artificial intelligence (AI) for decades, but growth has been exponential since OpenAI released ChatGPT, a chatbot built on a large language model (LLM), in 2022. Open-source versions of these tools can help agencies optimize their processes and surpass current levels of data analysis, all in a secure environment that won't risk exposing sensitive information.
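To make the security point concrete, below is a minimal sketch of what running an open-source LLM on agency-controlled infrastructure can look like, using the Hugging Face transformers library. The model name and prompt are illustrative assumptions, not recommendations from the piece; the property being shown is that inference runs locally, so records are never sent to a third-party API.

# Minimal sketch: running an open-source LLM entirely on local
# infrastructure with Hugging Face transformers, so sensitive data
# never leaves the agency's environment. The model name and prompt
# are illustrative assumptions, not part of the original piece.
from transformers import pipeline

# Weights are downloaded once; all inference then runs on this
# machine. Nothing is sent to an external API.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

prompt = (
    "Summarize the key data-quality issues in the following case "
    "notes:\n\nApplicant reported a change in household income in "
    "March; the recorded value conflicts with the uploaded pay stub."
)

result = generator(prompt, max_new_tokens=150, do_sample=False)
print(result[0]["generated_text"])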
-
Automation + AI
What the Digital Benefits Network is Reading on Automation
In this piece, the Digital Benefits Network shares several sources we've found useful and interesting in our reading on automation and artificial intelligence, from journalistic pieces to reports and academic articles.
-
Automation + AI
The Equitable Tech Horizon in Digital Benefits Panel
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
-
Automation + AI
Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains
In May 2020, Stanford HAI hosted a workshop on the performance of facial recognition technologies, bringing together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The resulting white paper seeks to answer key questions and to improve understanding of this rapidly changing space.
-
Automation + AI
Domain Shift and Emerging Questions in Facial Recognition Technology
This policy brief offers policymakers recommendations on the computational and human sides of facial recognition technologies, based on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
-
Automation + AI
A Human Rights-Based Approach to Responsible AI
This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
-
Automation + AI
Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
-
Automation + AI
The Social Life of Algorithmic Harms
This series of essays seeks to expand our vocabulary of algorithmic harms to help protect against them.
-
Automation + AI
The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
An emerging concern in algorithmic fairness is its tension with privacy interests: data minimization can restrict access to protected attributes, such as race and ethnicity, that are needed for bias assessment and mitigation. This paper examines how this "privacy-bias tradeoff" has become an important battleground for fairness assessments in the U.S. government and draws rich lessons for resolving it.
-
Digital Identity
Regulating Biometrics: Taking Stock of a Rapidly Changing Landscape
This post reflects on and excerpts from AI Now's 2020 report on biometrics regulation.
-
Automation + AI
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Recent studies demonstrate that machine learning algorithms can discriminate based on attributes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.
-
Automation + AI
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.