Topic: Automation + AI
-
Democratizing AI: Principles for Meaningful Public Participation
In this policy brief and video, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.
-
Automated Decision-Making Systems and Discrimination
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.
-
Government By Algorithm: Artificial Intelligence In Federal Administrative Agencies
This report addresses a gap in public knowledge: little is known about how federal agencies currently use AI systems, and little attention has been devoted to how agencies acquire such tools or oversee their use.
-
Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains
In May 2020, Stanford's HAI hosted a workshop on the performance of facial recognition technologies, bringing together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper this workshop produced seeks to answer key questions and improve understanding of this rapidly changing space.
-
Defining and Demystifying Automated Decision Systems
Automated decision systems (ADS) are increasingly used in government decision-making but lack clear definitions, oversight, and accountability mechanisms.
-
How a Small Legal Aid Team Took on Algorithmic Black Boxing at Their State’s Employment Agency (And Won)
This report investigates how D.C. government agencies use automated decision-making (ADM) systems and highlights their risks to privacy, fairness, and accountability in public services.
-
Using Artificial Intelligence to De-Jargon Government Language
A training course on using artificial intelligence (AI) tools to de-jargonize government language, with a tutorial on turning a complex piece of government writing into simpler and easier-to-understand language for government employees and residents alike.
-
Framing the Risk Management Framework: Actionable Instructions by NIST in their “Govern” Section
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
-
Human-Centered, Machine-Assisted: Ethically Deploying AI to Improve the Client Experience
In this interview, Code for America staff members share how client success, data science, and qualitative research teams work together to consider the responsible deployment of artificial intelligence (AI) in responding to clients who seek assistance with three products.
-
Dispelling Myths About Artificial Intelligence for Government Service Delivery
The Center for Democracy and Technology's brief clarifies misconceptions about artificial intelligence (AI) in government services, emphasizing the need for precise definitions, awareness of AI's limitations, recognition of inherent biases, and acknowledgment of the significant resources required for effective implementation.
-
Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
-
Controlling Large Language Models: A Primer
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.