This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
Companies have been developing and using artificial intelligence (AI) for decades, but growth has accelerated sharply since OpenAI released ChatGPT, a large language model (LLM), in 2022. Open-source versions of these tools can help agencies optimize their processes and surpass current levels of data analysis, all in a secure environment that won't risk exposing sensitive information.
This study evaluates the use of RPA technology by three states to automate SNAP administration, focusing on repetitive tasks previously performed manually.
This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
Little is known about how agencies are currently using AI systems, and little attention has been devoted to how agencies acquire such tools or oversee their use.
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
This report provides an overview of artificial intelligence (AI), key policy considerations, and federal government activities related to AI development and regulation.
The state of Indiana developed a policy framework for the ethical and efficient use of artificial intelligence (AI) within state agencies. The policy adopts the National Institute of Standards and Technology's AI Risk Management Framework to manage potential risks effectively. It also details how actions undertaken by the Office of the Chief Data Officer (OCDO) enable the deployment of trustworthy AI systems.