The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors identify concerns that can make the use of predictive optimization illegitimate and challenge claims about its accuracy, efficiency, and fairness.
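To make the concept concrete, the sketch below (not drawn from the paper) shows a minimal, hypothetical predictive-optimization pipeline: a model is trained on historical records to predict a future outcome about individuals, and a decision is then allocated by ranking predicted risk. The data, features, and top-3 cutoff are invented purely for illustration.

```python
# Illustrative sketch only: the paper contains no code; all data and the
# decision rule below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: features describing individuals and whether
# the future outcome of interest later occurred (1) or not (0).
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score new individuals and allocate a decision to those with the highest
# predicted risk -- the pattern the paper refers to as predictive optimization.
X_new = rng.normal(size=(10, 4))
risk_scores = model.predict_proba(X_new)[:, 1]
flagged = np.argsort(risk_scores)[::-1][:3]  # three highest predicted-risk individuals
print("Flagged individuals (by index):", flagged)
```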
Artificial intelligence promises exciting new opportunities for government to make policy, deliver services, and engage with residents. But government procurement practices need to adapt if rapidly evolving AI tools are to meet their intended purposes, avoid bias, and minimize risks to people, organizations, and communities. This report lays out five distinct challenges related to procuring AI in government.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs. Source: U.S. Department of Health and Human Services (HHS).
This framework outlines USDA’s principles and approach to support States, localities, Tribes, and territories in responsibly using AI in the implementation and administration of USDA’s nutrition benefits and services. It responds to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
This playbook provides federal agencies with guidance on implementing AI in a way that is ethical, transparent, and aligned with public trust principles. Source: U.S. Department of Health and Human Services (HHS).
This study examines the adoption and implementation of AI chatbots in U.S. state governments, identifying key drivers, challenges, and best practices for public sector chatbot deployment.
In this interview, Code for America staff members share how their client success, data science, and qualitative research teams work together to consider the responsible deployment of artificial intelligence (AI) when responding to clients seeking assistance across three products.
This paper introduces a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout an organization's internal development lifecycle. Source: ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ. Source: Center for Security and Emerging Technology (CSET).
Automated decision systems (ADS) are increasingly used in government decision-making but lack clear definitions, oversight, and accountability mechanisms.