Companies have been developing and using artificial intelligence (AI) for decades. But adoption has grown rapidly since OpenAI released its large language model (LLM) chatbot, ChatGPT, in 2022. Open-source versions of these tools can help agencies streamline their processes and deepen their data analysis, all in a secure environment that won't risk exposing sensitive information.
Recent studies demonstrate that machine learning algorithms can discriminate on the basis of protected classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.
To help policymakers, regulators, legislators, and others characterize AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective.
Organisation for Economic Co-operation and Development (OECD)
Artificial intelligence promises exciting new opportunities for governments to make policy, deliver services, and engage with residents. But government procurement practices need to adapt if we are to ensure that rapidly evolving AI tools meet intended purposes, avoid bias, and minimize risks to people, organizations, and communities. This report lays out five distinct challenges related to procuring AI in government.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
This playbook provides federal agencies with guidance on implementing AI in a way that is ethical, transparent, and aligned with public trust principles.
U.S. Department of Health and Human Services (HHS)
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
The Center for Democracy and Technology's brief clarifies misconceptions about artificial intelligence (AI) in government services, emphasizing the need for precise definitions, awareness of AI's limitations, recognition of inherent biases, and acknowledgment of the significant resources required for effective implementation.
This study examines the adoption and implementation of AI chatbots in U.S. state governments, identifying key drivers, challenges, and best practices for public sector chatbot deployment.
In this interview, Code for America staff members share how client success, data science, and qualitative research teams work together to consider the responsible deployment of artificial intelligence (AI) in responding to clients who seek assistance with three products.
The State of California government published guidelines for the safe and effective use of Generative Artificial Intelligence (GenAI) within state agencies, in accordance with Governor Newsom's Executive Order N-12-23 on Generative Artificial Intelligence.
This report investigates how D.C. government agencies use automated decision-making (ADM) systems and highlights their risks to privacy, fairness, and accountability in public services.