The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were adopted in 2019; this webpage provides an overview of the principles and key terms.
Organisation for Economic Co-operation and Development (OECD)
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. Through this examination, the authors explore how predictive optimization can raise concerns that undermine the legitimacy of its use, and they challenge claims about predictive optimization's accuracy, efficiency, and fairness.
Artificial intelligence promises exciting new opportunities for the government to make policy, deliver services, and engage with residents. But government procurement practices need to adapt if we are to ensure that rapidly evolving AI tools meet intended purposes, avoid bias, and minimize risks to people, organizations, and communities. This report lays out five distinct challenges related to procuring AI in government.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
This framework outlines USDA’s principles and approach to support States, localities, Tribes, and territories in responsibly using AI in the implementation and administration of USDA’s nutrition benefits and services. This framework is in response to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
This playbook provides federal agencies with guidance on implementing AI in a way that is ethical, transparent, and aligned with public trust principles.
U.S. Department of Health and Human Services (HHS)
The Center for Democracy and Technology's brief clarifies misconceptions about artificial intelligence (AI) in government services, emphasizing the need for precise definitions, awareness of AI's limitations, recognition of inherent biases, and acknowledgment of the significant resources required for effective implementation.
This study examines the adoption and implementation of AI chatbots in U.S. state governments, identifying key drivers, challenges, and best practices for public sector chatbot deployment.
In this interview, Code for America staff members share how client success, data science, and qualitative research teams work together to consider the responsible deployment of artificial intelligence (AI) in responding to clients who seek assistance with three products.
A workshop led by Elham Ali on integrating the principles of human-centered design and equity into Artificial Intelligence (AI) design, use, and evaluation.
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Center for Security and Emerging Technology (CSET)
In May 2020, Stanford's HAI hosted a workshop on the performance of facial recognition technologies that included leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper this workshop produced seeks to answer key questions to improve understanding of this rapidly changing space.