This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
Recent studies demonstrate that machine learning algorithms can discriminate based on protected classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.
This research identifies five key obstacles that researchers, activists, and advocates face in efforts to open critical public conversations about AI's relationship with inequity and to advance needed policies.
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
In this interview, Code for America staff members share how the client success, data science, and qualitative research teams work together to consider the responsible deployment of artificial intelligence (AI) in responding to clients who seek assistance across three products.
This paper introduces a framework for algorithmic auditing that supports artificial intelligence system development end-to-end and is intended to be applied throughout the internal organizational development lifecycle.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
The State of California published guidelines for the safe and effective use of Generative Artificial Intelligence (GenAI) within state agencies, in accordance with Governor Newsom's Executive Order N-12-23 on Generative Artificial Intelligence.
The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were adopted in 2019; this webpage provides an overview of the principles and key terms.
Organisation for Economic Co-operation and Development (OECD)
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
Artificial intelligence promises exciting new opportunities for government to make policy, deliver services, and engage with residents. But government procurement practices need to adapt to ensure that rapidly evolving AI tools meet intended purposes, avoid bias, and minimize risks to people, organizations, and communities. This report lays out five distinct challenges related to procuring AI in government.
This playbook provides federal agencies with guidance on implementing AI in a way that is ethical, transparent, and aligned with public trust principles.
U.S. Department of Health and Human Services (HHS)