These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace and in valuing workers as the essential resources they are.
The Digital Benefit Network's Digital Identity Community of Practice held a session to hear from civil rights technologists and human-centered design practitioners on ways to ensure program security while promoting equity, enabling accessibility, and minimizing bias.
This internal glossary defines key terms and concepts related to automating enrollment proofs for public benefits programs to support shared understanding among product and policy teams.
This paper explores how legacy procurement processes in U.S. cities shape the acquisition and governance of AI tools, based on interviews with local government employees.
This blog post shares findings from the February 2025 AI Trust Study on Canada.ca, revealing how Canadians perceive government AI and what builds trust.
This report provides an overview of the task force’s work in assessing, guiding, and recommending policies for the safe, ethical, and effective use of generative AI across Alabama’s executive-branch agencies.
State of Alabama Generative Artificial Intelligence (GenAI) Task Force
A case study describing how a 90-day generative AI (GenAI) pilot using Google’s Gemini tool was conducted across state agencies to assess productivity, creativity, and responsible use in government work.
Colorado Governor's Office of Information Technology (OIT)
This research explores how software engineers work with generative machine learning models. The findings highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for intelligent user interfaces that help software engineers work effectively with generative code models.
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the cases into what went wrong and analyzes the legal arguments plaintiffs have used to challenge those systems in court.
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.